Dataset schema (column types as reported by the dataset viewer):

| column | type |
|---|---|
| instance_id | string (lengths 13–37) |
| text | string (lengths 3.08k–667k) |
| repo | string (35 distinct values) |
| base_commit | string (length 40) |
| problem_statement | string (lengths 10–256k) |
| hints_text | string (lengths 0–908k) |
| created_at | string (length 20) |
| patch | string (lengths 18–101M) |
| test_patch | string (1 distinct value) |
| version | string (1 distinct value) |
| FAIL_TO_PASS | string (1 distinct value) |
| PASS_TO_PASS | string (1 distinct value) |
| environment_setup_commit | string (1 distinct value) |

instance_id: ipython__ipython-7819

text: You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inspect requests inside a function call should be smarter about what they inspect.
Previously, `func(a, b, <shift-tab>` would give information on `func`; now it gives information on `b`, which is not especially helpful.
This is because we removed logic from the frontend to make it more language agnostic, and we have not yet reimplemented that logic elsewhere. For 3.1, we should make it at least as smart as 2.x was. The quick and dirty approach would be a regex; the proper way is tokenising the code.
Ping @mwaskom who brought this up on the mailing list.
</issue>
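The tokenising approach the issue suggests can be illustrated with the standard-library `tokenize` module. The sketch below is only a rough illustration of that idea, not IPython's actual fix; the `call_name_at_cursor` helper, its signature, and the single-line, cursor-at-end assumptions are mine.

```python
import tokenize
from io import StringIO


def call_name_at_cursor(code, cursor_pos):
    """Hypothetical helper: name of the innermost unclosed call before the cursor.

    For ``func(a, b, <cursor>`` this resolves to ``func`` rather than ``b``.
    Only handles a single line of input, with the cursor given as a column offset.
    """
    open_calls = []   # candidate callable for each '(' that is still unclosed
    last_name = None
    try:
        for tok_type, tok, (_, col), _, _ in tokenize.generate_tokens(
                StringIO(code).readline):
            if col >= cursor_pos:
                break
            if tok_type == tokenize.NAME:
                last_name = tok
            elif tok == '(':
                open_calls.append(last_name)
            elif tok == ')' and open_calls:
                open_calls.pop()
    except (tokenize.TokenError, SyntaxError):
        pass  # incomplete input is expected while the user is still typing
    for name in reversed(open_calls):
        if name:          # innermost unclosed call wins
            return name
    return last_name      # fall back to the last name before the cursor


print(call_name_at_cursor("func(a, b, ", 11))  # -> func
```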
<code>
[start of README.rst]
1 .. image:: https://img.shields.io/coveralls/ipython/ipython.svg
2 :target: https://coveralls.io/r/ipython/ipython?branch=master
3
4 .. image:: https://img.shields.io/pypi/dm/IPython.svg
5 :target: https://pypi.python.org/pypi/ipython
6
7 .. image:: https://img.shields.io/pypi/v/IPython.svg
8 :target: https://pypi.python.org/pypi/ipython
9
10 .. image:: https://img.shields.io/travis/ipython/ipython.svg
11 :target: https://travis-ci.org/ipython/ipython
12
13
14 ===========================================
15 IPython: Productive Interactive Computing
16 ===========================================
17
18 Overview
19 ========
20
21 Welcome to IPython. Our full documentation is available on `our website
22 <http://ipython.org/documentation.html>`_; if you downloaded a built source
23 distribution the ``docs/source`` directory contains the plaintext version of
24 these manuals. If you have Sphinx installed, you can build them by typing
25 ``cd docs; make html`` for local browsing.
26
27
28 Dependencies and supported Python versions
29 ==========================================
30
31 For full details, see the installation section of the manual. The basic parts
32 of IPython only need the Python standard library, but much of its more advanced
33 functionality requires extra packages.
34
35 Officially, IPython requires Python version 2.7, or 3.3 and above.
36 IPython 1.x is the last IPython version to support Python 2.6 and 3.2.
37
38
39 Instant running
40 ===============
41
42 You can run IPython from this directory without even installing it system-wide
43 by typing at the terminal::
44
45 $ python -m IPython
46
47
48 Development installation
49 ========================
50
51 If you want to hack on certain parts, e.g. the IPython notebook, in a clean
52 environment (such as a virtualenv) you can use ``pip`` to grab the necessary
53 dependencies quickly::
54
55 $ git clone --recursive https://github.com/ipython/ipython.git
56 $ cd ipython
57 $ pip install -e ".[notebook]" --user
58
59 This installs the necessary packages and symlinks IPython into your current
60 environment so that you can work on your local repo copy and run it from anywhere::
61
62 $ ipython notebook
63
64 The same process applies for other parts, such as the qtconsole (the
65 ``extras_require`` attribute in the setup.py file lists all the possibilities).
66
67 Git Hooks and Submodules
68 ************************
69
70 IPython now uses git submodules to ship its javascript dependencies.
71 If you run IPython from git master, you may need to update submodules once in a while with::
72
73 $ git submodule update
74
75 or::
76
77 $ python setup.py submodule
78
79 We have some git hooks for helping keep your submodules always in sync,
80 see our ``git-hooks`` directory for more info.
81
[end of README.rst]
[start of IPython/core/ultratb.py]
1 # -*- coding: utf-8 -*-
2 """
3 Verbose and colourful traceback formatting.
4
5 **ColorTB**
6
7 I've always found it a bit hard to visually parse tracebacks in Python. The
8 ColorTB class is a solution to that problem. It colors the different parts of a
9 traceback in a manner similar to what you would expect from a syntax-highlighting
10 text editor.
11
12 Installation instructions for ColorTB::
13
14 import sys,ultratb
15 sys.excepthook = ultratb.ColorTB()
16
17 **VerboseTB**
18
19 I've also included a port of Ka-Ping Yee's "cgitb.py" that produces all kinds
20 of useful info when a traceback occurs. Ping originally had it spit out HTML
21 and intended it for CGI programmers, but why should they have all the fun? I
22 altered it to spit out colored text to the terminal. It's a bit overwhelming,
23 but kind of neat, and maybe useful for long-running programs that you believe
24 are bug-free. If a crash *does* occur in that type of program you want details.
25 Give it a shot--you'll love it or you'll hate it.
26
27 .. note::
28
29 The Verbose mode prints the variables currently visible where the exception
30 happened (shortening their strings if too long). This can potentially be
31 very slow, if you happen to have a huge data structure whose string
32 representation is complex to compute. Your computer may appear to freeze for
33 a while with cpu usage at 100%. If this occurs, you can cancel the traceback
34 with Ctrl-C (maybe hitting it more than once).
35
36 If you encounter this kind of situation often, you may want to use the
37 Verbose_novars mode instead of the regular Verbose, which avoids formatting
38 variables (but otherwise includes the information and context given by
39 Verbose).
40
41
42 Installation instructions for VerboseTB::
43
44 import sys,ultratb
45 sys.excepthook = ultratb.VerboseTB()
46
47 Note: Much of the code in this module was lifted verbatim from the standard
48 library module 'traceback.py' and Ka-Ping Yee's 'cgitb.py'.
49
50 Color schemes
51 -------------
52
53 The colors are defined in the class TBTools through the use of the
54 ColorSchemeTable class. Currently the following exist:
55
56 - NoColor: allows all of this module to be used in any terminal (the color
57 escapes are just dummy blank strings).
58
59 - Linux: is meant to look good in a terminal like the Linux console (black
60 or very dark background).
61
62 - LightBG: similar to Linux but swaps dark/light colors to be more readable
63 in light background terminals.
64
65 You can implement other color schemes easily, the syntax is fairly
66 self-explanatory. Please send back new schemes you develop to the author for
67 possible inclusion in future releases.
68
69 Inheritance diagram:
70
71 .. inheritance-diagram:: IPython.core.ultratb
72 :parts: 3
73 """
74
75 #*****************************************************************************
76 # Copyright (C) 2001 Nathaniel Gray <n8gray@caltech.edu>
77 # Copyright (C) 2001-2004 Fernando Perez <fperez@colorado.edu>
78 #
79 # Distributed under the terms of the BSD License. The full license is in
80 # the file COPYING, distributed as part of this software.
81 #*****************************************************************************
82
83 from __future__ import unicode_literals
84 from __future__ import print_function
85
86 import inspect
87 import keyword
88 import linecache
89 import os
90 import pydoc
91 import re
92 import sys
93 import time
94 import tokenize
95 import traceback
96 import types
97
98 try: # Python 2
99 generate_tokens = tokenize.generate_tokens
100 except AttributeError: # Python 3
101 generate_tokens = tokenize.tokenize
102
103 # For purposes of monkeypatching inspect to fix a bug in it.
104 from inspect import getsourcefile, getfile, getmodule, \
105 ismodule, isclass, ismethod, isfunction, istraceback, isframe, iscode
106
107 # IPython's own modules
108 # Modified pdb which doesn't damage IPython's readline handling
109 from IPython import get_ipython
110 from IPython.core import debugger
111 from IPython.core.display_trap import DisplayTrap
112 from IPython.core.excolors import exception_colors
113 from IPython.utils import PyColorize
114 from IPython.utils import io
115 from IPython.utils import openpy
116 from IPython.utils import path as util_path
117 from IPython.utils import py3compat
118 from IPython.utils import ulinecache
119 from IPython.utils.data import uniq_stable
120 from IPython.utils.warn import info, error
121
122 # Globals
123 # amount of space to put line numbers before verbose tracebacks
124 INDENT_SIZE = 8
125
126 # Default color scheme. This is used, for example, by the traceback
127 # formatter. When running in an actual IPython instance, the user's rc.colors
128 # value is used, but having a module global makes this functionality available
129 # to users of ultratb who are NOT running inside ipython.
130 DEFAULT_SCHEME = 'NoColor'
131
132 # ---------------------------------------------------------------------------
133 # Code begins
134
135 # Utility functions
136 def inspect_error():
137 """Print a message about internal inspect errors.
138
139 These are unfortunately quite common."""
140
141 error('Internal Python error in the inspect module.\n'
142 'Below is the traceback from this internal error.\n')
143
144
145 # This function is a monkeypatch we apply to the Python inspect module. We have
146 # now found when it's needed (see discussion on issue gh-1456), and we have a
147 # test case (IPython.core.tests.test_ultratb.ChangedPyFileTest) that fails if
148 # the monkeypatch is not applied. TK, Aug 2012.
149 def findsource(object):
150 """Return the entire source file and starting line number for an object.
151
152 The argument may be a module, class, method, function, traceback, frame,
153 or code object. The source code is returned as a list of all the lines
154 in the file and the line number indexes a line in that list. An IOError
155 is raised if the source code cannot be retrieved.
156
157 FIXED version with which we monkeypatch the stdlib to work around a bug."""
158
159 file = getsourcefile(object) or getfile(object)
160 # If the object is a frame, then trying to get the globals dict from its
161 # module won't work. Instead, the frame object itself has the globals
162 # dictionary.
163 globals_dict = None
164 if inspect.isframe(object):
165 # XXX: can this ever be false?
166 globals_dict = object.f_globals
167 else:
168 module = getmodule(object, file)
169 if module:
170 globals_dict = module.__dict__
171 lines = linecache.getlines(file, globals_dict)
172 if not lines:
173 raise IOError('could not get source code')
174
175 if ismodule(object):
176 return lines, 0
177
178 if isclass(object):
179 name = object.__name__
180 pat = re.compile(r'^(\s*)class\s*' + name + r'\b')
181 # make some effort to find the best matching class definition:
182 # use the one with the least indentation, which is the one
183 # that's most probably not inside a function definition.
184 candidates = []
185 for i in range(len(lines)):
186 match = pat.match(lines[i])
187 if match:
188 # if it's at toplevel, it's already the best one
189 if lines[i][0] == 'c':
190 return lines, i
191 # else add whitespace to candidate list
192 candidates.append((match.group(1), i))
193 if candidates:
194 # this will sort by whitespace, and by line number,
195 # less whitespace first
196 candidates.sort()
197 return lines, candidates[0][1]
198 else:
199 raise IOError('could not find class definition')
200
201 if ismethod(object):
202 object = object.__func__
203 if isfunction(object):
204 object = object.__code__
205 if istraceback(object):
206 object = object.tb_frame
207 if isframe(object):
208 object = object.f_code
209 if iscode(object):
210 if not hasattr(object, 'co_firstlineno'):
211 raise IOError('could not find function definition')
212 pat = re.compile(r'^(\s*def\s)|(.*(?<!\w)lambda(:|\s))|^(\s*@)')
213 pmatch = pat.match
214 # fperez - fix: sometimes, co_firstlineno can give a number larger than
215 # the length of lines, which causes an error. Safeguard against that.
216 lnum = min(object.co_firstlineno, len(lines)) - 1
217 while lnum > 0:
218 if pmatch(lines[lnum]): break
219 lnum -= 1
220
221 return lines, lnum
222 raise IOError('could not find code object')
223
224
225 # Monkeypatch inspect to apply our bugfix.
226 def with_patch_inspect(f):
227 """decorator for monkeypatching inspect.findsource"""
228
229 def wrapped(*args, **kwargs):
230 save_findsource = inspect.findsource
231 inspect.findsource = findsource
232 try:
233 return f(*args, **kwargs)
234 finally:
235 inspect.findsource = save_findsource
236
237 return wrapped
238
239
240 def fix_frame_records_filenames(records):
241 """Try to fix the filenames in each record from inspect.getinnerframes().
242
243 Particularly, modules loaded from within zip files have useless filenames
244 attached to their code object, and inspect.getinnerframes() just uses it.
245 """
246 fixed_records = []
247 for frame, filename, line_no, func_name, lines, index in records:
248 # Look inside the frame's globals dictionary for __file__,
249 # which should be better. However, keep Cython filenames since
250 # we prefer the source filenames over the compiled .so file.
251 filename = py3compat.cast_unicode_py2(filename, "utf-8")
252 if not filename.endswith(('.pyx', '.pxd', '.pxi')):
253 better_fn = frame.f_globals.get('__file__', None)
254 if isinstance(better_fn, str):
255 # Check the type just in case someone did something weird with
256 # __file__. It might also be None if the error occurred during
257 # import.
258 filename = better_fn
259 fixed_records.append((frame, filename, line_no, func_name, lines, index))
260 return fixed_records
261
262
263 @with_patch_inspect
264 def _fixed_getinnerframes(etb, context=1, tb_offset=0):
265 LNUM_POS, LINES_POS, INDEX_POS = 2, 4, 5
266
267 records = fix_frame_records_filenames(inspect.getinnerframes(etb, context))
268 # If the error is at the console, don't build any context, since it would
269 # otherwise produce 5 blank lines printed out (there is no file at the
270 # console)
271 rec_check = records[tb_offset:]
272 try:
273 rname = rec_check[0][1]
274 if rname == '<ipython console>' or rname.endswith('<string>'):
275 return rec_check
276 except IndexError:
277 pass
278
279 aux = traceback.extract_tb(etb)
280 assert len(records) == len(aux)
281 for i, (file, lnum, _, _) in zip(range(len(records)), aux):
282 maybeStart = lnum - 1 - context // 2
283 start = max(maybeStart, 0)
284 end = start + context
285 lines = ulinecache.getlines(file)[start:end]
286 buf = list(records[i])
287 buf[LNUM_POS] = lnum
288 buf[INDEX_POS] = lnum - 1 - start
289 buf[LINES_POS] = lines
290 records[i] = tuple(buf)
291 return records[tb_offset:]
292
293 # Helper function -- largely belongs to VerboseTB, but we need the same
294 # functionality to produce a pseudo verbose TB for SyntaxErrors, so that they
295 # can be recognized properly by ipython.el's py-traceback-line-re
296 # (SyntaxErrors have to be treated specially because they have no traceback)
297
298 _parser = PyColorize.Parser()
299
300
301 def _format_traceback_lines(lnum, index, lines, Colors, lvals=None, scheme=None):
302 numbers_width = INDENT_SIZE - 1
303 res = []
304 i = lnum - index
305
306 # This lets us get fully syntax-highlighted tracebacks.
307 if scheme is None:
308 ipinst = get_ipython()
309 if ipinst is not None:
310 scheme = ipinst.colors
311 else:
312 scheme = DEFAULT_SCHEME
313
314 _line_format = _parser.format2
315
316 for line in lines:
317 line = py3compat.cast_unicode(line)
318
319 new_line, err = _line_format(line, 'str', scheme)
320 if not err: line = new_line
321
322 if i == lnum:
323 # This is the line with the error
324 pad = numbers_width - len(str(i))
325 if pad >= 3:
326 marker = '-' * (pad - 3) + '-> '
327 elif pad == 2:
328 marker = '> '
329 elif pad == 1:
330 marker = '>'
331 else:
332 marker = ''
333 num = marker + str(i)
334 line = '%s%s%s %s%s' % (Colors.linenoEm, num,
335 Colors.line, line, Colors.Normal)
336 else:
337 num = '%*s' % (numbers_width, i)
338 line = '%s%s%s %s' % (Colors.lineno, num,
339 Colors.Normal, line)
340
341 res.append(line)
342 if lvals and i == lnum:
343 res.append(lvals + '\n')
344 i = i + 1
345 return res
346
347
348 #---------------------------------------------------------------------------
349 # Module classes
350 class TBTools(object):
351 """Basic tools used by all traceback printer classes."""
352
353 # Number of frames to skip when reporting tracebacks
354 tb_offset = 0
355
356 def __init__(self, color_scheme='NoColor', call_pdb=False, ostream=None):
357 # Whether to call the interactive pdb debugger after printing
358 # tracebacks or not
359 self.call_pdb = call_pdb
360
361 # Output stream to write to. Note that we store the original value in
362 # a private attribute and then make the public ostream a property, so
363 # that we can delay accessing io.stdout until runtime. The way
364 # things are written now, the io.stdout object is dynamically managed
365 # so a reference to it should NEVER be stored statically. This
366 # property approach confines this detail to a single location, and all
367 # subclasses can simply access self.ostream for writing.
368 self._ostream = ostream
369
370 # Create color table
371 self.color_scheme_table = exception_colors()
372
373 self.set_colors(color_scheme)
374 self.old_scheme = color_scheme # save initial value for toggles
375
376 if call_pdb:
377 self.pdb = debugger.Pdb(self.color_scheme_table.active_scheme_name)
378 else:
379 self.pdb = None
380
381 def _get_ostream(self):
382 """Output stream that exceptions are written to.
383
384 Valid values are:
385
386 - None: the default, which means that IPython will dynamically resolve
387 to io.stdout. This ensures compatibility with most tools, including
388 Windows (where plain stdout doesn't recognize ANSI escapes).
389
390 - Any object with 'write' and 'flush' attributes.
391 """
392 return io.stdout if self._ostream is None else self._ostream
393
394 def _set_ostream(self, val):
395 assert val is None or (hasattr(val, 'write') and hasattr(val, 'flush'))
396 self._ostream = val
397
398 ostream = property(_get_ostream, _set_ostream)
399
400 def set_colors(self, *args, **kw):
401 """Shorthand access to the color table scheme selector method."""
402
403 # Set own color table
404 self.color_scheme_table.set_active_scheme(*args, **kw)
405 # for convenience, set Colors to the active scheme
406 self.Colors = self.color_scheme_table.active_colors
407 # Also set colors of debugger
408 if hasattr(self, 'pdb') and self.pdb is not None:
409 self.pdb.set_colors(*args, **kw)
410
411 def color_toggle(self):
412 """Toggle between the currently active color scheme and NoColor."""
413
414 if self.color_scheme_table.active_scheme_name == 'NoColor':
415 self.color_scheme_table.set_active_scheme(self.old_scheme)
416 self.Colors = self.color_scheme_table.active_colors
417 else:
418 self.old_scheme = self.color_scheme_table.active_scheme_name
419 self.color_scheme_table.set_active_scheme('NoColor')
420 self.Colors = self.color_scheme_table.active_colors
421
422 def stb2text(self, stb):
423 """Convert a structured traceback (a list) to a string."""
424 return '\n'.join(stb)
425
426 def text(self, etype, value, tb, tb_offset=None, context=5):
427 """Return formatted traceback.
428
429 Subclasses may override this if they add extra arguments.
430 """
431 tb_list = self.structured_traceback(etype, value, tb,
432 tb_offset, context)
433 return self.stb2text(tb_list)
434
435 def structured_traceback(self, etype, evalue, tb, tb_offset=None,
436 context=5, mode=None):
437 """Return a list of traceback frames.
438
439 Must be implemented by each class.
440 """
441 raise NotImplementedError()
442
443
444 #---------------------------------------------------------------------------
445 class ListTB(TBTools):
446 """Print traceback information from a traceback list, with optional color.
447
448 Calling requires 3 arguments: (etype, evalue, elist)
449 as would be obtained by::
450
451 etype, evalue, tb = sys.exc_info()
452 if tb:
453 elist = traceback.extract_tb(tb)
454 else:
455 elist = None
456
457 It can thus be used by programs which need to process the traceback before
458 printing (such as console replacements based on the code module from the
459 standard library).
460
461 Because they are meant to be called without a full traceback (only a
462 list), instances of this class can't call the interactive pdb debugger."""
463
464 def __init__(self, color_scheme='NoColor', call_pdb=False, ostream=None):
465 TBTools.__init__(self, color_scheme=color_scheme, call_pdb=call_pdb,
466 ostream=ostream)
467
468 def __call__(self, etype, value, elist):
469 self.ostream.flush()
470 self.ostream.write(self.text(etype, value, elist))
471 self.ostream.write('\n')
472
473 def structured_traceback(self, etype, value, elist, tb_offset=None,
474 context=5):
475 """Return a color formatted string with the traceback info.
476
477 Parameters
478 ----------
479 etype : exception type
480 Type of the exception raised.
481
482 value : object
483 Data stored in the exception
484
485 elist : list
486 List of frames, see class docstring for details.
487
488 tb_offset : int, optional
489 Number of frames in the traceback to skip. If not given, the
490 instance value is used (set in constructor).
491
492 context : int, optional
493 Number of lines of context information to print.
494
495 Returns
496 -------
497 String with formatted exception.
498 """
499 tb_offset = self.tb_offset if tb_offset is None else tb_offset
500 Colors = self.Colors
501 out_list = []
502 if elist:
503
504 if tb_offset and len(elist) > tb_offset:
505 elist = elist[tb_offset:]
506
507 out_list.append('Traceback %s(most recent call last)%s:' %
508 (Colors.normalEm, Colors.Normal) + '\n')
509 out_list.extend(self._format_list(elist))
510 # The exception info should be a single entry in the list.
511 lines = ''.join(self._format_exception_only(etype, value))
512 out_list.append(lines)
513
514 # Note: this code originally read:
515
516 ## for line in lines[:-1]:
517 ## out_list.append(" "+line)
518 ## out_list.append(lines[-1])
519
520 # This means it was indenting everything but the last line by a little
521 # bit. I've disabled this for now, but if we see ugliness somewhere we
522 # can restore it.
523
524 return out_list
525
526 def _format_list(self, extracted_list):
527 """Format a list of traceback entry tuples for printing.
528
529 Given a list of tuples as returned by extract_tb() or
530 extract_stack(), return a list of strings ready for printing.
531 Each string in the resulting list corresponds to the item with the
532 same index in the argument list. Each string ends in a newline;
533 the strings may contain internal newlines as well, for those items
534 whose source text line is not None.
535
536 Lifted almost verbatim from traceback.py
537 """
538
539 Colors = self.Colors
540 list = []
541 for filename, lineno, name, line in extracted_list[:-1]:
542 item = ' File %s"%s"%s, line %s%d%s, in %s%s%s\n' % \
543 (Colors.filename, py3compat.cast_unicode_py2(filename, "utf-8"), Colors.Normal,
544 Colors.lineno, lineno, Colors.Normal,
545 Colors.name, py3compat.cast_unicode_py2(name, "utf-8"), Colors.Normal)
546 if line:
547 item += ' %s\n' % line.strip()
548 list.append(item)
549 # Emphasize the last entry
550 filename, lineno, name, line = extracted_list[-1]
551 item = '%s File %s"%s"%s, line %s%d%s, in %s%s%s%s\n' % \
552 (Colors.normalEm,
553 Colors.filenameEm, py3compat.cast_unicode_py2(filename, "utf-8"), Colors.normalEm,
554 Colors.linenoEm, lineno, Colors.normalEm,
555 Colors.nameEm, py3compat.cast_unicode_py2(name, "utf-8"), Colors.normalEm,
556 Colors.Normal)
557 if line:
558 item += '%s %s%s\n' % (Colors.line, line.strip(),
559 Colors.Normal)
560 list.append(item)
561 return list
562
563 def _format_exception_only(self, etype, value):
564 """Format the exception part of a traceback.
565
566 The arguments are the exception type and value such as given by
567 sys.exc_info()[:2]. The return value is a list of strings, each ending
568 in a newline. Normally, the list contains a single string; however,
569 for SyntaxError exceptions, it contains several lines that (when
570 printed) display detailed information about where the syntax error
571         occurred. The message indicating which exception occurred is
572         always the last string in the list.
573
574 Also lifted nearly verbatim from traceback.py
575 """
576 have_filedata = False
577 Colors = self.Colors
578 list = []
579 stype = Colors.excName + etype.__name__ + Colors.Normal
580 if value is None:
581 # Not sure if this can still happen in Python 2.6 and above
582 list.append(py3compat.cast_unicode(stype) + '\n')
583 else:
584 if issubclass(etype, SyntaxError):
585 have_filedata = True
586 if not value.filename: value.filename = "<string>"
587 if value.lineno:
588 lineno = value.lineno
589 textline = ulinecache.getline(value.filename, value.lineno)
590 else:
591 lineno = 'unknown'
592 textline = ''
593 list.append('%s File %s"%s"%s, line %s%s%s\n' % \
594 (Colors.normalEm,
595 Colors.filenameEm, py3compat.cast_unicode(value.filename), Colors.normalEm,
596 Colors.linenoEm, lineno, Colors.Normal ))
597 if textline == '':
598 textline = py3compat.cast_unicode(value.text, "utf-8")
599
600 if textline is not None:
601 i = 0
602 while i < len(textline) and textline[i].isspace():
603 i += 1
604 list.append('%s %s%s\n' % (Colors.line,
605 textline.strip(),
606 Colors.Normal))
607 if value.offset is not None:
608 s = ' '
609 for c in textline[i:value.offset - 1]:
610 if c.isspace():
611 s += c
612 else:
613 s += ' '
614 list.append('%s%s^%s\n' % (Colors.caret, s,
615 Colors.Normal))
616
617 try:
618 s = value.msg
619 except Exception:
620 s = self._some_str(value)
621 if s:
622 list.append('%s%s:%s %s\n' % (str(stype), Colors.excName,
623 Colors.Normal, s))
624 else:
625 list.append('%s\n' % str(stype))
626
627 # sync with user hooks
628 if have_filedata:
629 ipinst = get_ipython()
630 if ipinst is not None:
631 ipinst.hooks.synchronize_with_editor(value.filename, value.lineno, 0)
632
633 return list
634
635 def get_exception_only(self, etype, value):
636 """Only print the exception type and message, without a traceback.
637
638 Parameters
639 ----------
640 etype : exception type
641 value : exception value
642 """
643 return ListTB.structured_traceback(self, etype, value, [])
644
645 def show_exception_only(self, etype, evalue):
646 """Only print the exception type and message, without a traceback.
647
648 Parameters
649 ----------
650 etype : exception type
651 value : exception value
652 """
653 # This method needs to use __call__ from *this* class, not the one from
654 # a subclass whose signature or behavior may be different
655 ostream = self.ostream
656 ostream.flush()
657 ostream.write('\n'.join(self.get_exception_only(etype, evalue)))
658 ostream.flush()
659
660 def _some_str(self, value):
661 # Lifted from traceback.py
662 try:
663 return str(value)
664 except:
665 return '<unprintable %s object>' % type(value).__name__
666
667
668 #----------------------------------------------------------------------------
669 class VerboseTB(TBTools):
670 """A port of Ka-Ping Yee's cgitb.py module that outputs color text instead
671 of HTML. Requires inspect and pydoc. Crazy, man.
672
673 Modified version which optionally strips the topmost entries from the
674 traceback, to be used with alternate interpreters (because their own code
675 would appear in the traceback)."""
676
677 def __init__(self, color_scheme='Linux', call_pdb=False, ostream=None,
678 tb_offset=0, long_header=False, include_vars=True,
679 check_cache=None):
680 """Specify traceback offset, headers and color scheme.
681
682 Define how many frames to drop from the tracebacks. Calling it with
683 tb_offset=1 allows use of this handler in interpreters which will have
684 their own code at the top of the traceback (VerboseTB will first
685 remove that frame before printing the traceback info)."""
686 TBTools.__init__(self, color_scheme=color_scheme, call_pdb=call_pdb,
687 ostream=ostream)
688 self.tb_offset = tb_offset
689 self.long_header = long_header
690 self.include_vars = include_vars
691 # By default we use linecache.checkcache, but the user can provide a
692 # different check_cache implementation. This is used by the IPython
693 # kernel to provide tracebacks for interactive code that is cached,
694 # by a compiler instance that flushes the linecache but preserves its
695 # own code cache.
696 if check_cache is None:
697 check_cache = linecache.checkcache
698 self.check_cache = check_cache
699
700 def format_records(self, records):
701 Colors = self.Colors # just a shorthand + quicker name lookup
702 ColorsNormal = Colors.Normal # used a lot
703 col_scheme = self.color_scheme_table.active_scheme_name
704 indent = ' ' * INDENT_SIZE
705 em_normal = '%s\n%s%s' % (Colors.valEm, indent, ColorsNormal)
706 undefined = '%sundefined%s' % (Colors.em, ColorsNormal)
707 frames = []
708 # build some color string templates outside these nested loops
709 tpl_link = '%s%%s%s' % (Colors.filenameEm, ColorsNormal)
710 tpl_call = 'in %s%%s%s%%s%s' % (Colors.vName, Colors.valEm,
711 ColorsNormal)
712 tpl_call_fail = 'in %s%%s%s(***failed resolving arguments***)%s' % \
713 (Colors.vName, Colors.valEm, ColorsNormal)
714 tpl_local_var = '%s%%s%s' % (Colors.vName, ColorsNormal)
715 tpl_global_var = '%sglobal%s %s%%s%s' % (Colors.em, ColorsNormal,
716 Colors.vName, ColorsNormal)
717 tpl_name_val = '%%s %s= %%s%s' % (Colors.valEm, ColorsNormal)
718
719 tpl_line = '%s%%s%s %%s' % (Colors.lineno, ColorsNormal)
720 tpl_line_em = '%s%%s%s %%s%s' % (Colors.linenoEm, Colors.line,
721 ColorsNormal)
722
723 abspath = os.path.abspath
724 for frame, file, lnum, func, lines, index in records:
725 #print '*** record:',file,lnum,func,lines,index # dbg
726 if not file:
727 file = '?'
728 elif file.startswith(str("<")) and file.endswith(str(">")):
729 # Not a real filename, no problem...
730 pass
731 elif not os.path.isabs(file):
732 # Try to make the filename absolute by trying all
733 # sys.path entries (which is also what linecache does)
734 for dirname in sys.path:
735 try:
736 fullname = os.path.join(dirname, file)
737 if os.path.isfile(fullname):
738 file = os.path.abspath(fullname)
739 break
740 except Exception:
741 # Just in case that sys.path contains very
742 # strange entries...
743 pass
744
745 file = py3compat.cast_unicode(file, util_path.fs_encoding)
746 link = tpl_link % file
747 args, varargs, varkw, locals = inspect.getargvalues(frame)
748
749 if func == '?':
750 call = ''
751 else:
752 # Decide whether to include variable details or not
753 var_repr = self.include_vars and eqrepr or nullrepr
754 try:
755 call = tpl_call % (func, inspect.formatargvalues(args,
756 varargs, varkw,
757 locals, formatvalue=var_repr))
758 except KeyError:
759 # This happens in situations like errors inside generator
760 # expressions, where local variables are listed in the
761 # line, but can't be extracted from the frame. I'm not
762 # 100% sure this isn't actually a bug in inspect itself,
763 # but since there's no info for us to compute with, the
764 # best we can do is report the failure and move on. Here
765 # we must *not* call any traceback construction again,
766 # because that would mess up use of %debug later on. So we
767 # simply report the failure and move on. The only
768 # limitation will be that this frame won't have locals
769 # listed in the call signature. Quite subtle problem...
770 # I can't think of a good way to validate this in a unit
771 # test, but running a script consisting of:
772 # dict( (k,v.strip()) for (k,v) in range(10) )
773 # will illustrate the error, if this exception catch is
774 # disabled.
775 call = tpl_call_fail % func
776
777 # Don't attempt to tokenize binary files.
778 if file.endswith(('.so', '.pyd', '.dll')):
779 frames.append('%s %s\n' % (link, call))
780 continue
781 elif file.endswith(('.pyc', '.pyo')):
782 # Look up the corresponding source file.
783 file = openpy.source_from_cache(file)
784
785 def linereader(file=file, lnum=[lnum], getline=ulinecache.getline):
786 line = getline(file, lnum[0])
787 lnum[0] += 1
788 return line
789
790 # Build the list of names on this line of code where the exception
791 # occurred.
792 try:
793 names = []
794 name_cont = False
795
796 for token_type, token, start, end, line in generate_tokens(linereader):
797 # build composite names
798 if token_type == tokenize.NAME and token not in keyword.kwlist:
799 if name_cont:
800 # Continuation of a dotted name
801 try:
802 names[-1].append(token)
803 except IndexError:
804 names.append([token])
805 name_cont = False
806 else:
807 # Regular new names. We append everything, the caller
808 # will be responsible for pruning the list later. It's
809 # very tricky to try to prune as we go, b/c composite
810 # names can fool us. The pruning at the end is easy
811 # to do (or the caller can print a list with repeated
812                             # names if so desired).
813 names.append([token])
814 elif token == '.':
815 name_cont = True
816 elif token_type == tokenize.NEWLINE:
817 break
818
819 except (IndexError, UnicodeDecodeError, SyntaxError):
820 # signals exit of tokenizer
821 # SyntaxError can occur if the file is not actually Python
822 # - see gh-6300
823 pass
824 except tokenize.TokenError as msg:
825 _m = ("An unexpected error occurred while tokenizing input\n"
826 "The following traceback may be corrupted or invalid\n"
827 "The error message is: %s\n" % msg)
828 error(_m)
829
830 # Join composite names (e.g. "dict.fromkeys")
831 names = ['.'.join(n) for n in names]
832 # prune names list of duplicates, but keep the right order
833 unique_names = uniq_stable(names)
834
835 # Start loop over vars
836 lvals = []
837 if self.include_vars:
838 for name_full in unique_names:
839 name_base = name_full.split('.', 1)[0]
840 if name_base in frame.f_code.co_varnames:
841 if name_base in locals:
842 try:
843 value = repr(eval(name_full, locals))
844 except:
845 value = undefined
846 else:
847 value = undefined
848 name = tpl_local_var % name_full
849 else:
850 if name_base in frame.f_globals:
851 try:
852 value = repr(eval(name_full, frame.f_globals))
853 except:
854 value = undefined
855 else:
856 value = undefined
857 name = tpl_global_var % name_full
858 lvals.append(tpl_name_val % (name, value))
859 if lvals:
860 lvals = '%s%s' % (indent, em_normal.join(lvals))
861 else:
862 lvals = ''
863
864 level = '%s %s\n' % (link, call)
865
866 if index is None:
867 frames.append(level)
868 else:
869 frames.append('%s%s' % (level, ''.join(
870 _format_traceback_lines(lnum, index, lines, Colors, lvals,
871 col_scheme))))
872
873 return frames
874
875 def prepare_chained_exception_message(self, cause):
876 direct_cause = "\nThe above exception was the direct cause of the following exception:\n"
877 exception_during_handling = "\nDuring handling of the above exception, another exception occurred:\n"
878
879 if cause:
880 message = [[direct_cause]]
881 else:
882 message = [[exception_during_handling]]
883 return message
884
885 def prepare_header(self, etype, long_version=False):
886 colors = self.Colors # just a shorthand + quicker name lookup
887 colorsnormal = colors.Normal # used a lot
888 exc = '%s%s%s' % (colors.excName, etype, colorsnormal)
889 if long_version:
890 # Header with the exception type, python version, and date
891 pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable
892 date = time.ctime(time.time())
893
894 head = '%s%s%s\n%s%s%s\n%s' % (colors.topline, '-' * 75, colorsnormal,
895 exc, ' ' * (75 - len(str(etype)) - len(pyver)),
896 pyver, date.rjust(75) )
897 head += "\nA problem occurred executing Python code. Here is the sequence of function" \
898 "\ncalls leading up to the error, with the most recent (innermost) call last."
899 else:
900 # Simplified header
901 head = '%s%s' % (exc, 'Traceback (most recent call last)'. \
902 rjust(75 - len(str(etype))) )
903
904 return head
905
906 def format_exception(self, etype, evalue):
907 colors = self.Colors # just a shorthand + quicker name lookup
908 colorsnormal = colors.Normal # used a lot
909 indent = ' ' * INDENT_SIZE
910 # Get (safely) a string form of the exception info
911 try:
912 etype_str, evalue_str = map(str, (etype, evalue))
913 except:
914 # User exception is improperly defined.
915 etype, evalue = str, sys.exc_info()[:2]
916 etype_str, evalue_str = map(str, (etype, evalue))
917 # ... and format it
918 exception = ['%s%s%s: %s' % (colors.excName, etype_str,
919 colorsnormal, py3compat.cast_unicode(evalue_str))]
920
921 if (not py3compat.PY3) and type(evalue) is types.InstanceType:
922 try:
923 names = [w for w in dir(evalue) if isinstance(w, py3compat.string_types)]
924 except:
925 # Every now and then, an object with funny internals blows up
926 # when dir() is called on it. We do the best we can to report
927 # the problem and continue
928 _m = '%sException reporting error (object with broken dir())%s:'
929 exception.append(_m % (colors.excName, colorsnormal))
930 etype_str, evalue_str = map(str, sys.exc_info()[:2])
931 exception.append('%s%s%s: %s' % (colors.excName, etype_str,
932 colorsnormal, py3compat.cast_unicode(evalue_str)))
933 names = []
934 for name in names:
935 value = text_repr(getattr(evalue, name))
936 exception.append('\n%s%s = %s' % (indent, name, value))
937
938 return exception
939
940 def format_exception_as_a_whole(self, etype, evalue, etb, number_of_lines_of_context, tb_offset):
941 # some locals
942 try:
943 etype = etype.__name__
944 except AttributeError:
945 pass
946
947 tb_offset = self.tb_offset if tb_offset is None else tb_offset
948 head = self.prepare_header(etype, self.long_header)
949 records = self.get_records(etb, number_of_lines_of_context, tb_offset)
950
951 frames = self.format_records(records)
952 if records is None:
953 return ""
954
955 formatted_exception = self.format_exception(etype, evalue)
956 if records:
957 filepath, lnum = records[-1][1:3]
958 filepath = os.path.abspath(filepath)
959 ipinst = get_ipython()
960 if ipinst is not None:
961 ipinst.hooks.synchronize_with_editor(filepath, lnum, 0)
962
963 return [[head] + frames + [''.join(formatted_exception[0])]]
964
965 def get_records(self, etb, number_of_lines_of_context, tb_offset):
966 try:
967 # Try the default getinnerframes and Alex's: Alex's fixes some
968 # problems, but it generates empty tracebacks for console errors
969 # (5 blanks lines) where none should be returned.
970 return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)
971 except:
972 # FIXME: I've been getting many crash reports from python 2.3
973 # users, traceable to inspect.py. If I can find a small test-case
974 # to reproduce this, I should either write a better workaround or
975 # file a bug report against inspect (if that's the real problem).
976 # So far, I haven't been able to find an isolated example to
977 # reproduce the problem.
978 inspect_error()
979 traceback.print_exc(file=self.ostream)
980 info('\nUnfortunately, your original traceback can not be constructed.\n')
981 return None
982
983 def get_parts_of_chained_exception(self, evalue):
984 def get_chained_exception(exception_value):
985 cause = getattr(exception_value, '__cause__', None)
986 if cause:
987 return cause
988 return getattr(exception_value, '__context__', None)
989
990 chained_evalue = get_chained_exception(evalue)
991
992 if chained_evalue:
993 return chained_evalue.__class__, chained_evalue, chained_evalue.__traceback__
994
995 def structured_traceback(self, etype, evalue, etb, tb_offset=None,
996 number_of_lines_of_context=5):
997 """Return a nice text document describing the traceback."""
998
999 formatted_exception = self.format_exception_as_a_whole(etype, evalue, etb, number_of_lines_of_context,
1000 tb_offset)
1001
1002 colors = self.Colors # just a shorthand + quicker name lookup
1003 colorsnormal = colors.Normal # used a lot
1004 head = '%s%s%s' % (colors.topline, '-' * 75, colorsnormal)
1005 structured_traceback_parts = [head]
1006 if py3compat.PY3:
1007 chained_exceptions_tb_offset = 0
1008 lines_of_context = 3
1009 formatted_exceptions = formatted_exception
1010 exception = self.get_parts_of_chained_exception(evalue)
1011 if exception:
1012 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1013 etype, evalue, etb = exception
1014 else:
1015 evalue = None
1016 while evalue:
1017 formatted_exceptions += self.format_exception_as_a_whole(etype, evalue, etb, lines_of_context,
1018 chained_exceptions_tb_offset)
1019 exception = self.get_parts_of_chained_exception(evalue)
1020
1021 if exception:
1022 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1023 etype, evalue, etb = exception
1024 else:
1025 evalue = None
1026
1027 # we want to see exceptions in a reversed order:
1028 # the first exception should be on top
1029 for formatted_exception in reversed(formatted_exceptions):
1030 structured_traceback_parts += formatted_exception
1031 else:
1032 structured_traceback_parts += formatted_exception[0]
1033
1034 return structured_traceback_parts
1035
1036 def debugger(self, force=False):
1037 """Call up the pdb debugger if desired, always clean up the tb
1038 reference.
1039
1040 Keywords:
1041
1042 - force(False): by default, this routine checks the instance call_pdb
1043 flag and does not actually invoke the debugger if the flag is false.
1044 The 'force' option forces the debugger to activate even if the flag
1045 is false.
1046
1047 If the call_pdb flag is set, the pdb interactive debugger is
1048 invoked. In all cases, the self.tb reference to the current traceback
1049 is deleted to prevent lingering references which hamper memory
1050 management.
1051
1052 Note that each call to pdb() does an 'import readline', so if your app
1053 requires a special setup for the readline completers, you'll have to
1054 fix that by hand after invoking the exception handler."""
1055
1056 if force or self.call_pdb:
1057 if self.pdb is None:
1058 self.pdb = debugger.Pdb(
1059 self.color_scheme_table.active_scheme_name)
1060 # the system displayhook may have changed, restore the original
1061 # for pdb
1062 display_trap = DisplayTrap(hook=sys.__displayhook__)
1063 with display_trap:
1064 self.pdb.reset()
1065 # Find the right frame so we don't pop up inside ipython itself
1066 if hasattr(self, 'tb') and self.tb is not None:
1067 etb = self.tb
1068 else:
1069 etb = self.tb = sys.last_traceback
1070 while self.tb is not None and self.tb.tb_next is not None:
1071 self.tb = self.tb.tb_next
1072 if etb and etb.tb_next:
1073 etb = etb.tb_next
1074 self.pdb.botframe = etb.tb_frame
1075 self.pdb.interaction(self.tb.tb_frame, self.tb)
1076
1077 if hasattr(self, 'tb'):
1078 del self.tb
1079
1080 def handler(self, info=None):
1081 (etype, evalue, etb) = info or sys.exc_info()
1082 self.tb = etb
1083 ostream = self.ostream
1084 ostream.flush()
1085 ostream.write(self.text(etype, evalue, etb))
1086 ostream.write('\n')
1087 ostream.flush()
1088
1089 # Changed so an instance can just be called as VerboseTB_inst() and print
1090 # out the right info on its own.
1091 def __call__(self, etype=None, evalue=None, etb=None):
1092 """This hook can replace sys.excepthook (for Python 2.1 or higher)."""
1093 if etb is None:
1094 self.handler()
1095 else:
1096 self.handler((etype, evalue, etb))
1097 try:
1098 self.debugger()
1099 except KeyboardInterrupt:
1100 print("\nKeyboardInterrupt")
1101
1102
1103 #----------------------------------------------------------------------------
1104 class FormattedTB(VerboseTB, ListTB):
1105 """Subclass ListTB but allow calling with a traceback.
1106
1107 It can thus be used as a sys.excepthook for Python > 2.1.
1108
1109 Also adds 'Context' and 'Verbose' modes, not available in ListTB.
1110
1111 Allows a tb_offset to be specified. This is useful for situations where
1112 one needs to remove a number of topmost frames from the traceback (such as
1113 occurs with python programs that themselves execute other python code,
1114 like Python shells). """
1115
1116 def __init__(self, mode='Plain', color_scheme='Linux', call_pdb=False,
1117 ostream=None,
1118 tb_offset=0, long_header=False, include_vars=False,
1119 check_cache=None):
1120
1121 # NEVER change the order of this list. Put new modes at the end:
1122 self.valid_modes = ['Plain', 'Context', 'Verbose']
1123 self.verbose_modes = self.valid_modes[1:3]
1124
1125 VerboseTB.__init__(self, color_scheme=color_scheme, call_pdb=call_pdb,
1126 ostream=ostream, tb_offset=tb_offset,
1127 long_header=long_header, include_vars=include_vars,
1128 check_cache=check_cache)
1129
1130 # Different types of tracebacks are joined with different separators to
1131 # form a single string. They are taken from this dict
1132 self._join_chars = dict(Plain='', Context='\n', Verbose='\n')
1133 # set_mode also sets the tb_join_char attribute
1134 self.set_mode(mode)
1135
1136 def _extract_tb(self, tb):
1137 if tb:
1138 return traceback.extract_tb(tb)
1139 else:
1140 return None
1141
1142 def structured_traceback(self, etype, value, tb, tb_offset=None, number_of_lines_of_context=5):
1143 tb_offset = self.tb_offset if tb_offset is None else tb_offset
1144 mode = self.mode
1145 if mode in self.verbose_modes:
1146 # Verbose modes need a full traceback
1147 return VerboseTB.structured_traceback(
1148 self, etype, value, tb, tb_offset, number_of_lines_of_context
1149 )
1150 else:
1151 # We must check the source cache because otherwise we can print
1152 # out-of-date source code.
1153 self.check_cache()
1154 # Now we can extract and format the exception
1155 elist = self._extract_tb(tb)
1156 return ListTB.structured_traceback(
1157 self, etype, value, elist, tb_offset, number_of_lines_of_context
1158 )
1159
1160 def stb2text(self, stb):
1161 """Convert a structured traceback (a list) to a string."""
1162 return self.tb_join_char.join(stb)
1163
1164
1165 def set_mode(self, mode=None):
1166 """Switch to the desired mode.
1167
1168 If mode is not specified, cycles through the available modes."""
1169
1170 if not mode:
1171 new_idx = (self.valid_modes.index(self.mode) + 1 ) % \
1172 len(self.valid_modes)
1173 self.mode = self.valid_modes[new_idx]
1174 elif mode not in self.valid_modes:
1175 raise ValueError('Unrecognized mode in FormattedTB: <' + mode + '>\n'
1176 'Valid modes: ' + str(self.valid_modes))
1177 else:
1178 self.mode = mode
1179 # include variable details only in 'Verbose' mode
1180 self.include_vars = (self.mode == self.valid_modes[2])
1181 # Set the join character for generating text tracebacks
1182 self.tb_join_char = self._join_chars[self.mode]
1183
1184 # some convenient shortcuts
1185 def plain(self):
1186 self.set_mode(self.valid_modes[0])
1187
1188 def context(self):
1189 self.set_mode(self.valid_modes[1])
1190
1191 def verbose(self):
1192 self.set_mode(self.valid_modes[2])
1193
1194
1195 #----------------------------------------------------------------------------
1196 class AutoFormattedTB(FormattedTB):
1197 """A traceback printer which can be called on the fly.
1198
1199 It will find out about exceptions by itself.
1200
1201 A brief example::
1202
1203 AutoTB = AutoFormattedTB(mode = 'Verbose',color_scheme='Linux')
1204 try:
1205 ...
1206 except:
1207 AutoTB() # or AutoTB(out=logfile) where logfile is an open file object
1208 """
1209
1210 def __call__(self, etype=None, evalue=None, etb=None,
1211 out=None, tb_offset=None):
1212 """Print out a formatted exception traceback.
1213
1214 Optional arguments:
1215 - out: an open file-like object to direct output to.
1216
1217 - tb_offset: the number of frames to skip over in the stack, on a
1218 per-call basis (this overrides temporarily the instance's tb_offset
1219 given at initialization time. """
1220
1221 if out is None:
1222 out = self.ostream
1223 out.flush()
1224 out.write(self.text(etype, evalue, etb, tb_offset))
1225 out.write('\n')
1226 out.flush()
1227 # FIXME: we should remove the auto pdb behavior from here and leave
1228 # that to the clients.
1229 try:
1230 self.debugger()
1231 except KeyboardInterrupt:
1232 print("\nKeyboardInterrupt")
1233
1234 def structured_traceback(self, etype=None, value=None, tb=None,
1235 tb_offset=None, number_of_lines_of_context=5):
1236 if etype is None:
1237 etype, value, tb = sys.exc_info()
1238 self.tb = tb
1239 return FormattedTB.structured_traceback(
1240 self, etype, value, tb, tb_offset, number_of_lines_of_context)
1241
1242
1243 #---------------------------------------------------------------------------
1244
1245 # A simple class to preserve Nathan's original functionality.
1246 class ColorTB(FormattedTB):
1247 """Shorthand to initialize a FormattedTB in Linux colors mode."""
1248
1249 def __init__(self, color_scheme='Linux', call_pdb=0):
1250 FormattedTB.__init__(self, color_scheme=color_scheme,
1251 call_pdb=call_pdb)
1252
1253
1254 class SyntaxTB(ListTB):
1255 """Extension which holds some state: the last exception value"""
1256
1257 def __init__(self, color_scheme='NoColor'):
1258 ListTB.__init__(self, color_scheme)
1259 self.last_syntax_error = None
1260
1261 def __call__(self, etype, value, elist):
1262 self.last_syntax_error = value
1263
1264 ListTB.__call__(self, etype, value, elist)
1265
1266 def structured_traceback(self, etype, value, elist, tb_offset=None,
1267 context=5):
1268 # If the source file has been edited, the line in the syntax error can
1269 # be wrong (retrieved from an outdated cache). This replaces it with
1270 # the current value.
1271 if isinstance(value, SyntaxError) \
1272 and isinstance(value.filename, py3compat.string_types) \
1273 and isinstance(value.lineno, int):
1274 linecache.checkcache(value.filename)
1275 newtext = ulinecache.getline(value.filename, value.lineno)
1276 if newtext:
1277 value.text = newtext
1278 return super(SyntaxTB, self).structured_traceback(etype, value, elist,
1279 tb_offset=tb_offset, context=context)
1280
1281 def clear_err_state(self):
1282 """Return the current error state and clear it"""
1283 e = self.last_syntax_error
1284 self.last_syntax_error = None
1285 return e
1286
1287 def stb2text(self, stb):
1288 """Convert a structured traceback (a list) to a string."""
1289 return ''.join(stb)
1290
1291
1292 # some internal-use functions
1293 def text_repr(value):
1294 """Hopefully pretty robust repr equivalent."""
1295 # this is pretty horrible but should always return *something*
1296 try:
1297 return pydoc.text.repr(value)
1298 except KeyboardInterrupt:
1299 raise
1300 except:
1301 try:
1302 return repr(value)
1303 except KeyboardInterrupt:
1304 raise
1305 except:
1306 try:
1307 # all still in an except block so we catch
1308 # getattr raising
1309 name = getattr(value, '__name__', None)
1310 if name:
1311 # ick, recursion
1312 return text_repr(name)
1313 klass = getattr(value, '__class__', None)
1314 if klass:
1315 return '%s instance' % text_repr(klass)
1316 except KeyboardInterrupt:
1317 raise
1318 except:
1319 return 'UNRECOVERABLE REPR FAILURE'
1320
1321
1322 def eqrepr(value, repr=text_repr):
1323 return '=%s' % repr(value)
1324
1325
1326 def nullrepr(value, repr=text_repr):
1327 return ''
1328
1329
1330 #----------------------------------------------------------------------------
1331
1332 # module testing (minimal)
1333 if __name__ == "__main__":
1334 def spam(c, d_e):
1335 (d, e) = d_e
1336 x = c + d
1337 y = c * d
1338 foo(x, y)
1339
1340 def foo(a, b, bar=1):
1341 eggs(a, b + bar)
1342
1343 def eggs(f, g, z=globals()):
1344 h = f + g
1345 i = f - g
1346 return h / i
1347
1348 print('')
1349 print('*** Before ***')
1350 try:
1351 print(spam(1, (2, 3)))
1352 except:
1353 traceback.print_exc()
1354 print('')
1355
1356 handler = ColorTB()
1357 print('*** ColorTB ***')
1358 try:
1359 print(spam(1, (2, 3)))
1360 except:
1361 handler(*sys.exc_info())
1362 print('')
1363
1364 handler = VerboseTB()
1365 print('*** VerboseTB ***')
1366 try:
1367 print(spam(1, (2, 3)))
1368 except:
1369 handler(*sys.exc_info())
1370 print('')
1371
1372
[end of IPython/core/ultratb.py]
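The `generate_tokens` loop in `VerboseTB.format_records` above builds composite dotted names (for example `os.path.join`) from the token stream of each source line before deciding which local and global values to display. A standalone sketch of that technique, using a hypothetical `dotted_names` helper rather than IPython's own code:

```python
import keyword
import tokenize
from io import StringIO


def dotted_names(source_line):
    """Collect names from one line of code, joining dotted attribute chains."""
    names, name_cont = [], False
    for tok_type, tok, _, _, _ in tokenize.generate_tokens(
            StringIO(source_line).readline):
        if tok_type == tokenize.NAME and tok not in keyword.kwlist:
            if name_cont and names:
                names[-1].append(tok)   # continuation of a dotted name
            else:
                names.append([tok])     # a new name starts here
            name_cont = False
        elif tok == '.':
            name_cont = True
        elif tok_type == tokenize.NEWLINE:
            break
    return ['.'.join(parts) for parts in names]


print(dotted_names("result = os.path.join(base, name)\n"))
# ['result', 'os.path.join', 'base', 'name']
```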
[start of IPython/core/usage.py]
1 # -*- coding: utf-8 -*-
2 """Usage information for the main IPython applications.
3 """
4 #-----------------------------------------------------------------------------
5 # Copyright (C) 2008-2011 The IPython Development Team
6 # Copyright (C) 2001-2007 Fernando Perez. <fperez@colorado.edu>
7 #
8 # Distributed under the terms of the BSD License. The full license is in
9 # the file COPYING, distributed as part of this software.
10 #-----------------------------------------------------------------------------
11
12 import sys
13 from IPython.core import release
14
15 cl_usage = """\
16 =========
17 IPython
18 =========
19
20 Tools for Interactive Computing in Python
21 =========================================
22
23 A Python shell with automatic history (input and output), dynamic object
24 introspection, easier configuration, command completion, access to the
25 system shell and more. IPython can also be embedded in running programs.
26
27
28 Usage
29
30 ipython [subcommand] [options] [-c cmd | -m mod | file] [--] [arg] ...
31
32 If invoked with no options, it executes the file and exits, passing the
33 remaining arguments to the script, just as if you had specified the same
34 command with python. You may need to specify `--` before args to be passed
35 to the script, to prevent IPython from attempting to parse them. If you
36 specify the option `-i` before the filename, it will enter an interactive
37 IPython session after running the script, rather than exiting. Files ending
38 in .py will be treated as normal Python, but files ending in .ipy can
39 contain special IPython syntax (magic commands, shell expansions, etc.).
40
41 Almost all configuration in IPython is available via the command-line. Do
42 `ipython --help-all` to see all available options. For persistent
43 configuration, look into your `ipython_config.py` configuration file for
44 details.
45
46 This file is typically installed in the `IPYTHONDIR` directory, and there
47 is a separate configuration directory for each profile. The default profile
48 directory will be located in $IPYTHONDIR/profile_default. IPYTHONDIR
49 defaults to `$HOME/.ipython`. For Windows users, $HOME resolves to
50 C:\\Documents and Settings\\YourUserName in most instances.
51
52 To initialize a profile with the default configuration file, do::
53
54 $> ipython profile create
55
56 and start editing `IPYTHONDIR/profile_default/ipython_config.py`
57
58 In IPython's documentation, we will refer to this directory as
59 `IPYTHONDIR`, you can change its default location by creating an
60 environment variable with this name and setting it to the desired path.
61
62 For more information, see the manual available in HTML and PDF in your
63 installation, or online at http://ipython.org/documentation.html.
64 """
65
66 interactive_usage = """
67 IPython -- An enhanced Interactive Python
68 =========================================
69
70 IPython offers a combination of convenient shell features, special commands
71 and a history mechanism for both input (command history) and output (results
72 caching, similar to Mathematica). It is intended to be a fully compatible
73 replacement for the standard Python interpreter, while offering vastly
74 improved functionality and flexibility.
75
76 At your system command line, type 'ipython -h' to see the command line
77 options available. This document only describes interactive features.
78
79 MAIN FEATURES
80 -------------
81
82 * Access to the standard Python help. As of Python 2.1, a help system is
83 available with access to object docstrings and the Python manuals. Simply
84 type 'help' (no quotes) to access it.
85
86 * Magic commands: type %magic for information on the magic subsystem.
87
88 * System command aliases, via the %alias command or the configuration file(s).
89
90 * Dynamic object information:
91
92 Typing ?word or word? prints detailed information about an object. If
93 certain strings in the object are too long (docstrings, code, etc.) they get
94 snipped in the center for brevity.
95
96 Typing ??word or word?? gives access to the full information without
97 snipping long strings. Long strings are sent to the screen through the less
98 pager if longer than the screen, printed otherwise.
99
100 The ?/?? system gives access to the full source code for any object (if
101 available), shows function prototypes and other useful information.
102
103 If you just want to see an object's docstring, type '%pdoc object' (without
104 quotes, and without % if you have automagic on).
105
106 * Completion in the local namespace, by typing TAB at the prompt.
107
108 At any time, hitting tab will complete any available python commands or
109 variable names, and show you a list of the possible completions if there's
110 no unambiguous one. It will also complete filenames in the current directory.
111
112   This feature requires the readline and rlcompleter modules, so it won't work
113 if your Python lacks readline support (such as under Windows).
114
115 * Search previous command history in two ways (also requires readline):
116
117 - Start typing, and then use Ctrl-p (previous,up) and Ctrl-n (next,down) to
118 search through only the history items that match what you've typed so
119 far. If you use Ctrl-p/Ctrl-n at a blank prompt, they just behave like
120 normal arrow keys.
121
122 - Hit Ctrl-r: opens a search prompt. Begin typing and the system searches
123 your history for lines that match what you've typed so far, completing as
124 much as it can.
125
126 - %hist: search history by index (this does *not* require readline).
127
128 * Persistent command history across sessions.
129
130 * Logging of input with the ability to save and restore a working session.
131
132 * System escape with !. Typing !ls will run 'ls' in the current directory.
133
134 * The reload command does a 'deep' reload of a module: changes made to the
135 module since you imported will actually be available without having to exit.
136
137 * Verbose and colored exception traceback printouts. See the magic xmode and
138 xcolor functions for details (just type %magic).
139
140 * Input caching system:
141
142 IPython offers numbered prompts (In/Out) with input and output caching. All
143 input is saved and can be retrieved as variables (besides the usual arrow
144 key recall).
145
146 The following GLOBAL variables always exist (so don't overwrite them!):
147 _i: stores previous input.
148 _ii: next previous.
149 _iii: next-next previous.
150 _ih : a list of all input _ih[n] is the input from line n.
151
152 Additionally, global variables named _i<n> are dynamically created (<n>
153 being the prompt counter), such that _i<n> == _ih[<n>]
154
155 For example, what you typed at prompt 14 is available as _i14 and _ih[14].
156
157 You can create macros which contain multiple input lines from this history,
158 for later re-execution, with the %macro function.
159
160 The history function %hist allows you to see any part of your input history
161 by printing a range of the _i variables. Note that inputs which contain
162 magic functions (%) appear in the history with a prepended comment. This is
163 because they aren't really valid Python code, so you can't exec them.
164
165 * Output caching system:
166
167 For output that is returned from actions, a system similar to the input
168 cache exists but using _ instead of _i. Only actions that produce a result
169 (NOT assignments, for example) are cached. If you are familiar with
170 Mathematica, IPython's _ variables behave exactly like Mathematica's %
171 variables.
172
173 The following GLOBAL variables always exist (so don't overwrite them!):
174 _ (one underscore): previous output.
175 __ (two underscores): next previous.
176 ___ (three underscores): next-next previous.
177
178 Global variables named _<n> are dynamically created (<n> being the prompt
179 counter), such that the result of output <n> is always available as _<n>.
180
181 Finally, a global dictionary named _oh exists with entries for all lines
182 which generated output.
183
184 * Directory history:
185
186 Your history of visited directories is kept in the global list _dh, and the
187 magic %cd command can be used to go to any entry in that list.
188
189 * Auto-parentheses and auto-quotes (adapted from Nathan Gray's LazyPython)
190
191 1. Auto-parentheses
192
193 Callable objects (i.e. functions, methods, etc) can be invoked like
194 this (notice the commas between the arguments)::
195
196 In [1]: callable_ob arg1, arg2, arg3
197
198 and the input will be translated to this::
199
200 callable_ob(arg1, arg2, arg3)
201
202 This feature is off by default (in rare cases it can produce
203 undesirable side-effects), but you can activate it at the command-line
204 by starting IPython with `--autocall 1`, set it permanently in your
205 configuration file, or turn on at runtime with `%autocall 1`.
206
207 You can force auto-parentheses by using '/' as the first character
208 of a line. For example::
209
210 In [1]: /globals # becomes 'globals()'
211
212 Note that the '/' MUST be the first character on the line! This
213 won't work::
214
215 In [2]: print /globals # syntax error
216
217 In most cases the automatic algorithm should work, so you should
218 rarely need to explicitly invoke /. One notable exception is if you
219 are trying to call a function with a list of tuples as arguments (the
220 parenthesis will confuse IPython)::
221
222 In [1]: zip (1,2,3),(4,5,6) # won't work
223
224 but this will work::
225
226 In [2]: /zip (1,2,3),(4,5,6)
227 ------> zip ((1,2,3),(4,5,6))
228 Out[2]= [(1, 4), (2, 5), (3, 6)]
229
230 IPython tells you that it has altered your command line by
231 displaying the new command line preceded by -->. e.g.::
232
233 In [18]: callable list
234 -------> callable (list)
235
236 2. Auto-Quoting
237
238 You can force auto-quoting of a function's arguments by using ',' as
239 the first character of a line. For example::
240
241 In [1]: ,my_function /home/me # becomes my_function("/home/me")
242
243 If you use ';' instead, the whole argument is quoted as a single
244 string (while ',' splits on whitespace)::
245
246 In [2]: ,my_function a b c # becomes my_function("a","b","c")
247 In [3]: ;my_function a b c # becomes my_function("a b c")
248
249 Note that the ',' MUST be the first character on the line! This
250 won't work::
251
252 In [4]: x = ,my_function /home/me # syntax error
253 """
254
255 interactive_usage_min = """\
256 An enhanced console for Python.
257 Some of its features are:
258 - Readline support if the readline library is present.
259 - Tab completion in the local namespace.
260 - Logging of input, see command-line options.
261 - System shell escape via ! , eg !ls.
262 - Magic commands, starting with a % (like %ls, %pwd, %cd, etc.)
263 - Keeps track of locally defined variables via %who, %whos.
264 - Show object information with a ? eg ?x or x? (use ?? for more info).
265 """
266
267 quick_reference = r"""
268 IPython -- An enhanced Interactive Python - Quick Reference Card
269 ================================================================
270
271 obj?, obj?? : Get help, or more help for object (also works as
272 ?obj, ??obj).
273 ?foo.*abc* : List names in 'foo' containing 'abc' in them.
274 %magic : Information about IPython's 'magic' % functions.
275
276 Magic functions are prefixed by % or %%, and typically take their arguments
277 without parentheses, quotes or even commas for convenience. Line magics take a
278 single % and cell magics are prefixed with two %%.
279
280 Example magic function calls:
281
282 %alias d ls -F : 'd' is now an alias for 'ls -F'
283 alias d ls -F : Works if 'alias' not a python name
284 alist = %alias : Get list of aliases to 'alist'
285 cd /usr/share : Obvious. cd -<tab> to choose from visited dirs.
286 %cd?? : See help AND source for magic %cd
287 %timeit x=10 : time the 'x=10' statement with high precision.
288 %%timeit x=2**100
289 x**100 : time 'x**100' with a setup of 'x=2**100'; setup code is not
290 counted. This is an example of a cell magic.
291
292 System commands:
293
294 !cp a.txt b/ : System command escape, calls os.system()
295 cp a.txt b/ : after %rehashx, most system commands work without !
296 cp ${f}.txt $bar : Variable expansion in magics and system commands
297 files = !ls /usr : Capture system command output
298 files.s, files.l, files.n: "a b c", ['a','b','c'], 'a\nb\nc'
299
300 History:
301
302 _i, _ii, _iii : Previous, next previous, next next previous input
303 _i4, _ih[2:5] : Input history line 4, lines 2-4
304 exec _i81 : Execute input history line #81 again
305 %rep 81 : Edit input history line #81
306 _, __, ___ : previous, next previous, next next previous output
307 _dh : Directory history
308 _oh : Output history
309 %hist : Command history. '%hist -g foo' search history for 'foo'
310
311 Autocall:
312
313 f 1,2 : f(1,2) # Off by default, enable with %autocall magic.
314 /f 1,2 : f(1,2) (forced autoparen)
315 ,f 1 2 : f("1","2")
316 ;f 1 2 : f("1 2")
317
318 Remember: TAB completion works in many contexts, not just file names
319 or python names.
320
321 The following magic functions are currently available:
322
323 """
324
325 gui_reference = """\
326 ===============================
327 The graphical IPython console
328 ===============================
329
330 This console is designed to emulate the look, feel and workflow of a terminal
331 environment, while adding a number of enhancements that are simply not possible
332 in a real terminal, such as inline syntax highlighting, true multiline editing,
333 inline graphics and much more.
334
335 This quick reference document contains the basic information you'll need to
336 know to make the most efficient use of it. For the various command line
337 options available at startup, type ``ipython qtconsole --help`` at the command line.
338
339
340 Multiline editing
341 =================
342
343 The graphical console is capable of true multiline editing, but it also tries
344 to behave intuitively like a terminal when possible. If you are used to
345 IPython's old terminal behavior, you should find the transition painless, and
346 once you learn a few basic keybindings it will be a much more efficient
347 environment.
348
349 For single expressions or indented blocks, the console behaves almost like the
350 terminal IPython: single expressions are immediately evaluated, and indented
351 blocks are evaluated once a single blank line is entered::
352
353 In [1]: print "Hello IPython!" # Enter was pressed at the end of the line
354 Hello IPython!
355
356 In [2]: for i in range(10):
357 ...: print i,
358 ...:
359 0 1 2 3 4 5 6 7 8 9
360
361 If you want to enter more than one expression in a single input block
362 (something not possible in the terminal), you can use ``Control-Enter`` at the
363 end of your first line instead of ``Enter``. At that point the console goes
364 into 'cell mode' and even if your inputs are not indented, it will continue
365 accepting arbitrarily many lines until either you enter an extra blank line or
366 you hit ``Shift-Enter`` (the key binding that forces execution). When a
367 multiline cell is entered, IPython analyzes it and executes its code producing
368 an ``Out[n]`` prompt only for the last expression in it, while the rest of the
369 cell is executed as if it was a script. An example should clarify this::
370
371 In [3]: x=1 # Hit C-Enter here
372 ...: y=2 # from now on, regular Enter is sufficient
373 ...: z=3
374 ...: x**2 # This does *not* produce an Out[] value
375 ...: x+y+z # Only the last expression does
376 ...:
377 Out[3]: 6
378
379 The behavior where an extra blank line forces execution is only active if you
380 are actually typing at the keyboard each line, and is meant to make it mimic
381 the IPython terminal behavior. If you paste a long chunk of input (for example
382 a long script copied from an editor or web browser), it can contain arbitrarily
383 many intermediate blank lines and they won't cause any problems. As always,
384 you can then make it execute by appending a blank line *at the end* or hitting
385 ``Shift-Enter`` anywhere within the cell.
386
387 With the up arrow key, you can retrieve previous blocks of input that contain
388 multiple lines. You can move inside of a multiline cell like you would in any
389 text editor. When you want it executed, the simplest thing to do is to hit the
390 force execution key, ``Shift-Enter`` (though you can also navigate to the end
391 and append a blank line by using ``Enter`` twice).
392
393 If you've edited a multiline cell and accidentally navigate out of it with the
394 up or down arrow keys, IPython will clear the cell and replace it with the
395 contents of the one above or below that you navigated to. If this was an
396 accident and you want to retrieve the cell you were editing, use the Undo
397 keybinding, ``Control-z``.
398
399
400 Key bindings
401 ============
402
403 The IPython console supports most of the basic Emacs line-oriented keybindings,
404 in addition to some of its own.
405
406 The keybinding prefixes mean:
407
408 - ``C``: Control
409 - ``S``: Shift
410 - ``M``: Meta (typically the Alt key)
411
412 The keybindings themselves are:
413
414 - ``Enter``: insert new line (may cause execution, see above).
415 - ``C-Enter``: *force* new line, *never* causes execution.
416 - ``S-Enter``: *force* execution regardless of where cursor is, no newline added.
417 - ``Up``: step backwards through the history.
418 - ``Down``: step forwards through the history.
419 - ``S-Up``: search backwards through the history (like ``C-r`` in bash).
420 - ``S-Down``: search forwards through the history.
421 - ``C-c``: copy highlighted text to clipboard (prompts are automatically stripped).
422 - ``C-S-c``: copy highlighted text to clipboard (prompts are not stripped).
423 - ``C-v``: paste text from clipboard.
424 - ``C-z``: undo (retrieves lost text if you move out of a cell with the arrows).
425 - ``C-S-z``: redo.
426 - ``C-o``: move to 'other' area, between pager and terminal.
427 - ``C-l``: clear terminal.
428 - ``C-a``: go to beginning of line.
429 - ``C-e``: go to end of line.
430 - ``C-u``: kill from cursor to the beginning of the line.
431 - ``C-k``: kill from cursor to the end of the line.
432 - ``C-y``: yank (paste)
433 - ``C-p``: previous line (like up arrow)
434 - ``C-n``: next line (like down arrow)
435 - ``C-f``: forward (like right arrow)
436 - ``C-b``: back (like left arrow)
437 - ``C-d``: delete next character, or exits if input is empty
438 - ``M-<``: move to the beginning of the input region.
439 - ``M->``: move to the end of the input region.
440 - ``M-d``: delete next word.
441 - ``M-Backspace``: delete previous word.
442 - ``C-.``: force a kernel restart (a confirmation dialog appears).
443 - ``C-+``: increase font size.
444 - ``C--``: decrease font size.
445 - ``C-M-Space``: toggle full screen. (Command-Control-Space on Mac OS X)
446
447 The IPython pager
448 =================
449
450 IPython will show long blocks of text from many sources using a builtin pager.
451 You can control where this pager appears with the ``--paging`` command-line
452 flag:
453
454 - ``inside`` [default]: the pager is overlaid on top of the main terminal. You
455 must quit the pager to get back to the terminal (similar to how a pager such
456 as ``less`` or ``more`` works).
457
458 - ``vsplit``: the console is made double-tall, and the pager appears on the
459 bottom area when needed. You can view its contents while using the terminal.
460
461 - ``hsplit``: the console is made double-wide, and the pager appears on the
462 right area when needed. You can view its contents while using the terminal.
463
464 - ``none``: the console never pages output.
465
466 If you use the vertical or horizontal paging modes, you can navigate between
467 terminal and pager as follows:
468
469 - Tab key: goes from pager to terminal (but not the other way around).
470 - Control-o: goes from one to another always.
471 - Mouse: click on either.
472
473 In all cases, the ``q`` or ``Escape`` keys quit the pager (when used with the
474 focus on the pager area).
475
476 Running subprocesses
477 ====================
478
479 The graphical IPython console uses the ``pexpect`` module to run subprocesses
480 when you type ``!command``. This has a number of advantages (true asynchronous
481 output from subprocesses as well as very robust termination of rogue
482 subprocesses with ``Control-C``), as well as some limitations. The main
483 limitation is that you can *not* interact back with the subprocess, so anything
484 that invokes a pager or expects you to type input into it will block and hang
485 (you can kill it with ``Control-C``).
486
487 We have provided as magics ``%less`` to page files (aliased to ``%more``),
488 ``%clear`` to clear the terminal, and ``%man`` on Linux/OSX. These cover the
489 most common commands you'd want to call in your subshell and that would cause
490 problems if invoked via ``!cmd``, but you need to be aware of this limitation.
491
492 Display
493 =======
494
495 The IPython console can now display objects in a variety of formats, including
496 HTML, PNG and SVG. This is accomplished using the display functions in
497 ``IPython.core.display``::
498
499 In [4]: from IPython.core.display import display, display_html
500
501 In [5]: from IPython.core.display import display_png, display_svg
502
503 Python objects can simply be passed to these functions and the appropriate
504 representations will be displayed in the console as long as the objects know
505 how to compute those representations. The easiest way of teaching objects how
506 to format themselves in various representations is to define special methods
507 such as: ``_repr_html_``, ``_repr_svg_`` and ``_repr_png_``. IPython's display formatters
508 can also be given custom formatter functions for various types::
509
510 In [6]: ip = get_ipython()
511
512 In [7]: html_formatter = ip.display_formatter.formatters['text/html']
513
514 In [8]: html_formatter.for_type(Foo, foo_to_html)
515
516 For further details, see ``IPython.core.formatters``.
517
518 Inline matplotlib graphics
519 ==========================
520
521 The IPython console is capable of displaying matplotlib figures inline, in SVG
522 or PNG format. If started with ``--matplotlib=inline``, then all figures are
523 rendered inline automatically (PNG by default). If started with ``--matplotlib``
524 or ``--matplotlib=<your backend>``, then a GUI backend will be used, but IPython's
525 ``display()`` and ``getfigs()`` functions can be used to view plots inline::
526
527 In [9]: display(*getfigs()) # display all figures inline
528
529 In[10]: display(*getfigs(1,2)) # display figures 1 and 2 inline
530 """
531
532
533 quick_guide = """\
534 ? -> Introduction and overview of IPython's features.
535 %quickref -> Quick reference.
536 help -> Python's own help system.
537 object? -> Details about 'object', use 'object??' for extra details.
538 """
539
540 gui_note = """\
541 %guiref -> A brief reference about the graphical user interface.
542 """
543
544 default_banner_parts = [
545 'Python %s\n' % (sys.version.split('\n')[0],),
546 'Type "copyright", "credits" or "license" for more information.\n\n',
547 'IPython {version} -- An enhanced Interactive Python.\n'.format(
548 version=release.version,
549 ),
550 quick_guide
551 ]
552
553 default_gui_banner_parts = default_banner_parts + [gui_note]
554
555 default_banner = ''.join(default_banner_parts)
556
557 default_gui_banner = ''.join(default_gui_banner_parts)
558
559 # page GUI Reference, for use as a magic:
560
561 def page_guiref(arg_s=None):
562 """Show a basic reference about the GUI Console."""
563 from IPython.core import page
564 page.page(gui_reference)
565
566
[end of IPython/core/usage.py]
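
As a concrete illustration of the Display section in the ``gui_reference`` text above, here is a minimal sketch of registering a custom HTML formatter. It is not part of ``usage.py``; the class ``Point`` and the helper ``point_to_html`` are invented names, and the snippet assumes it is run inside an IPython session so that ``get_ipython()`` is available.

``` python
# Sketch only: teach the rich frontends how to render a custom type as HTML.
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

def point_to_html(p):
    # Return an HTML fragment; the qtconsole/notebook will render it.
    return "<b>Point</b>(x=%s, y=%s)" % (p.x, p.y)

ip = get_ipython()   # only defined inside an IPython session
html_formatter = ip.display_formatter.formatters['text/html']
html_formatter.for_type(Point, point_to_html)

Point(1, 2)   # as the last expression of a cell, this now displays via point_to_html
```

Defining a ``_repr_html_`` method on the class achieves the same result without touching the formatter registry, as the text above points out.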
[start of IPython/terminal/interactiveshell.py]
1 # -*- coding: utf-8 -*-
2 """Subclass of InteractiveShell for terminal based frontends."""
3
4 #-----------------------------------------------------------------------------
5 # Copyright (C) 2001 Janko Hauser <jhauser@zscout.de>
6 # Copyright (C) 2001-2007 Fernando Perez. <fperez@colorado.edu>
7 # Copyright (C) 2008-2011 The IPython Development Team
8 #
9 # Distributed under the terms of the BSD License. The full license is in
10 # the file COPYING, distributed as part of this software.
11 #-----------------------------------------------------------------------------
12
13 #-----------------------------------------------------------------------------
14 # Imports
15 #-----------------------------------------------------------------------------
16 from __future__ import print_function
17
18 import bdb
19 import os
20 import sys
21
22 from IPython.core.error import TryNext, UsageError
23 from IPython.core.usage import interactive_usage
24 from IPython.core.inputsplitter import IPythonInputSplitter
25 from IPython.core.interactiveshell import InteractiveShell, InteractiveShellABC
26 from IPython.core.magic import Magics, magics_class, line_magic
27 from IPython.lib.clipboard import ClipboardEmpty
28 from IPython.testing.skipdoctest import skip_doctest
29 from IPython.utils.encoding import get_stream_enc
30 from IPython.utils import py3compat
31 from IPython.utils.terminal import toggle_set_term_title, set_term_title
32 from IPython.utils.process import abbrev_cwd
33 from IPython.utils.warn import warn, error
34 from IPython.utils.text import num_ini_spaces, SList, strip_email_quotes
35 from IPython.utils.traitlets import Integer, CBool, Unicode
36
37 #-----------------------------------------------------------------------------
38 # Utilities
39 #-----------------------------------------------------------------------------
40
41 def get_default_editor():
42 try:
43 ed = os.environ['EDITOR']
44 if not py3compat.PY3:
45 ed = ed.decode()
46 return ed
47 except KeyError:
48 pass
49 except UnicodeError:
50 warn("$EDITOR environment variable is not pure ASCII. Using platform "
51 "default editor.")
52
53 if os.name == 'posix':
54 return 'vi' # the only one guaranteed to be there!
55 else:
56 return 'notepad' # same in Windows!
57
58 def get_pasted_lines(sentinel, l_input=py3compat.input, quiet=False):
59 """ Yield pasted lines until the user enters the given sentinel value.
60 """
61 if not quiet:
62 print("Pasting code; enter '%s' alone on the line to stop or use Ctrl-D." \
63 % sentinel)
64 prompt = ":"
65 else:
66 prompt = ""
67 while True:
68 try:
69 l = py3compat.str_to_unicode(l_input(prompt))
70 if l == sentinel:
71 return
72 else:
73 yield l
74 except EOFError:
75 print('<EOF>')
76 return
77
78
79 #------------------------------------------------------------------------
80 # Terminal-specific magics
81 #------------------------------------------------------------------------
82
83 @magics_class
84 class TerminalMagics(Magics):
85 def __init__(self, shell):
86 super(TerminalMagics, self).__init__(shell)
87 self.input_splitter = IPythonInputSplitter()
88
89 def store_or_execute(self, block, name):
90 """ Execute a block, or store it in a variable, per the user's request.
91 """
92 if name:
93 # If storing it for further editing
94 self.shell.user_ns[name] = SList(block.splitlines())
95 print("Block assigned to '%s'" % name)
96 else:
97 b = self.preclean_input(block)
98 self.shell.user_ns['pasted_block'] = b
99 self.shell.using_paste_magics = True
100 try:
101 self.shell.run_cell(b)
102 finally:
103 self.shell.using_paste_magics = False
104
105 def preclean_input(self, block):
106 lines = block.splitlines()
107 while lines and not lines[0].strip():
108 lines = lines[1:]
109 return strip_email_quotes('\n'.join(lines))
110
111 def rerun_pasted(self, name='pasted_block'):
112 """ Rerun a previously pasted command.
113 """
114 b = self.shell.user_ns.get(name)
115
116 # Sanity checks
117 if b is None:
118 raise UsageError('No previous pasted block available')
119 if not isinstance(b, py3compat.string_types):
120 raise UsageError(
121 "Variable 'pasted_block' is not a string, can't execute")
122
123 print("Re-executing '%s...' (%d chars)"% (b.split('\n',1)[0], len(b)))
124 self.shell.run_cell(b)
125
126 @line_magic
127 def autoindent(self, parameter_s = ''):
128 """Toggle autoindent on/off (if available)."""
129
130 self.shell.set_autoindent()
131 print("Automatic indentation is:",['OFF','ON'][self.shell.autoindent])
132
133 @skip_doctest
134 @line_magic
135 def cpaste(self, parameter_s=''):
136 """Paste & execute a pre-formatted code block from clipboard.
137
138 You must terminate the block with '--' (two minus-signs) or Ctrl-D
139 alone on the line. You can also provide your own sentinel with '%paste
140 -s %%' ('%%' is the new sentinel for this operation).
141
142 The block is dedented prior to execution to enable execution of method
143 definitions. '>' and '+' characters at the beginning of a line are
144 ignored, to allow pasting directly from e-mails, diff files and
145 doctests (the '...' continuation prompt is also stripped). The
146 executed block is also assigned to variable named 'pasted_block' for
147 later editing with '%edit pasted_block'.
148
149 You can also pass a variable name as an argument, e.g. '%cpaste foo'.
150 This assigns the pasted block to variable 'foo' as string, without
151 dedenting or executing it (preceding >>> and + is still stripped)
152
153 '%cpaste -r' re-executes the block previously entered by cpaste.
154 '%cpaste -q' suppresses any additional output messages.
155
156 Do not be alarmed by garbled output on Windows (it's a readline bug).
157 Just press enter and type -- (and press enter again) and the block
158 will be what was just pasted.
159
160 IPython statements (magics, shell escapes) are not supported (yet).
161
162 See also
163 --------
164 paste: automatically pull code from clipboard.
165
166 Examples
167 --------
168 ::
169
170 In [8]: %cpaste
171 Pasting code; enter '--' alone on the line to stop.
172 :>>> a = ["world!", "Hello"]
173 :>>> print " ".join(sorted(a))
174 :--
175 Hello world!
176 """
177 opts, name = self.parse_options(parameter_s, 'rqs:', mode='string')
178 if 'r' in opts:
179 self.rerun_pasted()
180 return
181
182 quiet = ('q' in opts)
183
184 sentinel = opts.get('s', u'--')
185 block = '\n'.join(get_pasted_lines(sentinel, quiet=quiet))
186 self.store_or_execute(block, name)
187
188 @line_magic
189 def paste(self, parameter_s=''):
190 """Paste & execute a pre-formatted code block from clipboard.
191
192 The text is pulled directly from the clipboard without user
193 intervention and printed back on the screen before execution (unless
194 the -q flag is given to force quiet mode).
195
196 The block is dedented prior to execution to enable execution of method
197 definitions. '>' and '+' characters at the beginning of a line are
198 ignored, to allow pasting directly from e-mails, diff files and
199 doctests (the '...' continuation prompt is also stripped). The
200 executed block is also assigned to variable named 'pasted_block' for
201 later editing with '%edit pasted_block'.
202
203 You can also pass a variable name as an argument, e.g. '%paste foo'.
204 This assigns the pasted block to variable 'foo' as string, without
205 executing it (preceding >>> and + is still stripped).
206
207 Options:
208
209 -r: re-executes the block previously entered by cpaste.
210
211 -q: quiet mode: do not echo the pasted text back to the terminal.
212
213 IPython statements (magics, shell escapes) are not supported (yet).
214
215 See also
216 --------
217 cpaste: manually paste code into terminal until you mark its end.
218 """
219 opts, name = self.parse_options(parameter_s, 'rq', mode='string')
220 if 'r' in opts:
221 self.rerun_pasted()
222 return
223 try:
224 block = self.shell.hooks.clipboard_get()
225 except TryNext as clipboard_exc:
226 message = getattr(clipboard_exc, 'args')
227 if message:
228 error(message[0])
229 else:
230 error('Could not get text from the clipboard.')
231 return
232 except ClipboardEmpty:
233 raise UsageError("The clipboard appears to be empty")
234
235 # By default, echo back to terminal unless quiet mode is requested
236 if 'q' not in opts:
237 write = self.shell.write
238 write(self.shell.pycolorize(block))
239 if not block.endswith('\n'):
240 write('\n')
241 write("## -- End pasted text --\n")
242
243 self.store_or_execute(block, name)
244
245 # Class-level: add a '%cls' magic only on Windows
246 if sys.platform == 'win32':
247 @line_magic
248 def cls(self, s):
249 """Clear screen.
250 """
251 os.system("cls")
252
253 #-----------------------------------------------------------------------------
254 # Main class
255 #-----------------------------------------------------------------------------
256
257 class TerminalInteractiveShell(InteractiveShell):
258
259 autoedit_syntax = CBool(False, config=True,
260 help="auto editing of files with syntax errors.")
261 confirm_exit = CBool(True, config=True,
262 help="""
263 Set to confirm when you try to exit IPython with an EOF (Control-D
264 in Unix, Control-Z/Enter in Windows). By typing 'exit' or 'quit',
265 you can force a direct exit without any confirmation.""",
266 )
267 # This display_banner only controls whether or not self.show_banner()
268 # is called when mainloop/interact are called. The default is False
269 # because for the terminal based application, the banner behavior
270 # is controlled by the application.
271 display_banner = CBool(False) # This isn't configurable!
272 embedded = CBool(False)
273 embedded_active = CBool(False)
274 editor = Unicode(get_default_editor(), config=True,
275 help="Set the editor used by IPython (default to $EDITOR/vi/notepad)."
276 )
277 pager = Unicode('less', config=True,
278 help="The shell program to be used for paging.")
279
280 screen_length = Integer(0, config=True,
281 help=
282 """Number of lines of your screen, used to control printing of very
283 long strings. Strings longer than this number of lines will be sent
284 through a pager instead of directly printed. The default value for
285 this is 0, which means IPython will auto-detect your screen size every
286 time it needs to print certain potentially long strings (this doesn't
287 change the behavior of the 'print' keyword, it's only triggered
288 internally). If for some reason this isn't working well (it needs
289 curses support), specify it yourself. Otherwise don't change the
290 default.""",
291 )
292 term_title = CBool(False, config=True,
293 help="Enable auto setting the terminal title."
294 )
295 usage = Unicode(interactive_usage)
296
297 # This `using_paste_magics` is used to detect whether the code is being
298 # executed via paste magics functions
299 using_paste_magics = CBool(False)
300
301 # In the terminal, GUI control is done via PyOS_InputHook
302 @staticmethod
303 def enable_gui(gui=None, app=None):
304 """Switch amongst GUI input hooks by name.
305 """
306 # Deferred import
307 from IPython.lib.inputhook import enable_gui as real_enable_gui
308 try:
309 return real_enable_gui(gui, app)
310 except ValueError as e:
311 raise UsageError("%s" % e)
312
313 system = InteractiveShell.system_raw
314
315 #-------------------------------------------------------------------------
316 # Overrides of init stages
317 #-------------------------------------------------------------------------
318
319 def init_display_formatter(self):
320 super(TerminalInteractiveShell, self).init_display_formatter()
321 # terminal only supports plaintext
322 self.display_formatter.active_types = ['text/plain']
323
324 #-------------------------------------------------------------------------
325 # Things related to the terminal
326 #-------------------------------------------------------------------------
327
328 @property
329 def usable_screen_length(self):
330 if self.screen_length == 0:
331 return 0
332 else:
333 num_lines_bot = self.separate_in.count('\n')+1
334 return self.screen_length - num_lines_bot
335
336 def _term_title_changed(self, name, new_value):
337 self.init_term_title()
338
339 def init_term_title(self):
340 # Enable or disable the terminal title.
341 if self.term_title:
342 toggle_set_term_title(True)
343 set_term_title('IPython: ' + abbrev_cwd())
344 else:
345 toggle_set_term_title(False)
346
347 #-------------------------------------------------------------------------
348 # Things related to aliases
349 #-------------------------------------------------------------------------
350
351 def init_alias(self):
352 # The parent class defines aliases that can be safely used with any
353 # frontend.
354 super(TerminalInteractiveShell, self).init_alias()
355
356 # Now define aliases that only make sense on the terminal, because they
357 # need direct access to the console in a way that we can't emulate in
358 # GUI or web frontend
359 if os.name == 'posix':
360 aliases = [('clear', 'clear'), ('more', 'more'), ('less', 'less'),
361 ('man', 'man')]
362 else :
363 aliases = []
364
365 for name, cmd in aliases:
366 self.alias_manager.soft_define_alias(name, cmd)
367
368 #-------------------------------------------------------------------------
369 # Mainloop and code execution logic
370 #-------------------------------------------------------------------------
371
372 def mainloop(self, display_banner=None):
373 """Start the mainloop.
374
375 If an optional banner argument is given, it will override the
376 internally created default banner.
377 """
378
379 with self.builtin_trap, self.display_trap:
380
381 while 1:
382 try:
383 self.interact(display_banner=display_banner)
384 #self.interact_with_readline()
385 # XXX for testing of a readline-decoupled repl loop, call
386 # interact_with_readline above
387 break
388 except KeyboardInterrupt:
389 # this should not be necessary, but KeyboardInterrupt
390 # handling seems rather unpredictable...
391 self.write("\nKeyboardInterrupt in interact()\n")
392
393 def _replace_rlhist_multiline(self, source_raw, hlen_before_cell):
394 """Store multiple lines as a single entry in history"""
395
396 # do nothing without readline or disabled multiline
397 if not self.has_readline or not self.multiline_history:
398 return hlen_before_cell
399
400 # windows rl has no remove_history_item
401 if not hasattr(self.readline, "remove_history_item"):
402 return hlen_before_cell
403
404 # skip empty cells
405 if not source_raw.rstrip():
406 return hlen_before_cell
407
408 # nothing changed do nothing, e.g. when rl removes consecutive dups
409 hlen = self.readline.get_current_history_length()
410 if hlen == hlen_before_cell:
411 return hlen_before_cell
412
413 for i in range(hlen - hlen_before_cell):
414 self.readline.remove_history_item(hlen - i - 1)
415 stdin_encoding = get_stream_enc(sys.stdin, 'utf-8')
416 self.readline.add_history(py3compat.unicode_to_str(source_raw.rstrip(),
417 stdin_encoding))
418 return self.readline.get_current_history_length()
419
420 def interact(self, display_banner=None):
421 """Closely emulate the interactive Python console."""
422
423 # batch run -> do not interact
424 if self.exit_now:
425 return
426
427 if display_banner is None:
428 display_banner = self.display_banner
429
430 if isinstance(display_banner, py3compat.string_types):
431 self.show_banner(display_banner)
432 elif display_banner:
433 self.show_banner()
434
435 more = False
436
437 if self.has_readline:
438 self.readline_startup_hook(self.pre_readline)
439 hlen_b4_cell = self.readline.get_current_history_length()
440 else:
441 hlen_b4_cell = 0
442 # exit_now is set by a call to %Exit or %Quit, through the
443 # ask_exit callback.
444
445 while not self.exit_now:
446 self.hooks.pre_prompt_hook()
447 if more:
448 try:
449 prompt = self.prompt_manager.render('in2')
450 except:
451 self.showtraceback()
452 if self.autoindent:
453 self.rl_do_indent = True
454
455 else:
456 try:
457 prompt = self.separate_in + self.prompt_manager.render('in')
458 except:
459 self.showtraceback()
460 try:
461 line = self.raw_input(prompt)
462 if self.exit_now:
463 # quick exit on sys.std[in|out] close
464 break
465 if self.autoindent:
466 self.rl_do_indent = False
467
468 except KeyboardInterrupt:
469 #double-guard against keyboardinterrupts during kbdint handling
470 try:
471 self.write('\n' + self.get_exception_only())
472 source_raw = self.input_splitter.raw_reset()
473 hlen_b4_cell = \
474 self._replace_rlhist_multiline(source_raw, hlen_b4_cell)
475 more = False
476 except KeyboardInterrupt:
477 pass
478 except EOFError:
479 if self.autoindent:
480 self.rl_do_indent = False
481 if self.has_readline:
482 self.readline_startup_hook(None)
483 self.write('\n')
484 self.exit()
485 except bdb.BdbQuit:
486 warn('The Python debugger has exited with a BdbQuit exception.\n'
487 'Because of how pdb handles the stack, it is impossible\n'
488 'for IPython to properly format this particular exception.\n'
489 'IPython will resume normal operation.')
490 except:
491 # exceptions here are VERY RARE, but they can be triggered
492 # asynchronously by signal handlers, for example.
493 self.showtraceback()
494 else:
495 try:
496 self.input_splitter.push(line)
497 more = self.input_splitter.push_accepts_more()
498 except SyntaxError:
499 # Run the code directly - run_cell takes care of displaying
500 # the exception.
501 more = False
502 if (self.SyntaxTB.last_syntax_error and
503 self.autoedit_syntax):
504 self.edit_syntax_error()
505 if not more:
506 source_raw = self.input_splitter.raw_reset()
507 self.run_cell(source_raw, store_history=True)
508 hlen_b4_cell = \
509 self._replace_rlhist_multiline(source_raw, hlen_b4_cell)
510
511 # Turn off the exit flag, so the mainloop can be restarted if desired
512 self.exit_now = False
513
514 def raw_input(self, prompt=''):
515 """Write a prompt and read a line.
516
517 The returned line does not include the trailing newline.
518 When the user enters the EOF key sequence, EOFError is raised.
519
520 Parameters
521 ----------
522
523 prompt : str, optional
524 A string to be printed to prompt the user.
525 """
526 # raw_input expects str, but we pass it unicode sometimes
527 prompt = py3compat.cast_bytes_py2(prompt)
528
529 try:
530 line = py3compat.str_to_unicode(self.raw_input_original(prompt))
531 except ValueError:
532 warn("\n********\nYou or a %run:ed script called sys.stdin.close()"
533 " or sys.stdout.close()!\nExiting IPython!\n")
534 self.ask_exit()
535 return ""
536
537 # Try to be reasonably smart about not re-indenting pasted input more
538 # than necessary. We do this by trimming out the auto-indent initial
539 # spaces, if the user's actual input started itself with whitespace.
540 if self.autoindent:
541 if num_ini_spaces(line) > self.indent_current_nsp:
542 line = line[self.indent_current_nsp:]
543 self.indent_current_nsp = 0
544
545 return line
546
547 #-------------------------------------------------------------------------
548 # Methods to support auto-editing of SyntaxErrors.
549 #-------------------------------------------------------------------------
550
551 def edit_syntax_error(self):
552 """The bottom half of the syntax error handler called in the main loop.
553
554 Loop until syntax error is fixed or user cancels.
555 """
556
557 while self.SyntaxTB.last_syntax_error:
558 # copy and clear last_syntax_error
559 err = self.SyntaxTB.clear_err_state()
560 if not self._should_recompile(err):
561 return
562 try:
563 # may set last_syntax_error again if a SyntaxError is raised
564 self.safe_execfile(err.filename,self.user_ns)
565 except:
566 self.showtraceback()
567 else:
568 try:
569 f = open(err.filename)
570 try:
571 # This should be inside a display_trap block and I
572 # think it is.
573 sys.displayhook(f.read())
574 finally:
575 f.close()
576 except:
577 self.showtraceback()
578
579 def _should_recompile(self,e):
580 """Utility routine for edit_syntax_error"""
581
582 if e.filename in ('<ipython console>','<input>','<string>',
583 '<console>','<BackgroundJob compilation>',
584 None):
585
586 return False
587 try:
588 if (self.autoedit_syntax and
589 not self.ask_yes_no('Return to editor to correct syntax error? '
590 '[Y/n] ','y')):
591 return False
592 except EOFError:
593 return False
594
595 def int0(x):
596 try:
597 return int(x)
598 except TypeError:
599 return 0
600 # always pass integer line and offset values to editor hook
601 try:
602 self.hooks.fix_error_editor(e.filename,
603 int0(e.lineno),int0(e.offset),e.msg)
604 except TryNext:
605 warn('Could not open editor')
606 return False
607 return True
608
609 #-------------------------------------------------------------------------
610 # Things related to exiting
611 #-------------------------------------------------------------------------
612
613 def ask_exit(self):
614         """ Ask the shell to exit. Can be overridden and used as a callback. """
615 self.exit_now = True
616
617 def exit(self):
618 """Handle interactive exit.
619
620 This method calls the ask_exit callback."""
621 if self.confirm_exit:
622 if self.ask_yes_no('Do you really want to exit ([y]/n)?','y'):
623 self.ask_exit()
624 else:
625 self.ask_exit()
626
627 #-------------------------------------------------------------------------
628 # Things related to magics
629 #-------------------------------------------------------------------------
630
631 def init_magics(self):
632 super(TerminalInteractiveShell, self).init_magics()
633 self.register_magics(TerminalMagics)
634
635 def showindentationerror(self):
636 super(TerminalInteractiveShell, self).showindentationerror()
637 if not self.using_paste_magics:
638 print("If you want to paste code into IPython, try the "
639 "%paste and %cpaste magic functions.")
640
641
642 InteractiveShellABC.register(TerminalInteractiveShell)
643
[end of IPython/terminal/interactiveshell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| ipython/ipython | 92333e1084ea0d6ff91b55434555e741d2274dc7 | Inspect requests inside a function call should be smarter about what they inspect.
Previously, `func(a, b, <shift-tab>` would give information on `func`, now it gives information on `b`, which is not especially helpful.
This is because we removed logic from the frontend to make it more language agnostic, and we have not yet reimplemented that on the frontend. For 3.1, we should make it at least as smart as 2.x was. The quicky and dirty approach would be a regex; the proper way is tokenising the code.
Ping @mwaskom who brought this up on the mailing list.
| Thanks! I don't actually know how to _use_ any of these packages, so I rely on what IPython tells me they'll do :)
Should note here too that the help also seems to be displaying the `__repr__` for, at least, pandas DataFrames slightly differently in 3.0.rc1, which yields a help popup that is garbled and hides the important bits.
The dataframe reprs sounds like a separate thing - can you file an issue for it? Preferably with screenshots? Thanks.
Done: #7817
More related to this issue:
While implementing a smarter inspector, it would be _great_ if it would work across line breaks. I'm constantly getting bitten by trying to do
``` python
complex_function(some_arg, another_arg, data_frame.some_transformation(),
a_kwarg=a_value, <shift-TAB>
```
And having it not work.
This did not work on the 2.x series either, AFAICT, but if the inspector is going to be reimplemented it would be awesome if it could be added.
If there's smart, tokenising logic to determine what you're inspecting, there's no reason it shouldn't handle multiple lines. Making it smart enough for that might not be a 3.1 thing, though.
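
For reference, here is a minimal, self-contained sketch of the tokenising approach discussed above. It is not IPython's implementation (the real change lands in `IPython/utils/tokenutil.py`, see the patch below); the function name `call_name_at_cursor`, the single-line assumption and the use of the stdlib `tokenize` module (Python 3) are choices made for this sketch only. The idea is to keep a stack of the names that own a still-open parenthesis and prefer the innermost of those over the token sitting directly under the cursor:

``` python
# Sketch only, not IPython's code.
import tokenize
from io import StringIO

def call_name_at_cursor(cell, cursor_pos):
    """Name of the innermost enclosing call at cursor_pos, else the last name seen."""
    names = []       # plain names seen so far (includes keywords; good enough for a sketch)
    call_names = []  # stack of names followed by a '(' that has not been closed yet
    try:
        for tok_type, text, start, end, line in tokenize.generate_tokens(StringIO(cell).readline):
            if tok_type == tokenize.NAME:
                names.append(text)
            elif text == '(' and names:
                call_names.append(names[-1])
            elif text == ')' and call_names:
                call_names.pop()
            if end[0] == 1 and end[1] >= cursor_pos:
                break            # we have read past the cursor (single-line cell assumed)
    except tokenize.TokenError:
        pass                     # unclosed '(' at end of input: the stack is exactly what we want
    if call_names:
        return call_names[-1]
    return names[-1] if names else ''

print(call_name_at_cursor("func(a, b, ", len("func(a, b, ")))   # prints: func
```

The actual fix below applies the same parenthesis stack inside IPython's existing `token_at_cursor`, which already tracks offsets across lines, so the approach is not inherently limited to a single line.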
| 2015-02-19T20:14:23Z | <patch>
diff --git a/IPython/utils/tokenutil.py b/IPython/utils/tokenutil.py
--- a/IPython/utils/tokenutil.py
+++ b/IPython/utils/tokenutil.py
@@ -58,6 +58,9 @@ def token_at_cursor(cell, cursor_pos=0):
Used for introspection.
+ Function calls are prioritized, so the token for the callable will be returned
+ if the cursor is anywhere inside the call.
+
Parameters
----------
@@ -70,6 +73,7 @@ def token_at_cursor(cell, cursor_pos=0):
names = []
tokens = []
offset = 0
+ call_names = []
for tup in generate_tokens(StringIO(cell).readline):
tok = Token(*tup)
@@ -93,6 +97,11 @@ def token_at_cursor(cell, cursor_pos=0):
if tok.text == '=' and names:
# don't inspect the lhs of an assignment
names.pop(-1)
+ if tok.text == '(' and names:
+ # if we are inside a function call, inspect the function
+ call_names.append(names[-1])
+ elif tok.text == ')' and call_names:
+ call_names.pop(-1)
if offset + end_col > cursor_pos:
# we found the cursor, stop reading
@@ -102,7 +111,9 @@ def token_at_cursor(cell, cursor_pos=0):
if tok.token == tokenize2.NEWLINE:
offset += len(tok.line)
- if names:
+ if call_names:
+ return call_names[-1]
+ elif names:
return names[-1]
else:
return ''
</patch> | [] | [] | |||
docker__compose-2878 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Merge build args when using multiple compose files (or when extending services)
Based on the behavior of `environment` and `labels`, as well as `build.image`, `build.context` etc, I would also expect `build.args` to be merged, instead of being replaced.
To give an example:
## Input
**docker-compose.yml:**
``` yaml
version: "2"
services:
my_service:
build:
context: my-app
args:
SOME_VARIABLE: "42"
```
**docker-compose.override.yml:**
``` yaml
version: "2"
services:
my_service:
build:
args:
HTTP_PROXY: http://proxy.somewhere:80
HTTPS_PROXY: http://proxy.somewhere:80
NO_PROXY: somewhere,localhost
```
**my-app/Dockerfile**
``` Dockerfile
# Just needed to be able to use `build:`
FROM busybox:latest
ARG SOME_VARIABLE=xyz
RUN echo "$SOME_VARIABLE" > /etc/example
```
## Current Output
``` bash
$ docker-compose config
networks: {}
services:
my_service:
build:
args:
HTTPS_PROXY: http://proxy.somewhere:80
HTTP_PROXY: http://proxy.somewhere:80
NO_PROXY: somewhere,localhost
context: <project-dir>\my-app
version: '2.0'
volumes: {}
```
## Expected Output
``` bash
$ docker-compose config
networks: {}
services:
my_service:
build:
args:
SOME_VARIABLE: 42 # Note the merged variable here
HTTPS_PROXY: http://proxy.somewhere:80
HTTP_PROXY: http://proxy.somewhere:80
NO_PROXY: somewhere,localhost
context: <project-dir>\my-app
version: '2.0'
volumes: {}
```
## Version Information
``` bash
$ docker-compose version
docker-compose version 1.6.0, build cdb920a
docker-py version: 1.7.0
CPython version: 2.7.11
OpenSSL version: OpenSSL 1.0.2d 9 Jul 2015
```
# Implementation proposal
I mainly want to get clarification on what the desired behavior is, so that I can possibly help implementing it, maybe even for `1.6.1`.
Personally, I'd like the behavior to be to merge the `build.args` key (as outlined above), for a couple of reasons:
- Principle of least surprise/consistency with `environment`, `labels`, `ports` and so on.
- It enables scenarios like the one outlined above, where the images require some transient configuration to build, in addition to other build variables which actually have an influence on the final image.
The scenario that one wants to replace all build args at once is not very likely IMO; why would you define base build variables in the first place if you're going to replace them anyway?
# Alternative behavior: Output a warning
If the behavior should stay the same as it is now, i.e. to fully replace the `build.args` keys, then `docker-compose` should at least output a warning IMO. It took me some time to figure out that `docker-compose` was ignoring the build args in the base `docker-compose.yml` file.
</issue>
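
The merge semantics being asked for amount to a per-key dictionary merge, the same treatment `environment` and `labels` already receive. The sketch below is plain Python rather than Compose internals; `merge_build_args` is a made-up name, and a real implementation would also have to normalise the list form (`- KEY=value`) that `build.args` accepts before merging:

``` python
# Sketch of the requested behaviour: override args are laid on top of the base
# args key by key instead of replacing the whole mapping.
def merge_build_args(base, override):
    merged = dict(base or {})
    merged.update(override or {})
    return merged

base_args = {'SOME_VARIABLE': '42'}
override_args = {
    'HTTP_PROXY': 'http://proxy.somewhere:80',
    'HTTPS_PROXY': 'http://proxy.somewhere:80',
    'NO_PROXY': 'somewhere,localhost',
}
print(merge_build_args(base_args, override_args))
# contains SOME_VARIABLE from the base file plus the three proxy variables
# from the override file (key order may vary)
```

With that rule, the expected output shown in the issue follows directly: the override file's proxy variables are added and `SOME_VARIABLE` from the base file survives.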
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 ![Docker Compose](logo.png?raw=true "Docker Compose Logo")
4
5 Compose is a tool for defining and running multi-container Docker applications.
6 With Compose, you use a Compose file to configure your application's services.
7 Then, using a single command, you create and start all the services
8 from your configuration. To learn more about all the features of Compose
9 see [the list of features](https://github.com/docker/compose/blob/release/docs/overview.md#features).
10
11 Compose is great for development, testing, and staging environments, as well as
12 CI workflows. You can learn more about each case in
13 [Common Use Cases](https://github.com/docker/compose/blob/release/docs/overview.md#common-use-cases).
14
15 Using Compose is basically a three-step process.
16
17 1. Define your app's environment with a `Dockerfile` so it can be
18 reproduced anywhere.
19 2. Define the services that make up your app in `docker-compose.yml` so
20 they can be run together in an isolated environment:
21 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
22
23 A `docker-compose.yml` looks like this:
24
25 web:
26 build: .
27 ports:
28 - "5000:5000"
29 volumes:
30 - .:/code
31 links:
32 - redis
33 redis:
34 image: redis
35
36 For more information about the Compose file, see the
37 [Compose file reference](https://github.com/docker/compose/blob/release/docs/compose-file.md)
38
39 Compose has commands for managing the whole lifecycle of your application:
40
41 * Start, stop and rebuild services
42 * View the status of running services
43 * Stream the log output of running services
44 * Run a one-off command on a service
45
46 Installation and documentation
47 ------------------------------
48
49 - Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
50 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
51 - Code repository for Compose is on [Github](https://github.com/docker/compose)
52 - If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new)
53
54 Contributing
55 ------------
56
57 [![Build Status](http://jenkins.dockerproject.org/buildStatus/icon?job=Compose%20Master)](http://jenkins.dockerproject.org/job/Compose%20Master/)
58
59 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
60
61 Releasing
62 ---------
63
64 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md).
65
[end of README.md]
[start of compose/cli/main.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import contextlib
6 import json
7 import logging
8 import re
9 import sys
10 from inspect import getdoc
11 from operator import attrgetter
12
13 from docker.errors import APIError
14 from requests.exceptions import ReadTimeout
15
16 from . import signals
17 from .. import __version__
18 from ..config import config
19 from ..config import ConfigurationError
20 from ..config import parse_environment
21 from ..config.serialize import serialize_config
22 from ..const import DEFAULT_TIMEOUT
23 from ..const import HTTP_TIMEOUT
24 from ..const import IS_WINDOWS_PLATFORM
25 from ..progress_stream import StreamOutputError
26 from ..project import NoSuchService
27 from ..service import BuildError
28 from ..service import ConvergenceStrategy
29 from ..service import ImageType
30 from ..service import NeedsBuildError
31 from .command import friendly_error_message
32 from .command import get_config_path_from_options
33 from .command import project_from_options
34 from .docopt_command import DocoptCommand
35 from .docopt_command import NoSuchCommand
36 from .errors import UserError
37 from .formatter import ConsoleWarningFormatter
38 from .formatter import Formatter
39 from .log_printer import LogPrinter
40 from .utils import get_version_info
41 from .utils import yesno
42
43
44 if not IS_WINDOWS_PLATFORM:
45 from dockerpty.pty import PseudoTerminal, RunOperation
46
47 log = logging.getLogger(__name__)
48 console_handler = logging.StreamHandler(sys.stderr)
49
50
51 def main():
52 setup_logging()
53 try:
54 command = TopLevelCommand()
55 command.sys_dispatch()
56 except KeyboardInterrupt:
57 log.error("Aborting.")
58 sys.exit(1)
59 except (UserError, NoSuchService, ConfigurationError) as e:
60 log.error(e.msg)
61 sys.exit(1)
62 except NoSuchCommand as e:
63 commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))
64 log.error("No such command: %s\n\n%s", e.command, commands)
65 sys.exit(1)
66 except APIError as e:
67 log.error(e.explanation)
68 sys.exit(1)
69 except BuildError as e:
70 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
71 sys.exit(1)
72 except StreamOutputError as e:
73 log.error(e)
74 sys.exit(1)
75 except NeedsBuildError as e:
76 log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name)
77 sys.exit(1)
78 except ReadTimeout as e:
79 log.error(
80 "An HTTP request took too long to complete. Retry with --verbose to obtain debug information.\n"
81 "If you encounter this issue regularly because of slow network conditions, consider setting "
82 "COMPOSE_HTTP_TIMEOUT to a higher value (current value: %s)." % HTTP_TIMEOUT
83 )
84 sys.exit(1)
85
86
87 def setup_logging():
88 root_logger = logging.getLogger()
89 root_logger.addHandler(console_handler)
90 root_logger.setLevel(logging.DEBUG)
91
92 # Disable requests logging
93 logging.getLogger("requests").propagate = False
94
95
96 def setup_console_handler(handler, verbose):
97 if handler.stream.isatty():
98 format_class = ConsoleWarningFormatter
99 else:
100 format_class = logging.Formatter
101
102 if verbose:
103 handler.setFormatter(format_class('%(name)s.%(funcName)s: %(message)s'))
104 handler.setLevel(logging.DEBUG)
105 else:
106 handler.setFormatter(format_class())
107 handler.setLevel(logging.INFO)
108
109
110 # stolen from docopt master
111 def parse_doc_section(name, source):
112 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
113 re.IGNORECASE | re.MULTILINE)
114 return [s.strip() for s in pattern.findall(source)]
115
116
117 class TopLevelCommand(DocoptCommand):
118 """Define and run multi-container applications with Docker.
119
120 Usage:
121 docker-compose [-f=<arg>...] [options] [COMMAND] [ARGS...]
122 docker-compose -h|--help
123
124 Options:
125 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
126 -p, --project-name NAME Specify an alternate project name (default: directory name)
127 --verbose Show more output
128 -v, --version Print version and exit
129
130 Commands:
131 build Build or rebuild services
132 config Validate and view the compose file
133 create Create services
134 down Stop and remove containers, networks, images, and volumes
135 events Receive real time events from containers
136 help Get help on a command
137 kill Kill containers
138 logs View output from containers
139 pause Pause services
140 port Print the public port for a port binding
141 ps List containers
142 pull Pulls service images
143 restart Restart services
144 rm Remove stopped containers
145 run Run a one-off command
146 scale Set number of containers for a service
147 start Start services
148 stop Stop services
149 unpause Unpause services
150 up Create and start containers
151 version Show the Docker-Compose version information
152 """
153 base_dir = '.'
154
155 def docopt_options(self):
156 options = super(TopLevelCommand, self).docopt_options()
157 options['version'] = get_version_info('compose')
158 return options
159
160 def perform_command(self, options, handler, command_options):
161 setup_console_handler(console_handler, options.get('--verbose'))
162
163 if options['COMMAND'] in ('help', 'version'):
164 # Skip looking up the compose file.
165 handler(None, command_options)
166 return
167
168 if options['COMMAND'] == 'config':
169 handler(options, command_options)
170 return
171
172 project = project_from_options(self.base_dir, options)
173 with friendly_error_message():
174 handler(project, command_options)
175
176 def build(self, project, options):
177 """
178 Build or rebuild services.
179
180 Services are built once and then tagged as `project_service`,
181 e.g. `composetest_db`. If you change a service's `Dockerfile` or the
182 contents of its build directory, you can run `docker-compose build` to rebuild it.
183
184 Usage: build [options] [SERVICE...]
185
186 Options:
187 --force-rm Always remove intermediate containers.
188 --no-cache Do not use cache when building the image.
189 --pull Always attempt to pull a newer version of the image.
190 """
191 project.build(
192 service_names=options['SERVICE'],
193 no_cache=bool(options.get('--no-cache', False)),
194 pull=bool(options.get('--pull', False)),
195 force_rm=bool(options.get('--force-rm', False)))
196
197 def config(self, config_options, options):
198 """
199 Validate and view the compose file.
200
201 Usage: config [options]
202
203 Options:
204 -q, --quiet Only validate the configuration, don't print
205 anything.
206 --services Print the service names, one per line.
207
208 """
209 config_path = get_config_path_from_options(config_options)
210 compose_config = config.load(config.find(self.base_dir, config_path))
211
212 if options['--quiet']:
213 return
214
215 if options['--services']:
216 print('\n'.join(service['name'] for service in compose_config.services))
217 return
218
219 print(serialize_config(compose_config))
220
221 def create(self, project, options):
222 """
223 Creates containers for a service.
224
225 Usage: create [options] [SERVICE...]
226
227 Options:
228 --force-recreate Recreate containers even if their configuration and
229 image haven't changed. Incompatible with --no-recreate.
230 --no-recreate If containers already exist, don't recreate them.
231 Incompatible with --force-recreate.
232 --no-build Don't build an image, even if it's missing
233 """
234 service_names = options['SERVICE']
235
236 project.create(
237 service_names=service_names,
238 strategy=convergence_strategy_from_opts(options),
239 do_build=not options['--no-build']
240 )
241
242 def down(self, project, options):
243 """
244 Stop containers and remove containers, networks, volumes, and images
245 created by `up`. Only containers and networks are removed by default.
246
247 Usage: down [options]
248
249 Options:
250 --rmi type Remove images, type may be one of: 'all' to remove
251 all images, or 'local' to remove only images that
252                             don't have a custom name set by the `image` field
253 -v, --volumes Remove data volumes
254 """
255 image_type = image_type_from_opt('--rmi', options['--rmi'])
256 project.down(image_type, options['--volumes'])
257
258 def events(self, project, options):
259 """
260 Receive real time events from containers.
261
262 Usage: events [options] [SERVICE...]
263
264 Options:
265 --json Output events as a stream of json objects
266 """
267 def format_event(event):
268 attributes = ["%s=%s" % item for item in event['attributes'].items()]
269 return ("{time} {type} {action} {id} ({attrs})").format(
270 attrs=", ".join(sorted(attributes)),
271 **event)
272
273 def json_format_event(event):
274 event['time'] = event['time'].isoformat()
275 return json.dumps(event)
276
277 for event in project.events():
278 formatter = json_format_event if options['--json'] else format_event
279 print(formatter(event))
280 sys.stdout.flush()
281
282 def help(self, project, options):
283 """
284 Get help on a command.
285
286 Usage: help COMMAND
287 """
288 handler = self.get_handler(options['COMMAND'])
289 raise SystemExit(getdoc(handler))
290
291 def kill(self, project, options):
292 """
293 Force stop service containers.
294
295 Usage: kill [options] [SERVICE...]
296
297 Options:
298 -s SIGNAL SIGNAL to send to the container.
299 Default signal is SIGKILL.
300 """
301 signal = options.get('-s', 'SIGKILL')
302
303 project.kill(service_names=options['SERVICE'], signal=signal)
304
305 def logs(self, project, options):
306 """
307 View output from containers.
308
309 Usage: logs [options] [SERVICE...]
310
311 Options:
312 --no-color Produce monochrome output.
313 """
314 containers = project.containers(service_names=options['SERVICE'], stopped=True)
315
316 monochrome = options['--no-color']
317 print("Attaching to", list_containers(containers))
318 LogPrinter(containers, monochrome=monochrome).run()
319
320 def pause(self, project, options):
321 """
322 Pause services.
323
324 Usage: pause [SERVICE...]
325 """
326 containers = project.pause(service_names=options['SERVICE'])
327 exit_if(not containers, 'No containers to pause', 1)
328
329 def port(self, project, options):
330 """
331 Print the public port for a port binding.
332
333 Usage: port [options] SERVICE PRIVATE_PORT
334
335 Options:
336 --protocol=proto tcp or udp [default: tcp]
337 --index=index index of the container if there are multiple
338 instances of a service [default: 1]
339 """
340 index = int(options.get('--index'))
341 service = project.get_service(options['SERVICE'])
342 try:
343 container = service.get_container(number=index)
344 except ValueError as e:
345 raise UserError(str(e))
346 print(container.get_local_port(
347 options['PRIVATE_PORT'],
348 protocol=options.get('--protocol') or 'tcp') or '')
349
350 def ps(self, project, options):
351 """
352 List containers.
353
354 Usage: ps [options] [SERVICE...]
355
356 Options:
357 -q Only display IDs
358 """
359 containers = sorted(
360 project.containers(service_names=options['SERVICE'], stopped=True) +
361 project.containers(service_names=options['SERVICE'], one_off=True),
362 key=attrgetter('name'))
363
364 if options['-q']:
365 for container in containers:
366 print(container.id)
367 else:
368 headers = [
369 'Name',
370 'Command',
371 'State',
372 'Ports',
373 ]
374 rows = []
375 for container in containers:
376 command = container.human_readable_command
377 if len(command) > 30:
378 command = '%s ...' % command[:26]
379 rows.append([
380 container.name,
381 command,
382 container.human_readable_state,
383 container.human_readable_ports,
384 ])
385 print(Formatter().table(headers, rows))
386
387 def pull(self, project, options):
388 """
389 Pulls images for services.
390
391 Usage: pull [options] [SERVICE...]
392
393 Options:
394             --ignore-pull-failures  Pull what it can and ignore images with pull failures.
395 """
396 project.pull(
397 service_names=options['SERVICE'],
398 ignore_pull_failures=options.get('--ignore-pull-failures')
399 )
400
401 def rm(self, project, options):
402 """
403 Remove stopped service containers.
404
405 By default, volumes attached to containers will not be removed. You can see all
406 volumes with `docker volume ls`.
407
408 Any data which is not in a volume will be lost.
409
410 Usage: rm [options] [SERVICE...]
411
412 Options:
413 -f, --force Don't ask to confirm removal
414 -v Remove volumes associated with containers
415 """
416 all_containers = project.containers(service_names=options['SERVICE'], stopped=True)
417 stopped_containers = [c for c in all_containers if not c.is_running]
418
419 if len(stopped_containers) > 0:
420 print("Going to remove", list_containers(stopped_containers))
421 if options.get('--force') \
422 or yesno("Are you sure? [yN] ", default=False):
423 project.remove_stopped(
424 service_names=options['SERVICE'],
425 v=options.get('-v', False)
426 )
427 else:
428 print("No stopped containers")
429
430 def run(self, project, options):
431 """
432 Run a one-off command on a service.
433
434 For example:
435
436 $ docker-compose run web python manage.py shell
437
438 By default, linked services will be started, unless they are already
439 running. If you do not want to start linked services, use
440 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`.
441
442 Usage: run [options] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
443
444 Options:
445 -d Detached mode: Run container in the background, print
446 new container name.
447 --name NAME Assign a name to the container
448 --entrypoint CMD Override the entrypoint of the image.
449 -e KEY=VAL Set an environment variable (can be used multiple times)
450 -u, --user="" Run as specified username or uid
451 --no-deps Don't start linked services.
452 --rm Remove container after run. Ignored in detached mode.
453 -p, --publish=[] Publish a container's port(s) to the host
454 --service-ports Run command with the service's ports enabled and mapped
455 to the host.
456 -T Disable pseudo-tty allocation. By default `docker-compose run`
457 allocates a TTY.
458 """
459 service = project.get_service(options['SERVICE'])
460 detach = options['-d']
461
462 if IS_WINDOWS_PLATFORM and not detach:
463 raise UserError(
464 "Interactive mode is not yet supported on Windows.\n"
465 "Please pass the -d flag when using `docker-compose run`."
466 )
467
468 if options['COMMAND']:
469 command = [options['COMMAND']] + options['ARGS']
470 else:
471 command = service.options.get('command')
472
473 container_options = {
474 'command': command,
475 'tty': not (detach or options['-T'] or not sys.stdin.isatty()),
476 'stdin_open': not detach,
477 'detach': detach,
478 }
479
480 if options['-e']:
481 container_options['environment'] = parse_environment(options['-e'])
482
483 if options['--entrypoint']:
484 container_options['entrypoint'] = options.get('--entrypoint')
485
486 if options['--rm']:
487 container_options['restart'] = None
488
489 if options['--user']:
490 container_options['user'] = options.get('--user')
491
492 if not options['--service-ports']:
493 container_options['ports'] = []
494
495 if options['--publish']:
496 container_options['ports'] = options.get('--publish')
497
498 if options['--publish'] and options['--service-ports']:
499 raise UserError(
500 'Service port mapping and manual port mapping '
501                 'cannot be used together'
502 )
503
504 if options['--name']:
505 container_options['name'] = options['--name']
506
507 run_one_off_container(container_options, project, service, options)
508
509 def scale(self, project, options):
510 """
511 Set number of containers to run for a service.
512
513 Numbers are specified in the form `service=num` as arguments.
514 For example:
515
516 $ docker-compose scale web=2 worker=3
517
518 Usage: scale [options] [SERVICE=NUM...]
519
520 Options:
521 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
522 (default: 10)
523 """
524 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
525
526 for s in options['SERVICE=NUM']:
527 if '=' not in s:
528 raise UserError('Arguments to scale should be in the form service=num')
529 service_name, num = s.split('=', 1)
530 try:
531 num = int(num)
532 except ValueError:
533 raise UserError('Number of containers for service "%s" is not a '
534 'number' % service_name)
535 project.get_service(service_name).scale(num, timeout=timeout)
536
537 def start(self, project, options):
538 """
539 Start existing containers.
540
541 Usage: start [SERVICE...]
542 """
543 containers = project.start(service_names=options['SERVICE'])
544 exit_if(not containers, 'No containers to start', 1)
545
546 def stop(self, project, options):
547 """
548 Stop running containers without removing them.
549
550 They can be started again with `docker-compose start`.
551
552 Usage: stop [options] [SERVICE...]
553
554 Options:
555 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
556 (default: 10)
557 """
558 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
559 project.stop(service_names=options['SERVICE'], timeout=timeout)
560
561 def restart(self, project, options):
562 """
563 Restart running containers.
564
565 Usage: restart [options] [SERVICE...]
566
567 Options:
568 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
569 (default: 10)
570 """
571 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
572 containers = project.restart(service_names=options['SERVICE'], timeout=timeout)
573 exit_if(not containers, 'No containers to restart', 1)
574
575 def unpause(self, project, options):
576 """
577 Unpause services.
578
579 Usage: unpause [SERVICE...]
580 """
581 containers = project.unpause(service_names=options['SERVICE'])
582 exit_if(not containers, 'No containers to unpause', 1)
583
584 def up(self, project, options):
585 """
586 Builds, (re)creates, starts, and attaches to containers for a service.
587
588 Unless they are already running, this command also starts any linked services.
589
590 The `docker-compose up` command aggregates the output of each container. When
591 the command exits, all containers are stopped. Running `docker-compose up -d`
592 starts the containers in the background and leaves them running.
593
594 If there are existing containers for a service, and the service's configuration
595 or image was changed after the container's creation, `docker-compose up` picks
596 up the changes by stopping and recreating the containers (preserving mounted
597 volumes). To prevent Compose from picking up changes, use the `--no-recreate`
598 flag.
599
600 If you want to force Compose to stop and recreate all containers, use the
601 `--force-recreate` flag.
602
603 Usage: up [options] [SERVICE...]
604
605 Options:
606 -d Detached mode: Run containers in the background,
607 print new container names.
608 Incompatible with --abort-on-container-exit.
609 --no-color Produce monochrome output.
610 --no-deps Don't start linked services.
611 --force-recreate Recreate containers even if their configuration
612 and image haven't changed.
613 Incompatible with --no-recreate.
614 --no-recreate If containers already exist, don't recreate them.
615 Incompatible with --force-recreate.
616 --no-build Don't build an image, even if it's missing
617 --abort-on-container-exit Stops all containers if any container was stopped.
618 Incompatible with -d.
619 -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown
620 when attached or when containers are already
621 running. (default: 10)
622 """
623 monochrome = options['--no-color']
624 start_deps = not options['--no-deps']
625 cascade_stop = options['--abort-on-container-exit']
626 service_names = options['SERVICE']
627 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
628 detached = options.get('-d')
629
630 if detached and cascade_stop:
631 raise UserError("--abort-on-container-exit and -d cannot be combined.")
632
633 with up_shutdown_context(project, service_names, timeout, detached):
634 to_attach = project.up(
635 service_names=service_names,
636 start_deps=start_deps,
637 strategy=convergence_strategy_from_opts(options),
638 do_build=not options['--no-build'],
639 timeout=timeout,
640 detached=detached)
641
642 if detached:
643 return
644 log_printer = build_log_printer(to_attach, service_names, monochrome, cascade_stop)
645 print("Attaching to", list_containers(log_printer.containers))
646 log_printer.run()
647
648 def version(self, project, options):
649 """
650         Show version information
651
652 Usage: version [--short]
653
654 Options:
655 --short Shows only Compose's version number.
656 """
657 if options['--short']:
658 print(__version__)
659 else:
660 print(get_version_info('full'))
661
662
663 def convergence_strategy_from_opts(options):
664 no_recreate = options['--no-recreate']
665 force_recreate = options['--force-recreate']
666 if force_recreate and no_recreate:
667 raise UserError("--force-recreate and --no-recreate cannot be combined.")
668
669 if force_recreate:
670 return ConvergenceStrategy.always
671
672 if no_recreate:
673 return ConvergenceStrategy.never
674
675 return ConvergenceStrategy.changed
676
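# Editorial sketch (not part of the original source): the flag-to-strategy
# mapping performed by convergence_strategy_from_opts(), using hand-built
# option dicts in place of real docopt output.
def _example_convergence_strategy_from_opts():
    always = convergence_strategy_from_opts(
        {'--force-recreate': True, '--no-recreate': False})
    never = convergence_strategy_from_opts(
        {'--force-recreate': False, '--no-recreate': True})
    changed = convergence_strategy_from_opts(
        {'--force-recreate': False, '--no-recreate': False})
    assert always == ConvergenceStrategy.always
    assert never == ConvergenceStrategy.never
    assert changed == ConvergenceStrategy.changed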
677
678 def image_type_from_opt(flag, value):
679 if not value:
680 return ImageType.none
681 try:
682 return ImageType[value]
683 except KeyError:
684 raise UserError("%s flag must be one of: all, local" % flag)
685
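# Editorial sketch (not part of the original source): how the --rmi value maps
# onto ImageType members. This assumes ImageType defines 'none', 'local' and
# 'all', which is what the error message above implies.
def _example_image_type_from_opt():
    assert image_type_from_opt('--rmi', None) == ImageType.none
    assert image_type_from_opt('--rmi', 'local') == ImageType.local
    assert image_type_from_opt('--rmi', 'all') == ImageType.all
    # Anything else raises UserError("--rmi flag must be one of: all, local")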
686
687 def run_one_off_container(container_options, project, service, options):
688 if not options['--no-deps']:
689 deps = service.get_linked_service_names()
690 if deps:
691 project.up(
692 service_names=deps,
693 start_deps=True,
694 strategy=ConvergenceStrategy.never)
695
696 project.initialize()
697
698 container = service.create_container(
699 quiet=True,
700 one_off=True,
701 **container_options)
702
703 if options['-d']:
704 service.start_container(container)
705 print(container.name)
706 return
707
708 def remove_container(force=False):
709 if options['--rm']:
710 project.client.remove_container(container.id, force=True)
711
712 signals.set_signal_handler_to_shutdown()
713 try:
714 try:
715 operation = RunOperation(
716 project.client,
717 container.id,
718 interactive=not options['-T'],
719 logs=False,
720 )
721 pty = PseudoTerminal(project.client, operation)
722 sockets = pty.sockets()
723 service.start_container(container)
724 pty.start(sockets)
725 exit_code = container.wait()
726 except signals.ShutdownException:
727 project.client.stop(container.id)
728 exit_code = 1
729 except signals.ShutdownException:
730 project.client.kill(container.id)
731 remove_container(force=True)
732 sys.exit(2)
733
734 remove_container()
735 sys.exit(exit_code)
736
737
738 def build_log_printer(containers, service_names, monochrome, cascade_stop):
739 if service_names:
740 containers = [
741 container
742 for container in containers if container.service in service_names
743 ]
744 return LogPrinter(containers, monochrome=monochrome, cascade_stop=cascade_stop)
745
746
747 @contextlib.contextmanager
748 def up_shutdown_context(project, service_names, timeout, detached):
749 if detached:
750 yield
751 return
752
753 signals.set_signal_handler_to_shutdown()
754 try:
755 try:
756 yield
757 except signals.ShutdownException:
758 print("Gracefully stopping... (press Ctrl+C again to force)")
759 project.stop(service_names=service_names, timeout=timeout)
760 except signals.ShutdownException:
761 project.kill(service_names=service_names)
762 sys.exit(2)
763
764
765 def list_containers(containers):
766 return ", ".join(c.name for c in containers)
767
768
769 def exit_if(condition, message, exit_code):
770 if condition:
771 log.error(message)
772 raise SystemExit(exit_code)
773
[end of compose/cli/main.py]
[start of compose/cli/utils.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import unicode_literals
4
5 import os
6 import platform
7 import ssl
8 import subprocess
9
10 import docker
11 from six.moves import input
12
13 import compose
14
15
16 def yesno(prompt, default=None):
17 """
18 Prompt the user for a yes or no.
19
20 Can optionally specify a default value, which will only be
21 used if they enter a blank line.
22
23 Unrecognised input (anything other than "y", "n", "yes",
24 "no" or "") will return None.
25 """
26 answer = input(prompt).strip().lower()
27
28 if answer == "y" or answer == "yes":
29 return True
30 elif answer == "n" or answer == "no":
31 return False
32 elif answer == "":
33 return default
34 else:
35 return None
36
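# Editorial sketch (not part of the original source): typical use of yesno(),
# mirroring the confirmation prompt used by `docker-compose rm`. An empty
# answer falls back to the default; unrecognised input returns None.
def _example_yesno():
    confirmed = yesno("Are you sure? [yN] ", default=False)
    if confirmed:
        print("removing...")
    elif confirmed is None:
        print("please answer y or n")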
37
38 def call_silently(*args, **kwargs):
39 """
40 Like subprocess.call(), but redirects stdout and stderr to /dev/null.
41 """
42 with open(os.devnull, 'w') as shutup:
43 try:
44 return subprocess.call(*args, stdout=shutup, stderr=shutup, **kwargs)
45 except WindowsError:
46 # On Windows, subprocess.call() can still raise exceptions. Normalize
47 # to POSIXy behaviour by returning a nonzero exit code.
48 return 1
49
50
51 def is_mac():
52 return platform.system() == 'Darwin'
53
54
55 def is_ubuntu():
56 return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu'
57
58
59 def get_version_info(scope):
60 versioninfo = 'docker-compose version {}, build {}'.format(
61 compose.__version__,
62 get_build_version())
63
64 if scope == 'compose':
65 return versioninfo
66 if scope == 'full':
67 return (
68 "{}\n"
69 "docker-py version: {}\n"
70 "{} version: {}\n"
71 "OpenSSL version: {}"
72 ).format(
73 versioninfo,
74 docker.version,
75 platform.python_implementation(),
76 platform.python_version(),
77 ssl.OPENSSL_VERSION)
78
79 raise ValueError("{} is not a valid version scope".format(scope))
80
81
82 def get_build_version():
83 filename = os.path.join(os.path.dirname(compose.__file__), 'GITSHA')
84 if not os.path.exists(filename):
85 return 'unknown'
86
87 with open(filename) as fh:
88 return fh.read().strip()
89
[end of compose/cli/utils.py]
[start of compose/config/config.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import codecs
5 import functools
6 import logging
7 import operator
8 import os
9 import string
10 import sys
11 from collections import namedtuple
12
13 import six
14 import yaml
15 from cached_property import cached_property
16
17 from ..const import COMPOSEFILE_V1 as V1
18 from ..const import COMPOSEFILE_V2_0 as V2_0
19 from .errors import CircularReference
20 from .errors import ComposeFileNotFound
21 from .errors import ConfigurationError
22 from .errors import VERSION_EXPLANATION
23 from .interpolation import interpolate_environment_variables
24 from .sort_services import get_container_name_from_network_mode
25 from .sort_services import get_service_name_from_network_mode
26 from .sort_services import sort_service_dicts
27 from .types import parse_extra_hosts
28 from .types import parse_restart_spec
29 from .types import ServiceLink
30 from .types import VolumeFromSpec
31 from .types import VolumeSpec
32 from .validation import match_named_volumes
33 from .validation import validate_against_fields_schema
34 from .validation import validate_against_service_schema
35 from .validation import validate_depends_on
36 from .validation import validate_extends_file_path
37 from .validation import validate_network_mode
38 from .validation import validate_top_level_object
39 from .validation import validate_top_level_service_objects
40 from .validation import validate_ulimits
41
42
43 DOCKER_CONFIG_KEYS = [
44 'cap_add',
45 'cap_drop',
46 'cgroup_parent',
47 'command',
48 'cpu_quota',
49 'cpu_shares',
50 'cpuset',
51 'detach',
52 'devices',
53 'dns',
54 'dns_search',
55 'domainname',
56 'entrypoint',
57 'env_file',
58 'environment',
59 'extra_hosts',
60 'hostname',
61 'image',
62 'ipc',
63 'labels',
64 'links',
65 'mac_address',
66 'mem_limit',
67 'memswap_limit',
68 'net',
69 'pid',
70 'ports',
71 'privileged',
72 'read_only',
73 'restart',
74 'security_opt',
75 'stdin_open',
76 'stop_signal',
77 'tty',
78 'user',
79 'volume_driver',
80 'volumes',
81 'volumes_from',
82 'working_dir',
83 ]
84
85 ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
86 'build',
87 'container_name',
88 'dockerfile',
89 'logging',
90 'network_mode',
91 ]
92
93 DOCKER_VALID_URL_PREFIXES = (
94 'http://',
95 'https://',
96 'git://',
97 'github.com/',
98 'git@',
99 )
100
101 SUPPORTED_FILENAMES = [
102 'docker-compose.yml',
103 'docker-compose.yaml',
104 ]
105
106 DEFAULT_OVERRIDE_FILENAME = 'docker-compose.override.yml'
107
108
109 log = logging.getLogger(__name__)
110
111
112 class ConfigDetails(namedtuple('_ConfigDetails', 'working_dir config_files')):
113 """
114 :param working_dir: the directory to use for relative paths in the config
115 :type working_dir: string
116 :param config_files: list of configuration files to load
117 :type config_files: list of :class:`ConfigFile`
118 """
119
120
121 class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
122 """
123 :param filename: filename of the config file
124 :type filename: string
125 :param config: contents of the config file
126 :type config: :class:`dict`
127 """
128
129 @classmethod
130 def from_filename(cls, filename):
131 return cls(filename, load_yaml(filename))
132
133 @cached_property
134 def version(self):
135 if 'version' not in self.config:
136 return V1
137
138 version = self.config['version']
139
140 if isinstance(version, dict):
141 log.warn('Unexpected type for "version" key in "{}". Assuming '
142 '"version" is the name of a service, and defaulting to '
143 'Compose file version 1.'.format(self.filename))
144 return V1
145
146 if not isinstance(version, six.string_types):
147 raise ConfigurationError(
148 'Version in "{}" is invalid - it should be a string.'
149 .format(self.filename))
150
151 if version == '1':
152 raise ConfigurationError(
153 'Version in "{}" is invalid. {}'
154 .format(self.filename, VERSION_EXPLANATION))
155
156 if version == '2':
157 version = V2_0
158
159 if version != V2_0:
160 raise ConfigurationError(
161 'Version in "{}" is unsupported. {}'
162 .format(self.filename, VERSION_EXPLANATION))
163
164 return version
165
166 def get_service(self, name):
167 return self.get_service_dicts()[name]
168
169 def get_service_dicts(self):
170 return self.config if self.version == V1 else self.config.get('services', {})
171
172 def get_volumes(self):
173 return {} if self.version == V1 else self.config.get('volumes', {})
174
175 def get_networks(self):
176 return {} if self.version == V1 else self.config.get('networks', {})
177
178
179 class Config(namedtuple('_Config', 'version services volumes networks')):
180 """
181 :param version: configuration version
182 :type version: int
183 :param services: List of service description dictionaries
184 :type services: :class:`list`
185 :param volumes: Dictionary mapping volume names to description dictionaries
186 :type volumes: :class:`dict`
187 :param networks: Dictionary mapping network names to description dictionaries
188 :type networks: :class:`dict`
189 """
190
191
192 class ServiceConfig(namedtuple('_ServiceConfig', 'working_dir filename name config')):
193
194 @classmethod
195 def with_abs_paths(cls, working_dir, filename, name, config):
196 if not working_dir:
197 raise ValueError("No working_dir for ServiceConfig.")
198
199 return cls(
200 os.path.abspath(working_dir),
201 os.path.abspath(filename) if filename else filename,
202 name,
203 config)
204
205
206 def find(base_dir, filenames):
207 if filenames == ['-']:
208 return ConfigDetails(
209 os.getcwd(),
210 [ConfigFile(None, yaml.safe_load(sys.stdin))])
211
212 if filenames:
213 filenames = [os.path.join(base_dir, f) for f in filenames]
214 else:
215 filenames = get_default_config_files(base_dir)
216
217 log.debug("Using configuration files: {}".format(",".join(filenames)))
218 return ConfigDetails(
219 os.path.dirname(filenames[0]),
220 [ConfigFile.from_filename(f) for f in filenames])
221
222
223 def validate_config_version(config_files):
224 main_file = config_files[0]
225 validate_top_level_object(main_file)
226 for next_file in config_files[1:]:
227 validate_top_level_object(next_file)
228
229 if main_file.version != next_file.version:
230 raise ConfigurationError(
231 "Version mismatch: file {0} specifies version {1} but "
232 "extension file {2} uses version {3}".format(
233 main_file.filename,
234 main_file.version,
235 next_file.filename,
236 next_file.version))
237
238
239 def get_default_config_files(base_dir):
240 (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir)
241
242 if not candidates:
243 raise ComposeFileNotFound(SUPPORTED_FILENAMES)
244
245 winner = candidates[0]
246
247 if len(candidates) > 1:
248 log.warn("Found multiple config files with supported names: %s", ", ".join(candidates))
249 log.warn("Using %s\n", winner)
250
251 return [os.path.join(path, winner)] + get_default_override_file(path)
252
253
254 def get_default_override_file(path):
255 override_filename = os.path.join(path, DEFAULT_OVERRIDE_FILENAME)
256 return [override_filename] if os.path.exists(override_filename) else []
257
258
259 def find_candidates_in_parent_dirs(filenames, path):
260 """
261 Given a directory path to start, looks for filenames in the
262 directory, and then each parent directory successively,
263 until found.
264
265 Returns tuple (candidates, path).
266 """
267 candidates = [filename for filename in filenames
268 if os.path.exists(os.path.join(path, filename))]
269
270 if not candidates:
271 parent_dir = os.path.join(path, '..')
272 if os.path.abspath(parent_dir) != os.path.abspath(path):
273 return find_candidates_in_parent_dirs(filenames, parent_dir)
274
275 return (candidates, path)
276
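# Editorial sketch (not part of the original source): starting from a nested
# working directory, the search walks upwards until one of the supported
# filenames is found (or the filesystem root is reached).
def _example_find_candidates():
    # e.g. run from <project>/src/app with docker-compose.yml at <project>/
    candidates, path = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, '.')
    if candidates:
        return os.path.join(path, candidates[0])
    return None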
277
278 def load(config_details):
279 """Load the configuration from a working directory and a list of
280 configuration files. Files are loaded in order, and merged on top
281 of each other to create the final configuration.
282
283 Return a fully interpolated, extended and validated configuration.
284 """
285 validate_config_version(config_details.config_files)
286
287 processed_files = [
288 process_config_file(config_file)
289 for config_file in config_details.config_files
290 ]
291 config_details = config_details._replace(config_files=processed_files)
292
293 main_file = config_details.config_files[0]
294 volumes = load_mapping(config_details.config_files, 'get_volumes', 'Volume')
295 networks = load_mapping(config_details.config_files, 'get_networks', 'Network')
296 service_dicts = load_services(
297 config_details.working_dir,
298 main_file,
299 [file.get_service_dicts() for file in config_details.config_files])
300
301 if main_file.version != V1:
302 for service_dict in service_dicts:
303 match_named_volumes(service_dict, volumes)
304
305 return Config(main_file.version, service_dicts, volumes, networks)
306
307
308 def load_mapping(config_files, get_func, entity_type):
309 mapping = {}
310
311 for config_file in config_files:
312 for name, config in getattr(config_file, get_func)().items():
313 mapping[name] = config or {}
314 if not config:
315 continue
316
317 external = config.get('external')
318 if external:
319 if len(config.keys()) > 1:
320 raise ConfigurationError(
321 '{} {} declared as external but specifies'
322 ' additional attributes ({}). '.format(
323 entity_type,
324 name,
325 ', '.join([k for k in config.keys() if k != 'external'])
326 )
327 )
328 if isinstance(external, dict):
329 config['external_name'] = external.get('name')
330 else:
331 config['external_name'] = name
332
333 mapping[name] = config
334
335 return mapping
336
337
338 def load_services(working_dir, config_file, service_configs):
339 def build_service(service_name, service_dict, service_names):
340 service_config = ServiceConfig.with_abs_paths(
341 working_dir,
342 config_file.filename,
343 service_name,
344 service_dict)
345 resolver = ServiceExtendsResolver(service_config, config_file)
346 service_dict = process_service(resolver.run())
347
348 service_config = service_config._replace(config=service_dict)
349 validate_service(service_config, service_names, config_file.version)
350 service_dict = finalize_service(
351 service_config,
352 service_names,
353 config_file.version)
354 return service_dict
355
356 def build_services(service_config):
357 service_names = service_config.keys()
358 return sort_service_dicts([
359 build_service(name, service_dict, service_names)
360 for name, service_dict in service_config.items()
361 ])
362
363 def merge_services(base, override):
364 all_service_names = set(base) | set(override)
365 return {
366 name: merge_service_dicts_from_files(
367 base.get(name, {}),
368 override.get(name, {}),
369 config_file.version)
370 for name in all_service_names
371 }
372
373 service_config = service_configs[0]
374 for next_config in service_configs[1:]:
375 service_config = merge_services(service_config, next_config)
376
377 return build_services(service_config)
378
379
380 def process_config_file(config_file, service_name=None):
381 service_dicts = config_file.get_service_dicts()
382 validate_top_level_service_objects(config_file.filename, service_dicts)
383
384 interpolated_config = interpolate_environment_variables(service_dicts, 'service')
385
386 if config_file.version == V2_0:
387 processed_config = dict(config_file.config)
388 processed_config['services'] = services = interpolated_config
389 processed_config['volumes'] = interpolate_environment_variables(
390 config_file.get_volumes(), 'volume')
391 processed_config['networks'] = interpolate_environment_variables(
392 config_file.get_networks(), 'network')
393
394 if config_file.version == V1:
395 processed_config = services = interpolated_config
396
397 config_file = config_file._replace(config=processed_config)
398 validate_against_fields_schema(config_file)
399
400 if service_name and service_name not in services:
401 raise ConfigurationError(
402 "Cannot extend service '{}' in {}: Service not found".format(
403 service_name, config_file.filename))
404
405 return config_file
406
407
408 class ServiceExtendsResolver(object):
409 def __init__(self, service_config, config_file, already_seen=None):
410 self.service_config = service_config
411 self.working_dir = service_config.working_dir
412 self.already_seen = already_seen or []
413 self.config_file = config_file
414
415 @property
416 def signature(self):
417 return self.service_config.filename, self.service_config.name
418
419 def detect_cycle(self):
420 if self.signature in self.already_seen:
421 raise CircularReference(self.already_seen + [self.signature])
422
423 def run(self):
424 self.detect_cycle()
425
426 if 'extends' in self.service_config.config:
427 service_dict = self.resolve_extends(*self.validate_and_construct_extends())
428 return self.service_config._replace(config=service_dict)
429
430 return self.service_config
431
432 def validate_and_construct_extends(self):
433 extends = self.service_config.config['extends']
434 if not isinstance(extends, dict):
435 extends = {'service': extends}
436
437 config_path = self.get_extended_config_path(extends)
438 service_name = extends['service']
439
440 extends_file = ConfigFile.from_filename(config_path)
441 validate_config_version([self.config_file, extends_file])
442 extended_file = process_config_file(
443 extends_file,
444 service_name=service_name)
445 service_config = extended_file.get_service(service_name)
446
447 return config_path, service_config, service_name
448
449 def resolve_extends(self, extended_config_path, service_dict, service_name):
450 resolver = ServiceExtendsResolver(
451 ServiceConfig.with_abs_paths(
452 os.path.dirname(extended_config_path),
453 extended_config_path,
454 service_name,
455 service_dict),
456 self.config_file,
457 already_seen=self.already_seen + [self.signature])
458
459 service_config = resolver.run()
460 other_service_dict = process_service(service_config)
461 validate_extended_service_dict(
462 other_service_dict,
463 extended_config_path,
464 service_name)
465
466 return merge_service_dicts(
467 other_service_dict,
468 self.service_config.config,
469 self.config_file.version)
470
471 def get_extended_config_path(self, extends_options):
472         """The service we are extending either has a value for 'file' set,
473         which we need to obtain a full path to, or we are extending from a
474         service defined in our own file.
475 """
476 filename = self.service_config.filename
477 validate_extends_file_path(
478 self.service_config.name,
479 extends_options,
480 filename)
481 if 'file' in extends_options:
482 return expand_path(self.working_dir, extends_options['file'])
483 return filename
484
485
486 def resolve_environment(service_dict):
487 """Unpack any environment variables from an env_file, if set.
488 Interpolate environment values if set.
489 """
490 env = {}
491 for env_file in service_dict.get('env_file', []):
492 env.update(env_vars_from_file(env_file))
493
494 env.update(parse_environment(service_dict.get('environment')))
495 return dict(filter(None, (resolve_env_var(k, v) for k, v in six.iteritems(env))))
496
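# Editorial sketch (not part of the original source): precedence when resolving
# a service's environment -- values from 'environment' override values read
# from 'env_file', and a bare key (no value) is looked up in os.environ.
# The env file name below is hypothetical.
def _example_resolve_environment():
    service_dict = {
        'env_file': ['./example.env'],        # hypothetical file: FOO=from_file
        'environment': ['FOO=from_config', 'HOME'],
    }
    # -> {'FOO': 'from_config', 'HOME': <value of $HOME, if set>}
    return resolve_environment(service_dict)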
497
498 def resolve_build_args(build):
499 args = parse_build_arguments(build.get('args'))
500 return dict(filter(None, (resolve_env_var(k, v) for k, v in six.iteritems(args))))
501
502
503 def validate_extended_service_dict(service_dict, filename, service):
504 error_prefix = "Cannot extend service '%s' in %s:" % (service, filename)
505
506 if 'links' in service_dict:
507 raise ConfigurationError(
508 "%s services with 'links' cannot be extended" % error_prefix)
509
510 if 'volumes_from' in service_dict:
511 raise ConfigurationError(
512 "%s services with 'volumes_from' cannot be extended" % error_prefix)
513
514 if 'net' in service_dict:
515 if get_container_name_from_network_mode(service_dict['net']):
516 raise ConfigurationError(
517 "%s services with 'net: container' cannot be extended" % error_prefix)
518
519 if 'network_mode' in service_dict:
520 if get_service_name_from_network_mode(service_dict['network_mode']):
521 raise ConfigurationError(
522 "%s services with 'network_mode: service' cannot be extended" % error_prefix)
523
524 if 'depends_on' in service_dict:
525 raise ConfigurationError(
526 "%s services with 'depends_on' cannot be extended" % error_prefix)
527
528
529 def validate_service(service_config, service_names, version):
530 service_dict, service_name = service_config.config, service_config.name
531 validate_against_service_schema(service_dict, service_name, version)
532 validate_paths(service_dict)
533
534 validate_ulimits(service_config)
535 validate_network_mode(service_config, service_names)
536 validate_depends_on(service_config, service_names)
537
538 if not service_dict.get('image') and has_uppercase(service_name):
539 raise ConfigurationError(
540 "Service '{name}' contains uppercase characters which are not valid "
541 "as part of an image name. Either use a lowercase service name or "
542 "use the `image` field to set a custom name for the service image."
543 .format(name=service_name))
544
545
546 def process_service(service_config):
547 working_dir = service_config.working_dir
548 service_dict = dict(service_config.config)
549
550 if 'env_file' in service_dict:
551 service_dict['env_file'] = [
552 expand_path(working_dir, path)
553 for path in to_list(service_dict['env_file'])
554 ]
555
556 if 'build' in service_dict:
557 if isinstance(service_dict['build'], six.string_types):
558 service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])
559 elif isinstance(service_dict['build'], dict) and 'context' in service_dict['build']:
560 path = service_dict['build']['context']
561 service_dict['build']['context'] = resolve_build_path(working_dir, path)
562
563 if 'volumes' in service_dict and service_dict.get('volume_driver') is None:
564 service_dict['volumes'] = resolve_volume_paths(working_dir, service_dict)
565
566 if 'labels' in service_dict:
567 service_dict['labels'] = parse_labels(service_dict['labels'])
568
569 if 'extra_hosts' in service_dict:
570 service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])
571
572 for field in ['dns', 'dns_search']:
573 if field in service_dict:
574 service_dict[field] = to_list(service_dict[field])
575
576 return service_dict
577
578
579 def finalize_service(service_config, service_names, version):
580 service_dict = dict(service_config.config)
581
582 if 'environment' in service_dict or 'env_file' in service_dict:
583 service_dict['environment'] = resolve_environment(service_dict)
584 service_dict.pop('env_file', None)
585
586 if 'volumes_from' in service_dict:
587 service_dict['volumes_from'] = [
588 VolumeFromSpec.parse(vf, service_names, version)
589 for vf in service_dict['volumes_from']
590 ]
591
592 if 'volumes' in service_dict:
593 service_dict['volumes'] = [
594 VolumeSpec.parse(v) for v in service_dict['volumes']]
595
596 if 'net' in service_dict:
597 network_mode = service_dict.pop('net')
598 container_name = get_container_name_from_network_mode(network_mode)
599 if container_name and container_name in service_names:
600 service_dict['network_mode'] = 'service:{}'.format(container_name)
601 else:
602 service_dict['network_mode'] = network_mode
603
604 if 'restart' in service_dict:
605 service_dict['restart'] = parse_restart_spec(service_dict['restart'])
606
607 normalize_build(service_dict, service_config.working_dir)
608
609 service_dict['name'] = service_config.name
610 return normalize_v1_service_format(service_dict)
611
612
613 def normalize_v1_service_format(service_dict):
614 if 'log_driver' in service_dict or 'log_opt' in service_dict:
615 if 'logging' not in service_dict:
616 service_dict['logging'] = {}
617 if 'log_driver' in service_dict:
618 service_dict['logging']['driver'] = service_dict['log_driver']
619 del service_dict['log_driver']
620 if 'log_opt' in service_dict:
621 service_dict['logging']['options'] = service_dict['log_opt']
622 del service_dict['log_opt']
623
624 if 'dockerfile' in service_dict:
625 service_dict['build'] = service_dict.get('build', {})
626 service_dict['build'].update({
627 'dockerfile': service_dict.pop('dockerfile')
628 })
629
630 return service_dict
631
632
633 def merge_service_dicts_from_files(base, override, version):
634 """When merging services from multiple files we need to merge the `extends`
635     field. This is not handled by `merge_service_dicts()`, which is also
636     used to resolve `extends`.
637 """
638 new_service = merge_service_dicts(base, override, version)
639 if 'extends' in override:
640 new_service['extends'] = override['extends']
641 elif 'extends' in base:
642 new_service['extends'] = base['extends']
643 return new_service
644
645
646 class MergeDict(dict):
647 """A dict-like object responsible for merging two dicts into one."""
648
649 def __init__(self, base, override):
650 self.base = base
651 self.override = override
652
653 def needs_merge(self, field):
654 return field in self.base or field in self.override
655
656 def merge_field(self, field, merge_func, default=None):
657 if not self.needs_merge(field):
658 return
659
660 self[field] = merge_func(
661 self.base.get(field, default),
662 self.override.get(field, default))
663
664 def merge_mapping(self, field, parse_func):
665 if not self.needs_merge(field):
666 return
667
668 self[field] = parse_func(self.base.get(field))
669 self[field].update(parse_func(self.override.get(field)))
670
671 def merge_sequence(self, field, parse_func):
672 def parse_sequence_func(seq):
673 return to_mapping((parse_func(item) for item in seq), 'merge_field')
674
675 if not self.needs_merge(field):
676 return
677
678 merged = parse_sequence_func(self.base.get(field, []))
679 merged.update(parse_sequence_func(self.override.get(field, [])))
680 self[field] = [item.repr() for item in merged.values()]
681
682 def merge_scalar(self, field):
683 if self.needs_merge(field):
684 self[field] = self.override.get(field, self.base.get(field))
685
686
687 def merge_service_dicts(base, override, version):
688 md = MergeDict(base, override)
689
690 md.merge_mapping('environment', parse_environment)
691 md.merge_mapping('labels', parse_labels)
692 md.merge_mapping('ulimits', parse_ulimits)
693 md.merge_sequence('links', ServiceLink.parse)
694
695 for field in ['volumes', 'devices']:
696 md.merge_field(field, merge_path_mappings)
697
698 for field in [
699 'depends_on',
700 'expose',
701 'external_links',
702 'networks',
703 'ports',
704 'volumes_from',
705 ]:
706 md.merge_field(field, operator.add, default=[])
707
708 for field in ['dns', 'dns_search', 'env_file']:
709 md.merge_field(field, merge_list_or_string)
710
711 for field in set(ALLOWED_KEYS) - set(md):
712 md.merge_scalar(field)
713
714 if version == V1:
715 legacy_v1_merge_image_or_build(md, base, override)
716 else:
717 merge_build(md, base, override)
718
719 return dict(md)
720
721
722 def merge_build(output, base, override):
723 build = {}
724
725 if 'build' in base:
726 if isinstance(base['build'], six.string_types):
727 build['context'] = base['build']
728 else:
729 build.update(base['build'])
730
731 if 'build' in override:
732 if isinstance(override['build'], six.string_types):
733 build['context'] = override['build']
734 else:
735 build.update(override['build'])
736
737 if build:
738 output['build'] = build
739
740
741 def legacy_v1_merge_image_or_build(output, base, override):
742 output.pop('image', None)
743 output.pop('build', None)
744 if 'image' in override:
745 output['image'] = override['image']
746 elif 'build' in override:
747 output['build'] = override['build']
748 elif 'image' in base:
749 output['image'] = base['image']
750 elif 'build' in base:
751 output['build'] = base['build']
752
753
754 def merge_environment(base, override):
755 env = parse_environment(base)
756 env.update(parse_environment(override))
757 return env
758
759
760 def split_env(env):
761 if isinstance(env, six.binary_type):
762 env = env.decode('utf-8', 'replace')
763 if '=' in env:
764 return env.split('=', 1)
765 else:
766 return env, None
767
768
769 def split_label(label):
770 if '=' in label:
771 return label.split('=', 1)
772 else:
773 return label, ''
774
775
776 def parse_dict_or_list(split_func, type_name, arguments):
777 if not arguments:
778 return {}
779
780 if isinstance(arguments, list):
781 return dict(split_func(e) for e in arguments)
782
783 if isinstance(arguments, dict):
784 return dict(arguments)
785
786 raise ConfigurationError(
787 "%s \"%s\" must be a list or mapping," %
788 (type_name, arguments)
789 )
790
791
792 parse_build_arguments = functools.partial(parse_dict_or_list, split_env, 'build arguments')
793 parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')
794 parse_labels = functools.partial(parse_dict_or_list, split_label, 'labels')
795
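# Editorial sketch (not part of the original source): both the list form and
# the mapping form accepted in a compose file normalise to the same dict.
def _example_parse_environment_forms():
    as_list = parse_environment(['POSTGRES_USER=postgres', 'DEBUG'])
    as_dict = parse_environment({'POSTGRES_USER': 'postgres', 'DEBUG': None})
    assert as_list == as_dict == {'POSTGRES_USER': 'postgres', 'DEBUG': None}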
796
797 def parse_ulimits(ulimits):
798 if not ulimits:
799 return {}
800
801 if isinstance(ulimits, dict):
802 return dict(ulimits)
803
804
805 def resolve_env_var(key, val):
806 if val is not None:
807 return key, val
808 elif key in os.environ:
809 return key, os.environ[key]
810 else:
811 return ()
812
813
814 def env_vars_from_file(filename):
815 """
816 Read in a line delimited file of environment variables.
817 """
818 if not os.path.exists(filename):
819 raise ConfigurationError("Couldn't find env file: %s" % filename)
820 env = {}
821 for line in codecs.open(filename, 'r', 'utf-8'):
822 line = line.strip()
823 if line and not line.startswith('#'):
824 k, v = split_env(line)
825 env[k] = v
826 return env
827
828
829 def resolve_volume_paths(working_dir, service_dict):
830 return [
831 resolve_volume_path(working_dir, volume)
832 for volume in service_dict['volumes']
833 ]
834
835
836 def resolve_volume_path(working_dir, volume):
837 container_path, host_path = split_path_mapping(volume)
838
839 if host_path is not None:
840 if host_path.startswith('.'):
841 host_path = expand_path(working_dir, host_path)
842 host_path = os.path.expanduser(host_path)
843 return u"{}:{}".format(host_path, container_path)
844 else:
845 return container_path
846
847
848 def normalize_build(service_dict, working_dir):
849
850 if 'build' in service_dict:
851 build = {}
852 # Shortcut where specifying a string is treated as the build context
853 if isinstance(service_dict['build'], six.string_types):
854 build['context'] = service_dict.pop('build')
855 else:
856 build.update(service_dict['build'])
857 if 'args' in build:
858 build['args'] = resolve_build_args(build)
859
860 service_dict['build'] = build
861
862
863 def resolve_build_path(working_dir, build_path):
864 if is_url(build_path):
865 return build_path
866 return expand_path(working_dir, build_path)
867
868
869 def is_url(build_path):
870 return build_path.startswith(DOCKER_VALID_URL_PREFIXES)
871
872
873 def validate_paths(service_dict):
874 if 'build' in service_dict:
875 build = service_dict.get('build', {})
876
877 if isinstance(build, six.string_types):
878 build_path = build
879 elif isinstance(build, dict) and 'context' in build:
880 build_path = build['context']
881
882 if (
883 not is_url(build_path) and
884 (not os.path.exists(build_path) or not os.access(build_path, os.R_OK))
885 ):
886 raise ConfigurationError(
887 "build path %s either does not exist, is not accessible, "
888 "or is not a valid URL." % build_path)
889
890
891 def merge_path_mappings(base, override):
892 d = dict_from_path_mappings(base)
893 d.update(dict_from_path_mappings(override))
894 return path_mappings_from_dict(d)
895
896
897 def dict_from_path_mappings(path_mappings):
898 if path_mappings:
899 return dict(split_path_mapping(v) for v in path_mappings)
900 else:
901 return {}
902
903
904 def path_mappings_from_dict(d):
905 return [join_path_mapping(v) for v in d.items()]
906
907
908 def split_path_mapping(volume_path):
909 """
910 Ascertain if the volume_path contains a host path as well as a container
911 path. Using splitdrive so windows absolute paths won't cause issues with
912 splitting on ':'.
913 """
914 # splitdrive has limitations when it comes to relative paths, so when it's
915 # relative, handle special case to set the drive to ''
916 if volume_path.startswith('.') or volume_path.startswith('~'):
917 drive, volume_config = '', volume_path
918 else:
919 drive, volume_config = os.path.splitdrive(volume_path)
920
921 if ':' in volume_config:
922 (host, container) = volume_config.split(':', 1)
923 return (container, drive + host)
924 else:
925 return (volume_path, None)
926
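# Editorial sketch (not part of the original source): split_path_mapping()
# returns (container_path, host_path), with host_path set to None for
# anonymous volumes. The Windows case relies on os.path.splitdrive, so drive
# letters are only handled specially when running on Windows.
def _example_split_path_mapping():
    assert split_path_mapping('/var/lib/data') == ('/var/lib/data', None)
    assert split_path_mapping('./data:/var/lib/data') == ('/var/lib/data', './data')
    assert split_path_mapping('named_volume:/data') == ('/data', 'named_volume')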
927
928 def join_path_mapping(pair):
929 (container, host) = pair
930 if host is None:
931 return container
932 else:
933 return ":".join((host, container))
934
935
936 def expand_path(working_dir, path):
937 return os.path.abspath(os.path.join(working_dir, os.path.expanduser(path)))
938
939
940 def merge_list_or_string(base, override):
941 return to_list(base) + to_list(override)
942
943
944 def to_list(value):
945 if value is None:
946 return []
947 elif isinstance(value, six.string_types):
948 return [value]
949 else:
950 return value
951
952
953 def to_mapping(sequence, key_field):
954 return {getattr(item, key_field): item for item in sequence}
955
956
957 def has_uppercase(name):
958 return any(char in string.ascii_uppercase for char in name)
959
960
961 def load_yaml(filename):
962 try:
963 with open(filename, 'r') as fh:
964 return yaml.safe_load(fh)
965 except (IOError, yaml.YAMLError) as e:
966 error_name = getattr(e, '__module__', '') + '.' + e.__class__.__name__
967 raise ConfigurationError(u"{}: {}".format(error_name, e))
968
[end of compose/config/config.py]
[start of compose/config/errors.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4
5 VERSION_EXPLANATION = (
6 'Either specify a version of "2" (or "2.0") and place your service '
7 'definitions under the `services` key, or omit the `version` key and place '
8 'your service definitions at the root of the file to use version 1.\n'
9 'For more on the Compose file format versions, see '
10 'https://docs.docker.com/compose/compose-file/')
11
12
13 class ConfigurationError(Exception):
14 def __init__(self, msg):
15 self.msg = msg
16
17 def __str__(self):
18 return self.msg
19
20
21 class DependencyError(ConfigurationError):
22 pass
23
24
25 class CircularReference(ConfigurationError):
26 def __init__(self, trail):
27 self.trail = trail
28
29 @property
30 def msg(self):
31 lines = [
32 "{} in {}".format(service_name, filename)
33 for (filename, service_name) in self.trail
34 ]
35 return "Circular reference:\n {}".format("\n extends ".join(lines))
36
37
38 class ComposeFileNotFound(ConfigurationError):
39 def __init__(self, supported_filenames):
40 super(ComposeFileNotFound, self).__init__("""
41 Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
42
43 Supported filenames: %s
44 """ % ", ".join(supported_filenames))
45
[end of compose/config/errors.py]
[start of compose/config/serialize.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import six
5 import yaml
6
7 from compose.config import types
8
9
10 def serialize_config_type(dumper, data):
11 representer = dumper.represent_str if six.PY3 else dumper.represent_unicode
12 return representer(data.repr())
13
14
15 yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
16 yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)
17
18
19 def serialize_config(config):
20 output = {
21 'version': config.version,
22 'services': {service.pop('name'): service for service in config.services},
23 'networks': config.networks,
24 'volumes': config.volumes,
25 }
26 return yaml.safe_dump(
27 output,
28 default_flow_style=False,
29 indent=2,
30 width=80)
31
[end of compose/config/serialize.py]
[start of compose/config/types.py]
1 """
2 Types for objects parsed from the configuration.
3 """
4 from __future__ import absolute_import
5 from __future__ import unicode_literals
6
7 import os
8 from collections import namedtuple
9
10 from compose.config.config import V1
11 from compose.config.errors import ConfigurationError
12 from compose.const import IS_WINDOWS_PLATFORM
13
14
15 class VolumeFromSpec(namedtuple('_VolumeFromSpec', 'source mode type')):
16
17 # TODO: drop service_names arg when v1 is removed
18 @classmethod
19 def parse(cls, volume_from_config, service_names, version):
20 func = cls.parse_v1 if version == V1 else cls.parse_v2
21 return func(service_names, volume_from_config)
22
23 @classmethod
24 def parse_v1(cls, service_names, volume_from_config):
25 parts = volume_from_config.split(':')
26 if len(parts) > 2:
27 raise ConfigurationError(
28 "volume_from {} has incorrect format, should be "
29 "service[:mode]".format(volume_from_config))
30
31 if len(parts) == 1:
32 source = parts[0]
33 mode = 'rw'
34 else:
35 source, mode = parts
36
37 type = 'service' if source in service_names else 'container'
38 return cls(source, mode, type)
39
40 @classmethod
41 def parse_v2(cls, service_names, volume_from_config):
42 parts = volume_from_config.split(':')
43 if len(parts) > 3:
44 raise ConfigurationError(
45 "volume_from {} has incorrect format, should be one of "
46 "'<service name>[:<mode>]' or "
47 "'container:<container name>[:<mode>]'".format(volume_from_config))
48
49 if len(parts) == 1:
50 source = parts[0]
51 return cls(source, 'rw', 'service')
52
53 if len(parts) == 2:
54 if parts[0] == 'container':
55 type, source = parts
56 return cls(source, 'rw', type)
57
58 source, mode = parts
59 return cls(source, mode, 'service')
60
61 if len(parts) == 3:
62 type, source, mode = parts
63 if type not in ('service', 'container'):
64 raise ConfigurationError(
65 "Unknown volumes_from type '{}' in '{}'".format(
66 type,
67 volume_from_config))
68
69 return cls(source, mode, type)
70
71 def repr(self):
72 return '{v.type}:{v.source}:{v.mode}'.format(v=self)
73
74
75 def parse_restart_spec(restart_config):
76 if not restart_config:
77 return None
78 parts = restart_config.split(':')
79 if len(parts) > 2:
80 raise ConfigurationError(
81 "Restart %s has incorrect format, should be "
82 "mode[:max_retry]" % restart_config)
83 if len(parts) == 2:
84 name, max_retry_count = parts
85 else:
86 name, = parts
87 max_retry_count = 0
88
89 return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
90
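# Editorial sketch (not part of the original source): the accepted
# 'mode[:max_retry]' forms and the dict handed on to the Docker API.
def _example_parse_restart_spec():
    assert parse_restart_spec(None) is None
    assert parse_restart_spec('always') == {'Name': 'always', 'MaximumRetryCount': 0}
    assert parse_restart_spec('on-failure:5') == {'Name': 'on-failure', 'MaximumRetryCount': 5}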
91
92 def parse_extra_hosts(extra_hosts_config):
93 if not extra_hosts_config:
94 return {}
95
96 if isinstance(extra_hosts_config, dict):
97 return dict(extra_hosts_config)
98
99 if isinstance(extra_hosts_config, list):
100 extra_hosts_dict = {}
101 for extra_hosts_line in extra_hosts_config:
102 # TODO: validate string contains ':' ?
103 host, ip = extra_hosts_line.split(':', 1)
104 extra_hosts_dict[host.strip()] = ip.strip()
105 return extra_hosts_dict
106
107
108 def normalize_paths_for_engine(external_path, internal_path):
109 """Windows paths, c:\my\path\shiny, need to be changed to be compatible with
110 the Engine. Volume paths are expected to be linux style /c/my/path/shiny/
111 """
112 if not IS_WINDOWS_PLATFORM:
113 return external_path, internal_path
114
115 if external_path:
116 drive, tail = os.path.splitdrive(external_path)
117
118 if drive:
119 external_path = '/' + drive.lower().rstrip(':') + tail
120
121 external_path = external_path.replace('\\', '/')
122
123 return external_path, internal_path.replace('\\', '/')
124
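# Editorial sketch (not part of the original source): on Windows this turns
# 'C:\\my\\path\\shiny' into '/c/my/path/shiny' and flips backslashes in the
# internal path; on other platforms both paths pass through unchanged.
def _example_normalize_paths_for_engine():
    external, internal = normalize_paths_for_engine('C:\\my\\path\\shiny', '/data')
    # On Windows: ('/c/my/path/shiny', '/data'); elsewhere the input is returned as-is.
    return external, internal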
125
126 class VolumeSpec(namedtuple('_VolumeSpec', 'external internal mode')):
127
128 @classmethod
129 def parse(cls, volume_config):
130 """Parse a volume_config path and split it into external:internal[:mode]
131 parts to be returned as a valid VolumeSpec.
132 """
133 if IS_WINDOWS_PLATFORM:
134 # relative paths in windows expand to include the drive, eg C:\
135 # so we join the first 2 parts back together to count as one
136 drive, tail = os.path.splitdrive(volume_config)
137 parts = tail.split(":")
138
139 if drive:
140 parts[0] = drive + parts[0]
141 else:
142 parts = volume_config.split(':')
143
144 if len(parts) > 3:
145 raise ConfigurationError(
146 "Volume %s has incorrect format, should be "
147 "external:internal[:mode]" % volume_config)
148
149 if len(parts) == 1:
150 external, internal = normalize_paths_for_engine(
151 None,
152 os.path.normpath(parts[0]))
153 else:
154 external, internal = normalize_paths_for_engine(
155 os.path.normpath(parts[0]),
156 os.path.normpath(parts[1]))
157
158 mode = 'rw'
159 if len(parts) == 3:
160 mode = parts[2]
161
162 return cls(external, internal, mode)
163
164 def repr(self):
165 external = self.external + ':' if self.external else ''
166 return '{ext}{v.internal}:{v.mode}'.format(ext=external, v=self)
167
168 @property
169 def is_named_volume(self):
170 return self.external and not self.external.startswith(('.', '/', '~'))
171
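# Editorial sketch (not part of the original source): the three volume string
# shapes accepted by VolumeSpec.parse() and their round-trip through repr().
# Paths are illustrative only, and the expected values assume a POSIX host
# (on Windows the drive handling above changes the parsing).
def _example_volume_spec():
    anon = VolumeSpec.parse('/var/lib/mysql')
    bind = VolumeSpec.parse('/srv/data:/var/lib/mysql:ro')
    named = VolumeSpec.parse('dbdata:/var/lib/mysql')
    assert anon == VolumeSpec(None, '/var/lib/mysql', 'rw')
    assert bind.mode == 'ro' and not bind.is_named_volume
    assert named.is_named_volume
    assert named.repr() == 'dbdata:/var/lib/mysql:rw'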
172
173 class ServiceLink(namedtuple('_ServiceLink', 'target alias')):
174
175 @classmethod
176 def parse(cls, link_spec):
177 target, _, alias = link_spec.partition(':')
178 if not alias:
179 alias = target
180 return cls(target, alias)
181
182 def repr(self):
183 if self.target == self.alias:
184 return self.target
185 return '{s.target}:{s.alias}'.format(s=self)
186
187 @property
188 def merge_field(self):
189 return self.alias
190
[end of compose/config/types.py]
[start of compose/config/validation.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import json
5 import logging
6 import os
7 import re
8 import sys
9
10 import six
11 from docker.utils.ports import split_port
12 from jsonschema import Draft4Validator
13 from jsonschema import FormatChecker
14 from jsonschema import RefResolver
15 from jsonschema import ValidationError
16
17 from .errors import ConfigurationError
18 from .errors import VERSION_EXPLANATION
19 from .sort_services import get_service_name_from_network_mode
20
21
22 log = logging.getLogger(__name__)
23
24
25 DOCKER_CONFIG_HINTS = {
26 'cpu_share': 'cpu_shares',
27 'add_host': 'extra_hosts',
28 'hosts': 'extra_hosts',
29 'extra_host': 'extra_hosts',
30 'device': 'devices',
31 'link': 'links',
32 'memory_swap': 'memswap_limit',
33 'port': 'ports',
34 'privilege': 'privileged',
35 'priviliged': 'privileged',
36 'privilige': 'privileged',
37 'volume': 'volumes',
38 'workdir': 'working_dir',
39 }
40
41
42 VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]'
43 VALID_EXPOSE_FORMAT = r'^\d+(\-\d+)?(\/[a-zA-Z]+)?$'
44
45
46 @FormatChecker.cls_checks(format="ports", raises=ValidationError)
47 def format_ports(instance):
48 try:
49 split_port(instance)
50 except ValueError as e:
51 raise ValidationError(six.text_type(e))
52 return True
53
54
55 @FormatChecker.cls_checks(format="expose", raises=ValidationError)
56 def format_expose(instance):
57 if isinstance(instance, six.string_types):
58 if not re.match(VALID_EXPOSE_FORMAT, instance):
59 raise ValidationError(
60 "should be of the format 'PORT[/PROTOCOL]'")
61
62 return True
63
64
65 @FormatChecker.cls_checks(format="bool-value-in-mapping")
66 def format_boolean_in_environment(instance):
67 """
68 Check if there is a boolean in the environment and display a warning.
69 Always return True here so the validation won't raise an error.
70 """
71 if isinstance(instance, bool):
72 log.warn(
73 "There is a boolean value in the 'environment' key.\n"
74 "Environment variables can only be strings.\n"
75 "Please add quotes to any boolean values to make them string "
76 "(eg, 'True', 'yes', 'N').\n"
77 "This warning will become an error in a future release. \r\n"
78 )
79 return True
80
81
82 def match_named_volumes(service_dict, project_volumes):
83 service_volumes = service_dict.get('volumes', [])
84 for volume_spec in service_volumes:
85 if volume_spec.is_named_volume and volume_spec.external not in project_volumes:
86 raise ConfigurationError(
87 'Named volume "{0}" is used in service "{1}" but no'
88 ' declaration was found in the volumes section.'.format(
89 volume_spec.repr(), service_dict.get('name')
90 )
91 )
92
93
94 def validate_top_level_service_objects(filename, service_dicts):
95 """Perform some high level validation of the service name and value.
96
97 This validation must happen before interpolation, which must happen
98 before the rest of validation, which is why it's separate from the
99 rest of the service validation.
100 """
101 for service_name, service_dict in service_dicts.items():
102 if not isinstance(service_name, six.string_types):
103 raise ConfigurationError(
104 "In file '{}' service name: {} needs to be a string, eg '{}'".format(
105 filename,
106 service_name,
107 service_name))
108
109 if not isinstance(service_dict, dict):
110 raise ConfigurationError(
111 "In file '{}' service '{}' doesn\'t have any configuration options. "
112 "All top level keys in your docker-compose.yml must map "
113 "to a dictionary of configuration options.".format(
114 filename, service_name
115 )
116 )
117
118
119 def validate_top_level_object(config_file):
120 if not isinstance(config_file.config, dict):
121 raise ConfigurationError(
122 "Top level object in '{}' needs to be an object not '{}'.".format(
123 config_file.filename,
124 type(config_file.config)))
125
126
127 def validate_ulimits(service_config):
128 ulimit_config = service_config.config.get('ulimits', {})
129 for limit_name, soft_hard_values in six.iteritems(ulimit_config):
130 if isinstance(soft_hard_values, dict):
131 if not soft_hard_values['soft'] <= soft_hard_values['hard']:
132 raise ConfigurationError(
133 "Service '{s.name}' has invalid ulimit '{ulimit}'. "
134 "'soft' value can not be greater than 'hard' value ".format(
135 s=service_config,
136 ulimit=ulimit_config))
137
138
139 def validate_extends_file_path(service_name, extends_options, filename):
140 """
141 The service to be extended must either be defined in the config key 'file',
142 or within 'filename'.
143 """
144 error_prefix = "Invalid 'extends' configuration for %s:" % service_name
145
146 if 'file' not in extends_options and filename is None:
147 raise ConfigurationError(
148 "%s you need to specify a 'file', e.g. 'file: something.yml'" % error_prefix
149 )
150
151
152 def validate_network_mode(service_config, service_names):
153 network_mode = service_config.config.get('network_mode')
154 if not network_mode:
155 return
156
157 if 'networks' in service_config.config:
158 raise ConfigurationError("'network_mode' and 'networks' cannot be combined")
159
160 dependency = get_service_name_from_network_mode(network_mode)
161 if not dependency:
162 return
163
164 if dependency not in service_names:
165 raise ConfigurationError(
166 "Service '{s.name}' uses the network stack of service '{dep}' which "
167 "is undefined.".format(s=service_config, dep=dependency))
168
169
170 def validate_depends_on(service_config, service_names):
171 for dependency in service_config.config.get('depends_on', []):
172 if dependency not in service_names:
173 raise ConfigurationError(
174 "Service '{s.name}' depends on service '{dep}' which is "
175 "undefined.".format(s=service_config, dep=dependency))
176
177
178 def get_unsupported_config_msg(path, error_key):
179 msg = "Unsupported config option for {}: '{}'".format(path_string(path), error_key)
180 if error_key in DOCKER_CONFIG_HINTS:
181 msg += " (did you mean '{}'?)".format(DOCKER_CONFIG_HINTS[error_key])
182 return msg
183
184
185 def anglicize_validator(validator):
186 if validator in ["array", "object"]:
187 return 'an ' + validator
188 return 'a ' + validator
189
190
191 def is_service_dict_schema(schema_id):
192 return schema_id == 'fields_schema_v1.json' or schema_id == '#/properties/services'
193
194
195 def handle_error_for_schema_with_id(error, path):
196 schema_id = error.schema['id']
197
198 if is_service_dict_schema(schema_id) and error.validator == 'additionalProperties':
199 return "Invalid service name '{}' - only {} characters are allowed".format(
200 # The service_name is the key to the json object
201 list(error.instance)[0],
202 VALID_NAME_CHARS)
203
204 if schema_id == '#/definitions/constraints':
205 # Build context could be in 'build' or 'build.context' and dockerfile could be
206 # in 'dockerfile' or 'build.dockerfile'
207 context = False
208 dockerfile = 'dockerfile' in error.instance
209 if 'build' in error.instance:
210 if isinstance(error.instance['build'], six.string_types):
211 context = True
212 else:
213 context = 'context' in error.instance['build']
214 dockerfile = dockerfile or 'dockerfile' in error.instance['build']
215
216 # TODO: only applies to v1
217 if 'image' in error.instance and context:
218 return (
219 "{} has both an image and build path specified. "
220 "A service can either be built to image or use an existing "
221 "image, not both.".format(path_string(path)))
222 if 'image' not in error.instance and not context:
223 return (
224 "{} has neither an image nor a build path specified. "
225 "At least one must be provided.".format(path_string(path)))
226 # TODO: only applies to v1
227 if 'image' in error.instance and dockerfile:
228 return (
229 "{} has both an image and alternate Dockerfile. "
230 "A service can either be built to image or use an existing "
231 "image, not both.".format(path_string(path)))
232
233 if error.validator == 'additionalProperties':
234 if schema_id == '#/definitions/service':
235 invalid_config_key = parse_key_from_error_msg(error)
236 return get_unsupported_config_msg(path, invalid_config_key)
237
238 if not error.path:
239 return '{}\n{}'.format(error.message, VERSION_EXPLANATION)
240
241
242 def handle_generic_service_error(error, path):
243 msg_format = None
244 error_msg = error.message
245
246 if error.validator == 'oneOf':
247 msg_format = "{path} {msg}"
248 config_key, error_msg = _parse_oneof_validator(error)
249 if config_key:
250 path.append(config_key)
251
252 elif error.validator == 'type':
253 msg_format = "{path} contains an invalid type, it should be {msg}"
254 error_msg = _parse_valid_types_from_validator(error.validator_value)
255
256 # TODO: no test case for this branch, there are no config options
257 # which exercise this branch
258 elif error.validator == 'required':
259 msg_format = "{path} is invalid, {msg}"
260
261 elif error.validator == 'dependencies':
262 config_key = list(error.validator_value.keys())[0]
263 required_keys = ",".join(error.validator_value[config_key])
264
265 msg_format = "{path} is invalid: {msg}"
266 path.append(config_key)
267 error_msg = "when defining '{}' you must set '{}' as well".format(
268 config_key,
269 required_keys)
270
271 elif error.cause:
272 error_msg = six.text_type(error.cause)
273 msg_format = "{path} is invalid: {msg}"
274
275 elif error.path:
276 msg_format = "{path} value {msg}"
277
278 if msg_format:
279 return msg_format.format(path=path_string(path), msg=error_msg)
280
281 return error.message
282
283
284 def parse_key_from_error_msg(error):
285 return error.message.split("'")[1]
286
287
288 def path_string(path):
289 return ".".join(c for c in path if isinstance(c, six.string_types))
290
291
292 def _parse_valid_types_from_validator(validator):
293 """A validator value can be either an array of valid types or a string of
294 a valid type. Parse the valid types and prefix with the correct article.
295 """
296 if not isinstance(validator, list):
297 return anglicize_validator(validator)
298
299 if len(validator) == 1:
300 return anglicize_validator(validator[0])
301
302 return "{}, or {}".format(
303 ", ".join([anglicize_validator(validator[0])] + validator[1:-1]),
304 anglicize_validator(validator[-1]))
305
306
307 def _parse_oneof_validator(error):
308 """oneOf has multiple schemas, so we need to reason about which schema, sub
309 schema or constraint the validation is failing on.
310 Inspecting the context value of a ValidationError gives us information about
311 which sub schema failed and which kind of error it is.
312 """
313 types = []
314 for context in error.context:
315
316 if context.validator == 'required':
317 return (None, context.message)
318
319 if context.validator == 'additionalProperties':
320 invalid_config_key = parse_key_from_error_msg(context)
321 return (None, "contains unsupported option: '{}'".format(invalid_config_key))
322
323 if context.path:
324 return (
325 path_string(context.path),
326 "contains {}, which is an invalid type, it should be {}".format(
327 json.dumps(context.instance),
328 _parse_valid_types_from_validator(context.validator_value)),
329 )
330
331 if context.validator == 'uniqueItems':
332 return (
333 None,
334 "contains non unique items, please remove duplicates from {}".format(
335 context.instance),
336 )
337
338 if context.validator == 'type':
339 types.append(context.validator_value)
340
341 valid_types = _parse_valid_types_from_validator(types)
342 return (None, "contains an invalid type, it should be {}".format(valid_types))
343
344
345 def process_errors(errors, path_prefix=None):
346 """jsonschema gives us an error tree full of information to explain what has
347 gone wrong. Process each error and pull out relevant information and re-write
348 helpful error messages that are relevant.
349 """
350 path_prefix = path_prefix or []
351
352 def format_error_message(error):
353 path = path_prefix + list(error.path)
354
355 if 'id' in error.schema:
356 error_msg = handle_error_for_schema_with_id(error, path)
357 if error_msg:
358 return error_msg
359
360 return handle_generic_service_error(error, path)
361
362 return '\n'.join(format_error_message(error) for error in errors)
363
364
365 def validate_against_fields_schema(config_file):
366 schema_filename = "fields_schema_v{0}.json".format(config_file.version)
367 _validate_against_schema(
368 config_file.config,
369 schema_filename,
370 format_checker=["ports", "expose", "bool-value-in-mapping"],
371 filename=config_file.filename)
372
373
374 def validate_against_service_schema(config, service_name, version):
375 _validate_against_schema(
376 config,
377 "service_schema_v{0}.json".format(version),
378 format_checker=["ports"],
379 path_prefix=[service_name])
380
381
382 def _validate_against_schema(
383 config,
384 schema_filename,
385 format_checker=(),
386 path_prefix=None,
387 filename=None):
388 config_source_dir = os.path.dirname(os.path.abspath(__file__))
389
390 if sys.platform == "win32":
391 file_pre_fix = "///"
392 config_source_dir = config_source_dir.replace('\\', '/')
393 else:
394 file_pre_fix = "//"
395
396 resolver_full_path = "file:{}{}/".format(file_pre_fix, config_source_dir)
397 schema_file = os.path.join(config_source_dir, schema_filename)
398
399 with open(schema_file, "r") as schema_fh:
400 schema = json.load(schema_fh)
401
402 resolver = RefResolver(resolver_full_path, schema)
403 validation_output = Draft4Validator(
404 schema,
405 resolver=resolver,
406 format_checker=FormatChecker(format_checker))
407
408 errors = [error for error in sorted(validation_output.iter_errors(config), key=str)]
409 if not errors:
410 return
411
412 error_msg = process_errors(errors, path_prefix=path_prefix)
413 file_msg = " in file '{}'".format(filename) if filename else ''
414 raise ConfigurationError("Validation failed{}, reason(s):\n{}".format(
415 file_msg,
416 error_msg))
417
[end of compose/config/validation.py]
[start of compose/container.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 from functools import reduce
5
6 import six
7
8 from .const import LABEL_CONTAINER_NUMBER
9 from .const import LABEL_PROJECT
10 from .const import LABEL_SERVICE
11
12
13 class Container(object):
14 """
15 Represents a Docker container, constructed from the output of
16 GET /containers/:id:/json.
17 """
18 def __init__(self, client, dictionary, has_been_inspected=False):
19 self.client = client
20 self.dictionary = dictionary
21 self.has_been_inspected = has_been_inspected
22 self.log_stream = None
23
24 @classmethod
25 def from_ps(cls, client, dictionary, **kwargs):
26 """
27 Construct a container object from the output of GET /containers/json.
28 """
29 name = get_container_name(dictionary)
30 if name is None:
31 return None
32
33 new_dictionary = {
34 'Id': dictionary['Id'],
35 'Image': dictionary['Image'],
36 'Name': '/' + name,
37 }
38 return cls(client, new_dictionary, **kwargs)
39
40 @classmethod
41 def from_id(cls, client, id):
42 return cls(client, client.inspect_container(id))
43
44 @classmethod
45 def create(cls, client, **options):
46 response = client.create_container(**options)
47 return cls.from_id(client, response['Id'])
48
49 @property
50 def id(self):
51 return self.dictionary['Id']
52
53 @property
54 def image(self):
55 return self.dictionary['Image']
56
57 @property
58 def image_config(self):
59 return self.client.inspect_image(self.image)
60
61 @property
62 def short_id(self):
63 return self.id[:12]
64
65 @property
66 def name(self):
67 return self.dictionary['Name'][1:]
68
69 @property
70 def service(self):
71 return self.labels.get(LABEL_SERVICE)
72
73 @property
74 def name_without_project(self):
75 project = self.labels.get(LABEL_PROJECT)
76
77 if self.name.startswith('{0}_{1}'.format(project, self.service)):
78 return '{0}_{1}'.format(self.service, self.number)
79 else:
80 return self.name
81
82 @property
83 def number(self):
84 number = self.labels.get(LABEL_CONTAINER_NUMBER)
85 if not number:
86 raise ValueError("Container {0} does not have a {1} label".format(
87 self.short_id, LABEL_CONTAINER_NUMBER))
88 return int(number)
89
90 @property
91 def ports(self):
92 self.inspect_if_not_inspected()
93 return self.get('NetworkSettings.Ports') or {}
94
95 @property
96 def human_readable_ports(self):
97 def format_port(private, public):
98 if not public:
99 return private
100 return '{HostIp}:{HostPort}->{private}'.format(
101 private=private, **public[0])
102
103 return ', '.join(format_port(*item)
104 for item in sorted(six.iteritems(self.ports)))
105
106 @property
107 def labels(self):
108 return self.get('Config.Labels') or {}
109
110 @property
111 def stop_signal(self):
112 return self.get('Config.StopSignal')
113
114 @property
115 def log_config(self):
116 return self.get('HostConfig.LogConfig') or None
117
118 @property
119 def human_readable_state(self):
120 if self.is_paused:
121 return 'Paused'
122 if self.is_restarting:
123 return 'Restarting'
124 if self.is_running:
125 return 'Ghost' if self.get('State.Ghost') else 'Up'
126 else:
127 return 'Exit %s' % self.get('State.ExitCode')
128
129 @property
130 def human_readable_command(self):
131 entrypoint = self.get('Config.Entrypoint') or []
132 cmd = self.get('Config.Cmd') or []
133 return ' '.join(entrypoint + cmd)
134
135 @property
136 def environment(self):
137 return dict(var.split("=", 1) for var in self.get('Config.Env') or [])
138
139 @property
140 def exit_code(self):
141 return self.get('State.ExitCode')
142
143 @property
144 def is_running(self):
145 return self.get('State.Running')
146
147 @property
148 def is_restarting(self):
149 return self.get('State.Restarting')
150
151 @property
152 def is_paused(self):
153 return self.get('State.Paused')
154
155 @property
156 def log_driver(self):
157 return self.get('HostConfig.LogConfig.Type')
158
159 @property
160 def has_api_logs(self):
161 log_type = self.log_driver
162 return not log_type or log_type != 'none'
163
164 def attach_log_stream(self):
165 """A log stream can only be attached if the container uses a json-file
166 log driver.
167 """
168 if self.has_api_logs:
169 self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
170
171 def get(self, key):
172 """Return a value from the container or None if the value is not set.
173
174 :param key: a string using dotted notation for nested dictionary
175 lookups
176 """
177 self.inspect_if_not_inspected()
178
179 def get_value(dictionary, key):
180 return (dictionary or {}).get(key)
181
182 return reduce(get_value, key.split('.'), self.dictionary)
183
184 def get_local_port(self, port, protocol='tcp'):
185 port = self.ports.get("%s/%s" % (port, protocol))
186 return "{HostIp}:{HostPort}".format(**port[0]) if port else None
187
188 def get_mount(self, mount_dest):
189 for mount in self.get('Mounts'):
190 if mount['Destination'] == mount_dest:
191 return mount
192 return None
193
194 def start(self, **options):
195 return self.client.start(self.id, **options)
196
197 def stop(self, **options):
198 return self.client.stop(self.id, **options)
199
200 def pause(self, **options):
201 return self.client.pause(self.id, **options)
202
203 def unpause(self, **options):
204 return self.client.unpause(self.id, **options)
205
206 def kill(self, **options):
207 return self.client.kill(self.id, **options)
208
209 def restart(self, **options):
210 return self.client.restart(self.id, **options)
211
212 def remove(self, **options):
213 return self.client.remove_container(self.id, **options)
214
215 def rename_to_tmp_name(self):
216 """Rename the container to a hopefully unique temporary container name
217 by prepending the short id.
218 """
219 self.client.rename(
220 self.id,
221 '%s_%s' % (self.short_id, self.name)
222 )
223
224 def inspect_if_not_inspected(self):
225 if not self.has_been_inspected:
226 self.inspect()
227
228 def wait(self):
229 return self.client.wait(self.id)
230
231 def logs(self, *args, **kwargs):
232 return self.client.logs(self.id, *args, **kwargs)
233
234 def inspect(self):
235 self.dictionary = self.client.inspect_container(self.id)
236 self.has_been_inspected = True
237 return self.dictionary
238
239 def attach(self, *args, **kwargs):
240 return self.client.attach(self.id, *args, **kwargs)
241
242 def __repr__(self):
243 return '<Container: %s (%s)>' % (self.name, self.id[:6])
244
245 def __eq__(self, other):
246 if type(self) != type(other):
247 return False
248 return self.id == other.id
249
250 def __hash__(self):
251 return self.id.__hash__()
252
253
254 def get_container_name(container):
255 if not container.get('Name') and not container.get('Names'):
256 return None
257 # inspect
258 if 'Name' in container:
259 return container['Name']
260 # ps
261 shortest_name = min(container['Names'], key=lambda n: len(n.split('/')))
262 return shortest_name.split('/')[-1]
263
[end of compose/container.py]
[start of contrib/migration/migrate-compose-file-v1-to-v2.py]
1 #!/usr/bin/env python
2 """
3 Migrate a Compose file from the V1 format in Compose 1.5 to the V2 format
4 supported by Compose 1.6+
5 """
6 from __future__ import absolute_import
7 from __future__ import unicode_literals
8
9 import argparse
10 import logging
11 import sys
12
13 import ruamel.yaml
14
15 from compose.config.types import VolumeSpec
16
17
18 log = logging.getLogger('migrate')
19
20
21 def migrate(content):
22 data = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)
23
24 service_names = data.keys()
25
26 for name, service in data.items():
27 warn_for_links(name, service)
28 warn_for_external_links(name, service)
29 rewrite_net(service, service_names)
30 rewrite_build(service)
31 rewrite_logging(service)
32 rewrite_volumes_from(service, service_names)
33
34 services = {name: data.pop(name) for name in data.keys()}
35
36 data['version'] = 2
37 data['services'] = services
38 create_volumes_section(data)
39
40 return data
41
42
43 def warn_for_links(name, service):
44 links = service.get('links')
45 if links:
46 example_service = links[0].partition(':')[0]
47 log.warn(
48 "Service {name} has links, which no longer create environment "
49 "variables such as {example_service_upper}_PORT. "
50 "If you are using those in your application code, you should "
51 "instead connect directly to the hostname, e.g. "
52 "'{example_service}'."
53 .format(name=name, example_service=example_service,
54 example_service_upper=example_service.upper()))
55
56
57 def warn_for_external_links(name, service):
58 external_links = service.get('external_links')
59 if external_links:
60 log.warn(
61 "Service {name} has external_links: {ext}, which now work "
62 "slightly differently. In particular, two containers must be "
63 "connected to at least one network in common in order to "
64 "communicate, even if explicitly linked together.\n\n"
65 "Either connect the external container to your app's default "
66 "network, or connect both the external container and your "
67 "service's containers to a pre-existing network. See "
68 "https://docs.docker.com/compose/networking/ "
69 "for more on how to do this."
70 .format(name=name, ext=external_links))
71
72
73 def rewrite_net(service, service_names):
74 if 'net' in service:
75 network_mode = service.pop('net')
76
77 # "container:<service name>" is now "service:<service name>"
78 if network_mode.startswith('container:'):
79 name = network_mode.partition(':')[2]
80 if name in service_names:
81 network_mode = 'service:{}'.format(name)
82
83 service['network_mode'] = network_mode
84
85
86 def rewrite_build(service):
87 if 'dockerfile' in service:
88 service['build'] = {
89 'context': service.pop('build'),
90 'dockerfile': service.pop('dockerfile'),
91 }
92
93
94 def rewrite_logging(service):
95 if 'log_driver' in service:
96 service['logging'] = {'driver': service.pop('log_driver')}
97 if 'log_opt' in service:
98 service['logging']['options'] = service.pop('log_opt')
99
100
101 def rewrite_volumes_from(service, service_names):
102 for idx, volume_from in enumerate(service.get('volumes_from', [])):
103 if volume_from.split(':', 1)[0] not in service_names:
104 service['volumes_from'][idx] = 'container:%s' % volume_from
105
106
107 def create_volumes_section(data):
108 named_volumes = get_named_volumes(data['services'])
109 if named_volumes:
110 log.warn(
111 "Named volumes ({names}) must be explicitly declared. Creating a "
112 "'volumes' section with declarations.\n\n"
113 "For backwards-compatibility, they've been declared as external. "
114 "If you don't mind the volume names being prefixed with the "
115 "project name, you can remove the 'external' option from each one."
116 .format(names=', '.join(list(named_volumes))))
117
118 data['volumes'] = named_volumes
119
120
121 def get_named_volumes(services):
122 volume_specs = [
123 VolumeSpec.parse(volume)
124 for service in services.values()
125 for volume in service.get('volumes', [])
126 ]
127 names = {
128 spec.external
129 for spec in volume_specs
130 if spec.is_named_volume
131 }
132 return {name: {'external': True} for name in names}
133
134
135 def write(stream, new_format, indent, width):
136 ruamel.yaml.dump(
137 new_format,
138 stream,
139 Dumper=ruamel.yaml.RoundTripDumper,
140 indent=indent,
141 width=width)
142
143
144 def parse_opts(args):
145 parser = argparse.ArgumentParser()
146 parser.add_argument("filename", help="Compose file filename.")
147 parser.add_argument("-i", "--in-place", action='store_true')
148 parser.add_argument(
149 "--indent", type=int, default=2,
150 help="Number of spaces used to indent the output yaml.")
151 parser.add_argument(
152 "--width", type=int, default=80,
153 help="Number of spaces used as the output width.")
154 return parser.parse_args()
155
156
157 def main(args):
158 logging.basicConfig(format='\033[33m%(levelname)s:\033[37m %(message)s\n')
159
160 opts = parse_opts(args)
161
162 with open(opts.filename, 'r') as fh:
163 new_format = migrate(fh.read())
164
165 if opts.in_place:
166 output = open(opts.filename, 'w')
167 else:
168 output = sys.stdout
169 write(output, new_format, opts.indent, opts.width)
170
171
172 if __name__ == "__main__":
173 main(sys.argv)
174
[end of contrib/migration/migrate-compose-file-v1-to-v2.py]
[start of script/versions.py]
1 #!/usr/bin/env python
2 """
3 Query the github API for the git tags of a project, and return a list of
4 version tags for recent releases, or the default release.
5
6 The default release is the most recent non-RC version.
7
8 Recent is a list of unique major.minor versions, where each is the most
9 recent version in the series.
10
11 For example, if the list of versions is:
12
13 1.8.0-rc2
14 1.8.0-rc1
15 1.7.1
16 1.7.0
17 1.7.0-rc1
18 1.6.2
19 1.6.1
20
21 `default` would return `1.7.1` and
22 `recent -n 3` would return `1.8.0-rc2 1.7.1 1.6.2`
23 """
24 from __future__ import absolute_import
25 from __future__ import print_function
26 from __future__ import unicode_literals
27
28 import argparse
29 import itertools
30 import operator
31 from collections import namedtuple
32
33 import requests
34
35
36 GITHUB_API = 'https://api.github.com/repos'
37
38
39 class Version(namedtuple('_Version', 'major minor patch rc')):
40
41 @classmethod
42 def parse(cls, version):
43 version = version.lstrip('v')
44 version, _, rc = version.partition('-')
45 major, minor, patch = version.split('.', 3)
46 return cls(int(major), int(minor), int(patch), rc)
47
48 @property
49 def major_minor(self):
50 return self.major, self.minor
51
52 @property
53 def order(self):
54 """Return a representation that allows this object to be sorted
55 correctly with the default comparator.
56 """
57 # rc releases should appear before official releases
58 rc = (0, self.rc) if self.rc else (1, )
59 return (self.major, self.minor, self.patch) + rc
60
61 def __str__(self):
62 rc = '-{}'.format(self.rc) if self.rc else ''
63 return '.'.join(map(str, self[:3])) + rc
64
65
66 def group_versions(versions):
67 """Group versions by `major.minor` releases.
68
69 Example:
70
71 >>> group_versions([
72 Version(1, 0, 0),
73 Version(2, 0, 0, 'rc1'),
74 Version(2, 0, 0),
75 Version(2, 1, 0),
76 ])
77
78 [
79 [Version(1, 0, 0)],
80 [Version(2, 0, 0), Version(2, 0, 0, 'rc1')],
81 [Version(2, 1, 0)],
82 ]
83 """
84 return list(
85 list(releases)
86 for _, releases
87 in itertools.groupby(versions, operator.attrgetter('major_minor'))
88 )
89
90
91 def get_latest_versions(versions, num=1):
92 """Return a list of the most recent versions for each major.minor version
93 group.
94 """
95 versions = group_versions(versions)
96 return [versions[index][0] for index in range(num)]
97
98
99 def get_default(versions):
100 """Return a :class:`Version` for the latest non-rc version."""
101 for version in versions:
102 if not version.rc:
103 return version
104
105
106 def get_github_releases(project):
107 """Query the Github API for a list of version tags and return them in
108 sorted order.
109
110 See https://developer.github.com/v3/repos/#list-tags
111 """
112 url = '{}/{}/tags'.format(GITHUB_API, project)
113 response = requests.get(url)
114 response.raise_for_status()
115 versions = [Version.parse(tag['name']) for tag in response.json()]
116 return sorted(versions, reverse=True, key=operator.attrgetter('order'))
117
118
119 def parse_args(argv):
120 parser = argparse.ArgumentParser(description=__doc__)
121 parser.add_argument('project', help="Github project name (ex: docker/docker)")
122 parser.add_argument('command', choices=['recent', 'default'])
123 parser.add_argument('-n', '--num', type=int, default=2,
124 help="Number of versions to return from `recent`")
125 return parser.parse_args(argv)
126
127
128 def main(argv=None):
129 args = parse_args(argv)
130 versions = get_github_releases(args.project)
131
132 if args.command == 'recent':
133 print(' '.join(map(str, get_latest_versions(versions, args.num))))
134 elif args.command == 'default':
135 print(get_default(versions))
136 else:
137 raise ValueError("Unknown command {}".format(args.command))
138
139
140 if __name__ == "__main__":
141 main()
142
[end of script/versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
Repository: docker/compose
Base commit: 7b5bad6050e337ca41d8f1a0e80b44787534e92f

Merge build args when using multiple compose files (or when extending services)
Based on the behavior of `environment` and `labels`, as well as `build.image`, `build.context` etc, I would also expect `build.args` to be merged, instead of being replaced.
To give an example:
## Input
**docker-compose.yml:**
``` yaml
version: "2"
services:
my_service:
build:
context: my-app
args:
SOME_VARIABLE: "42"
```
**docker-compose.override.yml:**
``` yaml
version: "2"
services:
my_service:
build:
args:
HTTP_PROXY: http://proxy.somewhere:80
HTTPS_PROXY: http://proxy.somewhere:80
NO_PROXY: somewhere,localhost
```
**my-app/Dockerfile**
``` Dockerfile
# Just needed to be able to use `build:`
FROM busybox:latest
ARG SOME_VARIABLE=xyz
RUN echo "$SOME_VARIABLE" > /etc/example
```
## Current Output
``` bash
$ docker-compose config
networks: {}
services:
my_service:
build:
args:
HTTPS_PROXY: http://proxy.somewhere:80
HTTP_PROXY: http://proxy.somewhere:80
NO_PROXY: somewhere,localhost
context: <project-dir>\my-app
version: '2.0'
volumes: {}
```
## Expected Output
``` bash
$ docker-compose config
networks: {}
services:
my_service:
build:
args:
SOME_VARIABLE: 42 # Note the merged variable here
HTTPS_PROXY: http://proxy.somewhere:80
HTTP_PROXY: http://proxy.somewhere:80
NO_PROXY: somewhere,localhost
context: <project-dir>\my-app
version: '2.0'
volumes: {}
```
## Version Information
``` bash
$ docker-compose version
docker-compose version 1.6.0, build cdb920a
docker-py version: 1.7.0
CPython version: 2.7.11
OpenSSL version: OpenSSL 1.0.2d 9 Jul 2015
```
# Implementation proposal
I mainly want to get clarification on what the desired behavior is, so that I can possibly help implementing it, maybe even for `1.6.1`.
Personally, I'd like the behavior to be to merge the `build.args` key (as outlined above), for a couple of reasons:
- Principle of least surprise/consistency with `environment`, `labels`, `ports` and so on.
- It enables scenarios like the one outlined above, where the images require some transient configuration to build, in addition to other build variables which actually have an influence on the final image.
The scenario that one wants to replace all build args at once is not very likely IMO; why would you define base build variables in the first place if you're going to replace them anyway?
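To make the proposed semantics concrete, here is a minimal sketch in plain Python of how the two `args` mappings from the example above would be combined; `merge_build_args` is a hypothetical helper written for this illustration, not Compose's actual code:
``` python
# Hypothetical illustration of the proposed merge: start from the base
# file's build args and layer the override file's args on top, so keys
# only present in the base (SOME_VARIABLE) survive and the override
# file wins for any duplicate keys.
def merge_build_args(base_args, override_args):
    merged = dict(base_args or {})
    merged.update(override_args or {})
    return merged

base_args = {'SOME_VARIABLE': '42'}
override_args = {
    'HTTP_PROXY': 'http://proxy.somewhere:80',
    'HTTPS_PROXY': 'http://proxy.somewhere:80',
    'NO_PROXY': 'somewhere,localhost',
}
print(merge_build_args(base_args, override_args))
# {'SOME_VARIABLE': '42', 'HTTP_PROXY': 'http://proxy.somewhere:80',
#  'HTTPS_PROXY': 'http://proxy.somewhere:80', 'NO_PROXY': 'somewhere,localhost'}
```
This mirrors how `environment` and `labels` already behave when multiple files are combined.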
# Alternative behavior: Output a warning
If the behavior should stay the same as it is now, i.e. to fully replace the `build.args` keys, then `docker-compose` should at least output a warning IMO. It took me some time to figure out that `docker-compose` was ignoring the build args in the base `docker-compose.yml` file.
Discussion:
I think we should merge build args. It was probably just overlooked since this is the first time we have nested configuration that we actually want to merge (other nested config like `logging` is not merged by design, because changing one option likely invalidates the rest).
I think the implementation would be to use the new `MergeDict()` object in `merge_build()`. Currently we just use `update()`.
A PR for this would be great!
I'm going to pick this up since it can be fixed at the same time as #2874
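For illustration, here is a rough sketch of the difference between the current `update()` behaviour and the deeper merge described above; this is a toy example written for this note, not Compose's real `MergeDict` class (the actual change is in the patch below):
``` python
base = {'context': 'my-app', 'args': {'SOME_VARIABLE': '42'}}
override = {'args': {'NO_PROXY': 'somewhere,localhost'}}

# Current behaviour: update() at the `build` level replaces the whole
# `args` mapping, so the base file's SOME_VARIABLE is silently dropped.
replaced = dict(base)
replaced.update(override)
print(replaced['args'])  # {'NO_PROXY': 'somewhere,localhost'}

# Proposed behaviour: merge one level deeper so both sets of args survive.
merged = dict(base)
merged_args = dict(base.get('args', {}))
merged_args.update(override.get('args', {}))
merged['args'] = merged_args
print(merged['args'])
# {'SOME_VARIABLE': '42', 'NO_PROXY': 'somewhere,localhost'}
```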
Created at: 2016-02-10T18:55:23Z

<patch>
diff --git a/compose/config/config.py b/compose/config/config.py
--- a/compose/config/config.py
+++ b/compose/config/config.py
@@ -713,29 +713,24 @@ def merge_service_dicts(base, override, version):
if version == V1:
legacy_v1_merge_image_or_build(md, base, override)
- else:
- merge_build(md, base, override)
+ elif md.needs_merge('build'):
+ md['build'] = merge_build(md, base, override)
return dict(md)
def merge_build(output, base, override):
- build = {}
-
- if 'build' in base:
- if isinstance(base['build'], six.string_types):
- build['context'] = base['build']
- else:
- build.update(base['build'])
-
- if 'build' in override:
- if isinstance(override['build'], six.string_types):
- build['context'] = override['build']
- else:
- build.update(override['build'])
-
- if build:
- output['build'] = build
+ def to_dict(service):
+ build_config = service.get('build', {})
+ if isinstance(build_config, six.string_types):
+ return {'context': build_config}
+ return build_config
+
+ md = MergeDict(to_dict(base), to_dict(override))
+ md.merge_scalar('context')
+ md.merge_scalar('dockerfile')
+ md.merge_mapping('args', parse_build_arguments)
+ return dict(md)
def legacy_v1_merge_image_or_build(output, base, override):
</patch>
ipython__ipython-13417 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add line number to error messages
As suggested in #13169, it adds line number to error messages, in order to make them more friendly.
![image](https://user-images.githubusercontent.com/20190646/139513782-ea8d42ab-9c73-4452-b607-5c54ca50a125.png)
That was the file used in the test
![image](https://user-images.githubusercontent.com/20190646/139513827-0aa4bed3-682f-40ee-a8ea-4f0e6e3fbc34.png)
</issue>
<code>
[start of README.rst]
1 .. image:: https://codecov.io/github/ipython/ipython/coverage.svg?branch=master
2 :target: https://codecov.io/github/ipython/ipython?branch=master
3
4 .. image:: https://img.shields.io/pypi/v/IPython.svg
5 :target: https://pypi.python.org/pypi/ipython
6
7 .. image:: https://github.com/ipython/ipython/actions/workflows/test.yml/badge.svg
8 :target: https://github.com/ipython/ipython/actions/workflows/test.yml
9
10 .. image:: https://www.codetriage.com/ipython/ipython/badges/users.svg
11 :target: https://www.codetriage.com/ipython/ipython/
12
13 .. image:: https://raster.shields.io/badge/Follows-NEP29-brightgreen.png
14 :target: https://numpy.org/neps/nep-0029-deprecation_policy.html
15
16
17 ===========================================
18 IPython: Productive Interactive Computing
19 ===========================================
20
21 Overview
22 ========
23
24 Welcome to IPython. Our full documentation is available on `ipython.readthedocs.io
25 <https://ipython.readthedocs.io/en/stable/>`_ and contains information on how to install, use, and
26 contribute to the project.
27 IPython (Interactive Python) is a command shell for interactive computing in multiple programming languages, originally developed for the Python programming language, that offers introspection, rich media, shell syntax, tab completion, and history.
28
29 **IPython versions and Python Support**
30
31 Starting with IPython 7.10, IPython follows `NEP 29 <https://numpy.org/neps/nep-0029-deprecation_policy.html>`_
32
33 **IPython 7.17+** requires Python version 3.7 and above.
34
35 **IPython 7.10+** requires Python version 3.6 and above.
36
37 **IPython 7.0** requires Python version 3.5 and above.
38
39 **IPython 6.x** requires Python version 3.3 and above.
40
41 **IPython 5.x LTS** is the compatible release for Python 2.7.
42 If you require Python 2 support, you **must** use IPython 5.x LTS. Please
43 update your project configurations and requirements as necessary.
44
45
46 The Notebook, Qt console and a number of other pieces are now parts of *Jupyter*.
47 See the `Jupyter installation docs <https://jupyter.readthedocs.io/en/latest/install.html>`__
48 if you want to use these.
49
50 Main features of IPython
51 ========================
52 Comprehensive object introspection.
53
54 Input history, persistent across sessions.
55
56 Caching of output results during a session with automatically generated references.
57
58 Extensible tab completion, with support by default for completion of python variables and keywords, filenames and function keywords.
59
60 Extensible system of ‘magic’ commands for controlling the environment and performing many tasks related to IPython or the operating system.
61
62 A rich configuration system with easy switching between different setups (simpler than changing $PYTHONSTARTUP environment variables every time).
63
64 Session logging and reloading.
65
66 Extensible syntax processing for special purpose situations.
67
68 Access to the system shell with user-extensible alias system.
69
70 Easily embeddable in other Python programs and GUIs.
71
72 Integrated access to the pdb debugger and the Python profiler.
73
74
75 Development and Instant running
76 ===============================
77
78 You can find the latest version of the development documentation on `readthedocs
79 <https://ipython.readthedocs.io/en/latest/>`_.
80
81 You can run IPython from this directory without even installing it system-wide
82 by typing at the terminal::
83
84 $ python -m IPython
85
86 Or see the `development installation docs
87 <https://ipython.readthedocs.io/en/latest/install/install.html#installing-the-development-version>`_
88 for the latest revision on read the docs.
89
90 Documentation and installation instructions for older version of IPython can be
91 found on the `IPython website <https://ipython.org/documentation.html>`_
92
93
94
95 IPython requires Python version 3 or above
96 ==========================================
97
98 Starting with version 6.0, IPython does not support Python 2.7, 3.0, 3.1, or
99 3.2.
100
101 For a version compatible with Python 2.7, please install the 5.x LTS Long Term
102 Support version.
103
104 If you are encountering this error message you are likely trying to install or
105 use IPython from source. You need to checkout the remote 5.x branch. If you are
106 using git the following should work::
107
108 $ git fetch origin
109 $ git checkout 5.x
110
111 If you encounter this error message with a regular install of IPython, then you
112 likely need to update your package manager, for example if you are using `pip`
113 check the version of pip with::
114
115 $ pip --version
116
117 You will need to update pip to version 9.0.1 or greater. If you are not using
118 pip, please inquire with the maintainers of the package for your package
119 manager.
120
121 For more information see one of our blog posts:
122
123 https://blog.jupyter.org/release-of-ipython-5-0-8ce60b8d2e8e
124
125 As well as the following Pull-Request for discussion:
126
127 https://github.com/ipython/ipython/pull/9900
128
129 This error also occurs if you are invoking ``setup.py`` directly – which you
130 should not – or are using ``easy_install``. If this is the case, use ``pip
131 install .`` instead of ``setup.py install``, and ``pip install -e .`` instead
132 of ``setup.py develop``. If you are depending on IPython as a dependency you may
133 also want to have a conditional dependency on IPython depending on the Python
134 version::
135
136 install_req = ['ipython']
137 if sys.version_info[0] < 3 and 'bdist_wheel' not in sys.argv:
138 install_req.remove('ipython')
139 install_req.append('ipython<6')
140
141 setup(
142 ...
143 install_requires=install_req
144 )
145
146 Alternatives to IPython
147 =======================
148
149 IPython may not be to your taste; if that's the case there might be similar
150 project that you might want to use:
151
152 - The classic Python REPL.
153 - `bpython <https://bpython-interpreter.org/>`_
154 - `mypython <https://www.asmeurer.com/mypython/>`_
155 - `ptpython and ptipython <https://pypi.org/project/ptpython/>`_
156 - `Xonsh <https://xon.sh/>`_
157
158 Ignoring commits with git blame.ignoreRevsFile
159 ==============================================
160
161 As of git 2.23, it is possible to make formatting changes without breaking
162 ``git blame``. See the `git documentation
163 <https://git-scm.com/docs/git-config#Documentation/git-config.txt-blameignoreRevsFile>`_
164 for more details.
165
166 To use this feature you must:
167
168 - Install git >= 2.23
169 - Configure your local git repo by running:
170 - POSIX: ``tools/configure-git-blame-ignore-revs.sh``
171 - Windows: ``tools\configure-git-blame-ignore-revs.bat``
172
[end of README.rst]
[start of IPython/core/display.py]
1 # -*- coding: utf-8 -*-
2 """Top-level display functions for displaying object in different formats."""
3
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7
8 from binascii import b2a_base64, hexlify
9 import html
10 import json
11 import mimetypes
12 import os
13 import struct
14 import warnings
15 from copy import deepcopy
16 from os.path import splitext
17 from pathlib import Path, PurePath
18
19 from IPython.utils.py3compat import cast_unicode
20 from IPython.testing.skipdoctest import skip_doctest
21 from . import display_functions
22
23
24 __all__ = ['display_pretty', 'display_html', 'display_markdown',
25 'display_svg', 'display_png', 'display_jpeg', 'display_latex', 'display_json',
26 'display_javascript', 'display_pdf', 'DisplayObject', 'TextDisplayObject',
27 'Pretty', 'HTML', 'Markdown', 'Math', 'Latex', 'SVG', 'ProgressBar', 'JSON',
28 'GeoJSON', 'Javascript', 'Image', 'set_matplotlib_formats',
29 'set_matplotlib_close',
30 'Video']
31
32 _deprecated_names = ["display", "clear_output", "publish_display_data", "update_display", "DisplayHandle"]
33
34 __all__ = __all__ + _deprecated_names
35
36
37 # ----- warn to import from IPython.display -----
38
39 from warnings import warn
40
41
42 def __getattr__(name):
43 if name in _deprecated_names:
44 warn(f"Importing {name} from IPython.core.display is deprecated since IPython 7.14, please import from IPython display", DeprecationWarning, stacklevel=2)
45 return getattr(display_functions, name)
46
47 if name in globals().keys():
48 return globals()[name]
49 else:
50 raise AttributeError(f"module {__name__} has no attribute {name}")
51
52
53 #-----------------------------------------------------------------------------
54 # utility functions
55 #-----------------------------------------------------------------------------
56
57 def _safe_exists(path):
58 """Check path, but don't let exceptions raise"""
59 try:
60 return os.path.exists(path)
61 except Exception:
62 return False
63
64
65 def _display_mimetype(mimetype, objs, raw=False, metadata=None):
66 """internal implementation of all display_foo methods
67
68 Parameters
69 ----------
70 mimetype : str
71 The mimetype to be published (e.g. 'image/png')
72 *objs : object
73 The Python objects to display, or if raw=True raw text data to
74 display.
75 raw : bool
76 Are the data objects raw data or Python objects that need to be
77 formatted before display? [default: False]
78 metadata : dict (optional)
79 Metadata to be associated with the specific mimetype output.
80 """
81 if metadata:
82 metadata = {mimetype: metadata}
83 if raw:
84 # turn list of pngdata into list of { 'image/png': pngdata }
85 objs = [ {mimetype: obj} for obj in objs ]
86 display(*objs, raw=raw, metadata=metadata, include=[mimetype])
87
88 #-----------------------------------------------------------------------------
89 # Main functions
90 #-----------------------------------------------------------------------------
91
92
93 def display_pretty(*objs, **kwargs):
94 """Display the pretty (default) representation of an object.
95
96 Parameters
97 ----------
98 *objs : object
99 The Python objects to display, or if raw=True raw text data to
100 display.
101 raw : bool
102 Are the data objects raw data or Python objects that need to be
103 formatted before display? [default: False]
104 metadata : dict (optional)
105 Metadata to be associated with the specific mimetype output.
106 """
107 _display_mimetype('text/plain', objs, **kwargs)
108
109
110 def display_html(*objs, **kwargs):
111 """Display the HTML representation of an object.
112
113 Note: If raw=False and the object does not have a HTML
114 representation, no HTML will be shown.
115
116 Parameters
117 ----------
118 *objs : object
119 The Python objects to display, or if raw=True raw HTML data to
120 display.
121 raw : bool
122 Are the data objects raw data or Python objects that need to be
123 formatted before display? [default: False]
124 metadata : dict (optional)
125 Metadata to be associated with the specific mimetype output.
126 """
127 _display_mimetype('text/html', objs, **kwargs)
128
129
130 def display_markdown(*objs, **kwargs):
131 """Displays the Markdown representation of an object.
132
133 Parameters
134 ----------
135 *objs : object
136 The Python objects to display, or if raw=True raw markdown data to
137 display.
138 raw : bool
139 Are the data objects raw data or Python objects that need to be
140 formatted before display? [default: False]
141 metadata : dict (optional)
142 Metadata to be associated with the specific mimetype output.
143 """
144
145 _display_mimetype('text/markdown', objs, **kwargs)
146
147
148 def display_svg(*objs, **kwargs):
149 """Display the SVG representation of an object.
150
151 Parameters
152 ----------
153 *objs : object
154 The Python objects to display, or if raw=True raw svg data to
155 display.
156 raw : bool
157 Are the data objects raw data or Python objects that need to be
158 formatted before display? [default: False]
159 metadata : dict (optional)
160 Metadata to be associated with the specific mimetype output.
161 """
162 _display_mimetype('image/svg+xml', objs, **kwargs)
163
164
165 def display_png(*objs, **kwargs):
166 """Display the PNG representation of an object.
167
168 Parameters
169 ----------
170 *objs : object
171 The Python objects to display, or if raw=True raw png data to
172 display.
173 raw : bool
174 Are the data objects raw data or Python objects that need to be
175 formatted before display? [default: False]
176 metadata : dict (optional)
177 Metadata to be associated with the specific mimetype output.
178 """
179 _display_mimetype('image/png', objs, **kwargs)
180
181
182 def display_jpeg(*objs, **kwargs):
183 """Display the JPEG representation of an object.
184
185 Parameters
186 ----------
187 *objs : object
188 The Python objects to display, or if raw=True raw JPEG data to
189 display.
190 raw : bool
191 Are the data objects raw data or Python objects that need to be
192 formatted before display? [default: False]
193 metadata : dict (optional)
194 Metadata to be associated with the specific mimetype output.
195 """
196 _display_mimetype('image/jpeg', objs, **kwargs)
197
198
199 def display_latex(*objs, **kwargs):
200 """Display the LaTeX representation of an object.
201
202 Parameters
203 ----------
204 *objs : object
205 The Python objects to display, or if raw=True raw latex data to
206 display.
207 raw : bool
208 Are the data objects raw data or Python objects that need to be
209 formatted before display? [default: False]
210 metadata : dict (optional)
211 Metadata to be associated with the specific mimetype output.
212 """
213 _display_mimetype('text/latex', objs, **kwargs)
214
215
216 def display_json(*objs, **kwargs):
217 """Display the JSON representation of an object.
218
219 Note that not many frontends support displaying JSON.
220
221 Parameters
222 ----------
223 *objs : object
224 The Python objects to display, or if raw=True raw json data to
225 display.
226 raw : bool
227 Are the data objects raw data or Python objects that need to be
228 formatted before display? [default: False]
229 metadata : dict (optional)
230 Metadata to be associated with the specific mimetype output.
231 """
232 _display_mimetype('application/json', objs, **kwargs)
233
234
235 def display_javascript(*objs, **kwargs):
236 """Display the Javascript representation of an object.
237
238 Parameters
239 ----------
240 *objs : object
241 The Python objects to display, or if raw=True raw javascript data to
242 display.
243 raw : bool
244 Are the data objects raw data or Python objects that need to be
245 formatted before display? [default: False]
246 metadata : dict (optional)
247 Metadata to be associated with the specific mimetype output.
248 """
249 _display_mimetype('application/javascript', objs, **kwargs)
250
251
252 def display_pdf(*objs, **kwargs):
253 """Display the PDF representation of an object.
254
255 Parameters
256 ----------
257 *objs : object
258 The Python objects to display, or if raw=True raw javascript data to
259 display.
260 raw : bool
261 Are the data objects raw data or Python objects that need to be
262 formatted before display? [default: False]
263 metadata : dict (optional)
264 Metadata to be associated with the specific mimetype output.
265 """
266 _display_mimetype('application/pdf', objs, **kwargs)
267
268
269 #-----------------------------------------------------------------------------
270 # Smart classes
271 #-----------------------------------------------------------------------------
272
273
274 class DisplayObject(object):
275 """An object that wraps data to be displayed."""
276
277 _read_flags = 'r'
278 _show_mem_addr = False
279 metadata = None
280
281 def __init__(self, data=None, url=None, filename=None, metadata=None):
282 """Create a display object given raw data.
283
284 When this object is returned by an expression or passed to the
285 display function, it will result in the data being displayed
286 in the frontend. The MIME type of the data should match the
287 subclasses used, so the Png subclass should be used for 'image/png'
288 data. If the data is a URL, the data will first be downloaded
289 and then displayed. If
290
291 Parameters
292 ----------
293 data : unicode, str or bytes
294 The raw data or a URL or file to load the data from
295 url : unicode
296 A URL to download the data from.
297 filename : unicode
298 Path to a local file to load the data from.
299 metadata : dict
300 Dict of metadata associated to be the object when displayed
301 """
302 if isinstance(data, (Path, PurePath)):
303 data = str(data)
304
305 if data is not None and isinstance(data, str):
306 if data.startswith('http') and url is None:
307 url = data
308 filename = None
309 data = None
310 elif _safe_exists(data) and filename is None:
311 url = None
312 filename = data
313 data = None
314
315 self.url = url
316 self.filename = filename
317 # because of @data.setter methods in
318 # subclasses ensure url and filename are set
319 # before assigning to self.data
320 self.data = data
321
322 if metadata is not None:
323 self.metadata = metadata
324 elif self.metadata is None:
325 self.metadata = {}
326
327 self.reload()
328 self._check_data()
329
330 def __repr__(self):
331 if not self._show_mem_addr:
332 cls = self.__class__
333 r = "<%s.%s object>" % (cls.__module__, cls.__name__)
334 else:
335 r = super(DisplayObject, self).__repr__()
336 return r
337
338 def _check_data(self):
339 """Override in subclasses if there's something to check."""
340 pass
341
342 def _data_and_metadata(self):
343 """shortcut for returning metadata with shape information, if defined"""
344 if self.metadata:
345 return self.data, deepcopy(self.metadata)
346 else:
347 return self.data
348
349 def reload(self):
350 """Reload the raw data from file or URL."""
351 if self.filename is not None:
352 with open(self.filename, self._read_flags) as f:
353 self.data = f.read()
354 elif self.url is not None:
355 # Deferred import
356 from urllib.request import urlopen
357 response = urlopen(self.url)
358 data = response.read()
359 # extract encoding from header, if there is one:
360 encoding = None
361 if 'content-type' in response.headers:
362 for sub in response.headers['content-type'].split(';'):
363 sub = sub.strip()
364 if sub.startswith('charset'):
365 encoding = sub.split('=')[-1].strip()
366 break
367 if 'content-encoding' in response.headers:
368 # TODO: do deflate?
369 if 'gzip' in response.headers['content-encoding']:
370 import gzip
371 from io import BytesIO
372 with gzip.open(BytesIO(data), 'rt', encoding=encoding) as fp:
373 encoding = None
374 data = fp.read()
375
376 # decode data, if an encoding was specified
377 # We only touch self.data once since
378 # subclasses such as SVG have @data.setter methods
379 # that transform self.data into ... well svg.
380 if encoding:
381 self.data = data.decode(encoding, 'replace')
382 else:
383 self.data = data
384
385
386 class TextDisplayObject(DisplayObject):
387 """Validate that display data is text"""
388 def _check_data(self):
389 if self.data is not None and not isinstance(self.data, str):
390 raise TypeError("%s expects text, not %r" % (self.__class__.__name__, self.data))
391
392 class Pretty(TextDisplayObject):
393
394 def _repr_pretty_(self, pp, cycle):
395 return pp.text(self.data)
396
397
398 class HTML(TextDisplayObject):
399
400 def __init__(self, data=None, url=None, filename=None, metadata=None):
401 def warn():
402 if not data:
403 return False
404
405 #
406 # Avoid calling lower() on the entire data, because it could be a
407 # long string and we're only interested in its beginning and end.
408 #
409 prefix = data[:10].lower()
410 suffix = data[-10:].lower()
411 return prefix.startswith("<iframe ") and suffix.endswith("</iframe>")
412
413 if warn():
414 warnings.warn("Consider using IPython.display.IFrame instead")
415 super(HTML, self).__init__(data=data, url=url, filename=filename, metadata=metadata)
416
417 def _repr_html_(self):
418 return self._data_and_metadata()
419
420 def __html__(self):
421 """
422 This method exists to inform other HTML-using modules (e.g. Markupsafe,
423 htmltag, etc) that this object is HTML and does not need things like
424 special characters (<>&) escaped.
425 """
426 return self._repr_html_()
427
428
429 class Markdown(TextDisplayObject):
430
431 def _repr_markdown_(self):
432 return self._data_and_metadata()
433
434
435 class Math(TextDisplayObject):
436
437 def _repr_latex_(self):
438 s = r"$\displaystyle %s$" % self.data.strip('$')
439 if self.metadata:
440 return s, deepcopy(self.metadata)
441 else:
442 return s
443
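# Usage sketch (added comment for illustration; uses only the Math class defined
# above): Math wraps its input in a display-style LaTeX snippet, stripping any
# surrounding dollar signs first, e.g.
#
#     Math(r'\alpha^2')._repr_latex_()   # -> r'$\displaystyle \alpha^2$'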
444
445 class Latex(TextDisplayObject):
446
447 def _repr_latex_(self):
448 return self._data_and_metadata()
449
450
451 class SVG(DisplayObject):
452 """Embed an SVG into the display.
453
454     Note: if you just want to view an SVG image via a URL, use :class:`Image` with
455     a url=URL keyword argument.
456 """
457
458 _read_flags = 'rb'
459 # wrap data in a property, which extracts the <svg> tag, discarding
460 # document headers
461 _data = None
462
463 @property
464 def data(self):
465 return self._data
466
467 @data.setter
468 def data(self, svg):
469 if svg is None:
470 self._data = None
471 return
472 # parse into dom object
473 from xml.dom import minidom
474 x = minidom.parseString(svg)
475 # get svg tag (should be 1)
476 found_svg = x.getElementsByTagName('svg')
477 if found_svg:
478 svg = found_svg[0].toxml()
479 else:
480 # fallback on the input, trust the user
481 # but this is probably an error.
482 pass
483 svg = cast_unicode(svg)
484 self._data = svg
485
486 def _repr_svg_(self):
487 return self._data_and_metadata()
488
489 class ProgressBar(DisplayObject):
490     """ProgressBar supports displaying a progress-bar-like element.
491     """
492 def __init__(self, total):
493 """Creates a new progressbar
494
495 Parameters
496 ----------
497 total : int
498 maximum size of the progressbar
499 """
500 self.total = total
501 self._progress = 0
502 self.html_width = '60ex'
503 self.text_width = 60
504 self._display_id = hexlify(os.urandom(8)).decode('ascii')
505
506 def __repr__(self):
507 fraction = self.progress / self.total
508 filled = '=' * int(fraction * self.text_width)
509 rest = ' ' * (self.text_width - len(filled))
510 return '[{}{}] {}/{}'.format(
511 filled, rest,
512 self.progress, self.total,
513 )
514
515 def _repr_html_(self):
516 return "<progress style='width:{}' max='{}' value='{}'></progress>".format(
517 self.html_width, self.total, self.progress)
518
519 def display(self):
520 display(self, display_id=self._display_id)
521
522 def update(self):
523 display(self, display_id=self._display_id, update=True)
524
525 @property
526 def progress(self):
527 return self._progress
528
529 @progress.setter
530 def progress(self, value):
531 self._progress = value
532 self.update()
533
534 def __iter__(self):
535 self.display()
536 self._progress = -1 # First iteration is 0
537 return self
538
539 def __next__(self):
540 """Returns current value and increments display by one."""
541 self.progress += 1
542 if self.progress < self.total:
543 return self.progress
544 else:
545 raise StopIteration()
546
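# Usage sketch (added comment for illustration; assumes a frontend that renders
# the 'text/html' progress element, and `do_work` is a hypothetical function):
#
#     bar = ProgressBar(100)
#     for step in bar:        # displays the bar once, then updates it in place
#         do_work(step)
#
# Assigning ``bar.progress = n`` directly also re-renders the bar via update().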
547 class JSON(DisplayObject):
548 """JSON expects a JSON-able dict or list
549
550 not an already-serialized JSON string.
551
552 Scalar types (None, number, string) are not allowed, only dict or list containers.
553 """
554 # wrap data in a property, which warns about passing already-serialized JSON
555 _data = None
556 def __init__(self, data=None, url=None, filename=None, expanded=False, metadata=None, root='root', **kwargs):
557 """Create a JSON display object given raw data.
558
559 Parameters
560 ----------
561 data : dict or list
562 JSON data to display. Not an already-serialized JSON string.
563 Scalar types (None, number, string) are not allowed, only dict
564 or list containers.
565 url : unicode
566 A URL to download the data from.
567 filename : unicode
568 Path to a local file to load the data from.
569 expanded : boolean
570 Metadata to control whether a JSON display component is expanded.
571 metadata : dict
572 Specify extra metadata to attach to the json display object.
573 root : str
574 The name of the root element of the JSON tree
575 """
576 self.metadata = {
577 'expanded': expanded,
578 'root': root,
579 }
580 if metadata:
581 self.metadata.update(metadata)
582 if kwargs:
583 self.metadata.update(kwargs)
584 super(JSON, self).__init__(data=data, url=url, filename=filename)
585
586 def _check_data(self):
587 if self.data is not None and not isinstance(self.data, (dict, list)):
588 raise TypeError("%s expects JSONable dict or list, not %r" % (self.__class__.__name__, self.data))
589
590 @property
591 def data(self):
592 return self._data
593
594 @data.setter
595 def data(self, data):
596 if isinstance(data, (Path, PurePath)):
597 data = str(data)
598
599 if isinstance(data, str):
600 if self.filename is None and self.url is None:
601 warnings.warn("JSON expects JSONable dict or list, not JSON strings")
602 data = json.loads(data)
603 self._data = data
604
605 def _data_and_metadata(self):
606 return self.data, self.metadata
607
608 def _repr_json_(self):
609 return self._data_and_metadata()
610
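# Usage sketch (added comment for illustration): pass a JSON-able container,
# not a serialized string; ``expanded`` and ``root`` end up in the display
# metadata under the application/json mimetype.
#
#     JSON({'a': [1, 2, 3]}, expanded=True, root='result')
#
# Passing a JSON string still works, but emits a warning and is parsed first.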
611 _css_t = """var link = document.createElement("link");
612 link.rel = "stylesheet";
613 link.type = "text/css";
614 link.href = "%s";
615 document.head.appendChild(link);
616 """
617
618 _lib_t1 = """new Promise(function(resolve, reject) {
619 var script = document.createElement("script");
620 script.onload = resolve;
621 script.onerror = reject;
622 script.src = "%s";
623 document.head.appendChild(script);
624 }).then(() => {
625 """
626
627 _lib_t2 = """
628 });"""
629
630 class GeoJSON(JSON):
631 """GeoJSON expects JSON-able dict
632
633 not an already-serialized JSON string.
634
635 Scalar types (None, number, string) are not allowed, only dict containers.
636 """
637
638 def __init__(self, *args, **kwargs):
639 """Create a GeoJSON display object given raw data.
640
641 Parameters
642 ----------
643 data : dict or list
644             The GeoJSON data. Not an already-serialized JSON string.
645 Scalar types (None, number, string) are not allowed, only dict
646 or list containers.
647 url_template : string
648 Leaflet TileLayer URL template: http://leafletjs.com/reference.html#url-template
649 layer_options : dict
650 Leaflet TileLayer options: http://leafletjs.com/reference.html#tilelayer-options
651 url : unicode
652 A URL to download the data from.
653 filename : unicode
654 Path to a local file to load the data from.
655 metadata : dict
656 Specify extra metadata to attach to the json display object.
657
658 Examples
659 --------
660 The following will display an interactive map of Mars with a point of
661         interest on frontends that support GeoJSON display.
662
663 >>> from IPython.display import GeoJSON
664
665 >>> GeoJSON(data={
666 ... "type": "Feature",
667 ... "geometry": {
668 ... "type": "Point",
669 ... "coordinates": [-81.327, 296.038]
670 ... }
671 ... },
672 ... url_template="http://s3-eu-west-1.amazonaws.com/whereonmars.cartodb.net/{basemap_id}/{z}/{x}/{y}.png",
673 ... layer_options={
674 ... "basemap_id": "celestia_mars-shaded-16k_global",
675 ... "attribution" : "Celestia/praesepe",
676 ... "minZoom" : 0,
677 ... "maxZoom" : 18,
678 ... })
679 <IPython.core.display.GeoJSON object>
680
681 In the terminal IPython, you will only see the text representation of
682 the GeoJSON object.
683
684 """
685
686 super(GeoJSON, self).__init__(*args, **kwargs)
687
688
689 def _ipython_display_(self):
690 bundle = {
691 'application/geo+json': self.data,
692 'text/plain': '<IPython.display.GeoJSON object>'
693 }
694 metadata = {
695 'application/geo+json': self.metadata
696 }
697 display(bundle, metadata=metadata, raw=True)
698
699 class Javascript(TextDisplayObject):
700
701 def __init__(self, data=None, url=None, filename=None, lib=None, css=None):
702 """Create a Javascript display object given raw data.
703
704 When this object is returned by an expression or passed to the
705 display function, it will result in the data being displayed
706 in the frontend. If the data is a URL, the data will first be
707 downloaded and then displayed.
708
709 In the Notebook, the containing element will be available as `element`,
710 and jQuery will be available. Content appended to `element` will be
711 visible in the output area.
712
713 Parameters
714 ----------
715 data : unicode, str or bytes
716 The Javascript source code or a URL to download it from.
717 url : unicode
718 A URL to download the data from.
719 filename : unicode
720 Path to a local file to load the data from.
721 lib : list or str
722 A sequence of Javascript library URLs to load asynchronously before
723 running the source code. The full URLs of the libraries should
724 be given. A single Javascript library URL can also be given as a
725 string.
726 css : list or str
727 A sequence of css files to load before running the source code.
728 The full URLs of the css files should be given. A single css URL
729 can also be given as a string.
730 """
731 if isinstance(lib, str):
732 lib = [lib]
733 elif lib is None:
734 lib = []
735 if isinstance(css, str):
736 css = [css]
737 elif css is None:
738 css = []
739 if not isinstance(lib, (list,tuple)):
740 raise TypeError('expected sequence, got: %r' % lib)
741 if not isinstance(css, (list,tuple)):
742 raise TypeError('expected sequence, got: %r' % css)
743 self.lib = lib
744 self.css = css
745 super(Javascript, self).__init__(data=data, url=url, filename=filename)
746
747 def _repr_javascript_(self):
748 r = ''
749 for c in self.css:
750 r += _css_t % c
751 for l in self.lib:
752 r += _lib_t1 % l
753 r += self.data
754 r += _lib_t2*len(self.lib)
755 return r
756
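# Usage sketch (added comment for illustration; the URLs below are placeholders,
# not real endpoints): css files are injected first, then each lib is loaded via
# a Promise before the source runs against the notebook output `element`.
#
#     Javascript("element.text('hi');",
#                lib="https://example.com/lib.js",
#                css=["https://example.com/style.css"])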
757 # constants for identifying png/jpeg data
758 _PNG = b'\x89PNG\r\n\x1a\n'
759 _JPEG = b'\xff\xd8'
760
761 def _pngxy(data):
762 """read the (width, height) from a PNG header"""
763 ihdr = data.index(b'IHDR')
764 # next 8 bytes are width/height
765 return struct.unpack('>ii', data[ihdr+4:ihdr+12])
766
767 def _jpegxy(data):
768 """read the (width, height) from a JPEG header"""
769 # adapted from http://www.64lines.com/jpeg-width-height
770
771 idx = 4
772 while True:
773 block_size = struct.unpack('>H', data[idx:idx+2])[0]
774 idx = idx + block_size
775 if data[idx:idx+2] == b'\xFF\xC0':
776 # found Start of Frame
777 iSOF = idx
778 break
779 else:
780 # read another block
781 idx += 2
782
783 h, w = struct.unpack('>HH', data[iSOF+5:iSOF+9])
784 return w, h
785
786 def _gifxy(data):
787 """read the (width, height) from a GIF header"""
788 return struct.unpack('<HH', data[6:10])
789
790
791 class Image(DisplayObject):
792
793 _read_flags = 'rb'
794 _FMT_JPEG = u'jpeg'
795 _FMT_PNG = u'png'
796 _FMT_GIF = u'gif'
797 _ACCEPTABLE_EMBEDDINGS = [_FMT_JPEG, _FMT_PNG, _FMT_GIF]
798 _MIMETYPES = {
799 _FMT_PNG: 'image/png',
800 _FMT_JPEG: 'image/jpeg',
801 _FMT_GIF: 'image/gif',
802 }
803
804 def __init__(
805 self,
806 data=None,
807 url=None,
808 filename=None,
809 format=None,
810 embed=None,
811 width=None,
812 height=None,
813 retina=False,
814 unconfined=False,
815 metadata=None,
816 alt=None,
817 ):
818 """Create a PNG/JPEG/GIF image object given raw data.
819
820 When this object is returned by an input cell or passed to the
821 display function, it will result in the image being displayed
822 in the frontend.
823
824 Parameters
825 ----------
826 data : unicode, str or bytes
827 The raw image data or a URL or filename to load the data from.
828 This always results in embedded image data.
829
830 url : unicode
831 A URL to download the data from. If you specify `url=`,
832 the image data will not be embedded unless you also specify `embed=True`.
833
834 filename : unicode
835 Path to a local file to load the data from.
836 Images from a file are always embedded.
837
838 format : unicode
839             The format of the image data (png/jpeg/jpg/gif). If a filename or URL is given,
840             the format will be inferred from the filename extension.
841
842 embed : bool
843 Should the image data be embedded using a data URI (True) or be
844             loaded using an <img> tag (False). Set this to True if you want the image
845 to be viewable later with no internet connection in the notebook.
846
847             Default is `True`, unless the keyword argument `url` is set, in which
848             case the default value is `False`.
849
850 Note that QtConsole is not able to display images if `embed` is set to `False`
851
852 width : int
853 Width in pixels to which to constrain the image in html
854
855 height : int
856 Height in pixels to which to constrain the image in html
857
858 retina : bool
859 Automatically set the width and height to half of the measured
860 width and height.
861 This only works for embedded images because it reads the width/height
862 from image data.
863 For non-embedded images, you can just set the desired display width
864 and height directly.
865
866 unconfined : bool
867 Set unconfined=True to disable max-width confinement of the image.
868
869 metadata : dict
870 Specify extra metadata to attach to the image.
871
872 alt : unicode
873 Alternative text for the image, for use by screen readers.
874
875 Examples
876 --------
877         Embedded image data works in the qtconsole and notebook.
878         When passed positionally, the first arg can be any of raw image data,
879         a URL, or a filename from which to load image data.
880         The result is always embedded image data for inline images.
881
882 >>> Image('http://www.google.fr/images/srpr/logo3w.png')
883 <IPython.core.display.Image object>
884
885 >>> Image('/path/to/image.jpg')
886 <IPython.core.display.Image object>
887
888 >>> Image(b'RAW_PNG_DATA...')
889 <IPython.core.display.Image object>
890
891 Specifying Image(url=...) does not embed the image data,
892         it only generates an ``<img>`` tag with a link to the source.
893 This will not work in the qtconsole or offline.
894
895 >>> Image(url='http://www.google.fr/images/srpr/logo3w.png')
896 <IPython.core.display.Image object>
897
898 """
899 if isinstance(data, (Path, PurePath)):
900 data = str(data)
901
902 if filename is not None:
903 ext = self._find_ext(filename)
904 elif url is not None:
905 ext = self._find_ext(url)
906 elif data is None:
907 raise ValueError("No image data found. Expecting filename, url, or data.")
908 elif isinstance(data, str) and (
909 data.startswith('http') or _safe_exists(data)
910 ):
911 ext = self._find_ext(data)
912 else:
913 ext = None
914
915 if format is None:
916 if ext is not None:
917 if ext == u'jpg' or ext == u'jpeg':
918 format = self._FMT_JPEG
919 elif ext == u'png':
920 format = self._FMT_PNG
921 elif ext == u'gif':
922 format = self._FMT_GIF
923 else:
924 format = ext.lower()
925 elif isinstance(data, bytes):
926 # infer image type from image data header,
927 # only if format has not been specified.
928 if data[:2] == _JPEG:
929 format = self._FMT_JPEG
930
931 # failed to detect format, default png
932 if format is None:
933 format = self._FMT_PNG
934
935 if format.lower() == 'jpg':
936 # jpg->jpeg
937 format = self._FMT_JPEG
938
939 self.format = format.lower()
940 self.embed = embed if embed is not None else (url is None)
941
942 if self.embed and self.format not in self._ACCEPTABLE_EMBEDDINGS:
943 raise ValueError("Cannot embed the '%s' image format" % (self.format))
944 if self.embed:
945 self._mimetype = self._MIMETYPES.get(self.format)
946
947 self.width = width
948 self.height = height
949 self.retina = retina
950 self.unconfined = unconfined
951 self.alt = alt
952 super(Image, self).__init__(data=data, url=url, filename=filename,
953 metadata=metadata)
954
955 if self.width is None and self.metadata.get('width', {}):
956 self.width = metadata['width']
957
958 if self.height is None and self.metadata.get('height', {}):
959 self.height = metadata['height']
960
961 if self.alt is None and self.metadata.get("alt", {}):
962 self.alt = metadata["alt"]
963
964 if retina:
965 self._retina_shape()
966
967
968 def _retina_shape(self):
969 """load pixel-doubled width and height from image data"""
970 if not self.embed:
971 return
972 if self.format == self._FMT_PNG:
973 w, h = _pngxy(self.data)
974 elif self.format == self._FMT_JPEG:
975 w, h = _jpegxy(self.data)
976 elif self.format == self._FMT_GIF:
977 w, h = _gifxy(self.data)
978 else:
979 # retina only supports png
980 return
981 self.width = w // 2
982 self.height = h // 2
983
984 def reload(self):
985 """Reload the raw data from file or URL."""
986 if self.embed:
987 super(Image,self).reload()
988 if self.retina:
989 self._retina_shape()
990
991 def _repr_html_(self):
992 if not self.embed:
993 width = height = klass = alt = ""
994 if self.width:
995 width = ' width="%d"' % self.width
996 if self.height:
997 height = ' height="%d"' % self.height
998 if self.unconfined:
999 klass = ' class="unconfined"'
1000 if self.alt:
1001 alt = ' alt="%s"' % html.escape(self.alt)
1002 return '<img src="{url}"{width}{height}{klass}{alt}/>'.format(
1003 url=self.url,
1004 width=width,
1005 height=height,
1006 klass=klass,
1007 alt=alt,
1008 )
1009
1010 def _repr_mimebundle_(self, include=None, exclude=None):
1011 """Return the image as a mimebundle
1012
1013 Any new mimetype support should be implemented here.
1014 """
1015 if self.embed:
1016 mimetype = self._mimetype
1017 data, metadata = self._data_and_metadata(always_both=True)
1018 if metadata:
1019 metadata = {mimetype: metadata}
1020 return {mimetype: data}, metadata
1021 else:
1022 return {'text/html': self._repr_html_()}
1023
1024 def _data_and_metadata(self, always_both=False):
1025 """shortcut for returning metadata with shape information, if defined"""
1026 try:
1027 b64_data = b2a_base64(self.data).decode('ascii')
1028 except TypeError as e:
1029 raise FileNotFoundError(
1030 "No such file or directory: '%s'" % (self.data)) from e
1031 md = {}
1032 if self.metadata:
1033 md.update(self.metadata)
1034 if self.width:
1035 md['width'] = self.width
1036 if self.height:
1037 md['height'] = self.height
1038 if self.unconfined:
1039 md['unconfined'] = self.unconfined
1040 if self.alt:
1041 md["alt"] = self.alt
1042 if md or always_both:
1043 return b64_data, md
1044 else:
1045 return b64_data
1046
1047 def _repr_png_(self):
1048 if self.embed and self.format == self._FMT_PNG:
1049 return self._data_and_metadata()
1050
1051 def _repr_jpeg_(self):
1052 if self.embed and self.format == self._FMT_JPEG:
1053 return self._data_and_metadata()
1054
1055 def _find_ext(self, s):
1056 base, ext = splitext(s)
1057
1058 if not ext:
1059 return base
1060
1061 # `splitext` includes leading period, so we skip it
1062 return ext[1:].lower()
1063
1064
1065 class Video(DisplayObject):
1066
1067 def __init__(self, data=None, url=None, filename=None, embed=False,
1068 mimetype=None, width=None, height=None, html_attributes="controls"):
1069         """Create a video object given raw data or a URL.
1070
1071 When this object is returned by an input cell or passed to the
1072 display function, it will result in the video being displayed
1073 in the frontend.
1074
1075 Parameters
1076 ----------
1077 data : unicode, str or bytes
1078 The raw video data or a URL or filename to load the data from.
1079 Raw data will require passing ``embed=True``.
1080
1081 url : unicode
1082 A URL for the video. If you specify ``url=``,
1083             the video data will not be embedded.
1084
1085 filename : unicode
1086 Path to a local file containing the video.
1087 Will be interpreted as a local URL unless ``embed=True``.
1088
1089 embed : bool
1090 Should the video be embedded using a data URI (True) or be
1091 loaded using a <video> tag (False).
1092
1093 Since videos are large, embedding them should be avoided, if possible.
1094 You must confirm embedding as your intention by passing ``embed=True``.
1095
1096 Local files can be displayed with URLs without embedding the content, via::
1097
1098 Video('./video.mp4')
1099
1100 mimetype : unicode
1101 Specify the mimetype for embedded videos.
1102 Default will be guessed from file extension, if available.
1103
1104 width : int
1105 Width in pixels to which to constrain the video in HTML.
1106 If not supplied, defaults to the width of the video.
1107
1108 height : int
1109 Height in pixels to which to constrain the video in html.
1110 If not supplied, defaults to the height of the video.
1111
1112 html_attributes : str
1113 Attributes for the HTML ``<video>`` block.
1114 Default: ``"controls"`` to get video controls.
1115 Other examples: ``"controls muted"`` for muted video with controls,
1116 ``"loop autoplay"`` for looping autoplaying video without controls.
1117
1118 Examples
1119 --------
1120 ::
1121
1122 Video('https://archive.org/download/Sita_Sings_the_Blues/Sita_Sings_the_Blues_small.mp4')
1123 Video('path/to/video.mp4')
1124 Video('path/to/video.mp4', embed=True)
1125 Video('path/to/video.mp4', embed=True, html_attributes="controls muted autoplay")
1126 Video(b'raw-videodata', embed=True)
1127 """
1128 if isinstance(data, (Path, PurePath)):
1129 data = str(data)
1130
1131 if url is None and isinstance(data, str) and data.startswith(('http:', 'https:')):
1132 url = data
1133 data = None
1134 elif data is not None and os.path.exists(data):
1135 filename = data
1136 data = None
1137
1138 if data and not embed:
1139 msg = ''.join([
1140 "To embed videos, you must pass embed=True ",
1141 "(this may make your notebook files huge)\n",
1142 "Consider passing Video(url='...')",
1143 ])
1144 raise ValueError(msg)
1145
1146 self.mimetype = mimetype
1147 self.embed = embed
1148 self.width = width
1149 self.height = height
1150 self.html_attributes = html_attributes
1151 super(Video, self).__init__(data=data, url=url, filename=filename)
1152
1153 def _repr_html_(self):
1154 width = height = ''
1155 if self.width:
1156 width = ' width="%d"' % self.width
1157 if self.height:
1158 height = ' height="%d"' % self.height
1159
1160 # External URLs and potentially local files are not embedded into the
1161 # notebook output.
1162 if not self.embed:
1163 url = self.url if self.url is not None else self.filename
1164 output = """<video src="{0}" {1} {2} {3}>
1165 Your browser does not support the <code>video</code> element.
1166 </video>""".format(url, self.html_attributes, width, height)
1167 return output
1168
1169 # Embedded videos are base64-encoded.
1170 mimetype = self.mimetype
1171 if self.filename is not None:
1172 if not mimetype:
1173 mimetype, _ = mimetypes.guess_type(self.filename)
1174
1175 with open(self.filename, 'rb') as f:
1176 video = f.read()
1177 else:
1178 video = self.data
1179 if isinstance(video, str):
1180 # unicode input is already b64-encoded
1181 b64_video = video
1182 else:
1183 b64_video = b2a_base64(video).decode('ascii').rstrip()
1184
1185 output = """<video {0} {1} {2}>
1186 <source src="data:{3};base64,{4}" type="{3}">
1187 Your browser does not support the video tag.
1188 </video>""".format(self.html_attributes, width, height, mimetype, b64_video)
1189 return output
1190
1191 def reload(self):
1192 # TODO
1193 pass
1194
1195
1196 @skip_doctest
1197 def set_matplotlib_formats(*formats, **kwargs):
1198 """
1199 .. deprecated:: 7.23
1200
1201 use `matplotlib_inline.backend_inline.set_matplotlib_formats()`
1202
1203 Select figure formats for the inline backend. Optionally pass quality for JPEG.
1204
1205 For example, this enables PNG and JPEG output with a JPEG quality of 90%::
1206
1207 In [1]: set_matplotlib_formats('png', 'jpeg', quality=90)
1208
1209 To set this in your config files use the following::
1210
1211 c.InlineBackend.figure_formats = {'png', 'jpeg'}
1212 c.InlineBackend.print_figure_kwargs.update({'quality' : 90})
1213
1214 Parameters
1215 ----------
1216 *formats : strs
1217 One or more figure formats to enable: 'png', 'retina', 'jpeg', 'svg', 'pdf'.
1218 **kwargs
1219 Keyword args will be relayed to ``figure.canvas.print_figure``.
1220 """
1221 warnings.warn(
1222 "`set_matplotlib_formats` is deprecated since IPython 7.23, directly "
1223 "use `matplotlib_inline.backend_inline.set_matplotlib_formats()`",
1224 DeprecationWarning,
1225 stacklevel=2,
1226 )
1227
1228 from matplotlib_inline.backend_inline import (
1229 set_matplotlib_formats as set_matplotlib_formats_orig,
1230 )
1231
1232 set_matplotlib_formats_orig(*formats, **kwargs)
1233
1234 @skip_doctest
1235 def set_matplotlib_close(close=True):
1236 """
1237 .. deprecated:: 7.23
1238
1239 use `matplotlib_inline.backend_inline.set_matplotlib_close()`
1240
1241 Set whether the inline backend closes all figures automatically or not.
1242
1243 By default, the inline backend used in the IPython Notebook will close all
1244 matplotlib figures automatically after each cell is run. This means that
1245 plots in different cells won't interfere. Sometimes, you may want to make
1246 a plot in one cell and then refine it in later cells. This can be accomplished
1247 by::
1248
1249 In [1]: set_matplotlib_close(False)
1250
1251 To set this in your config files use the following::
1252
1253 c.InlineBackend.close_figures = False
1254
1255 Parameters
1256 ----------
1257 close : bool
1258 Should all matplotlib figures be automatically closed after each cell is
1259 run?
1260 """
1261 warnings.warn(
1262 "`set_matplotlib_close` is deprecated since IPython 7.23, directly "
1263 "use `matplotlib_inline.backend_inline.set_matplotlib_close()`",
1264 DeprecationWarning,
1265 stacklevel=2,
1266 )
1267
1268 from matplotlib_inline.backend_inline import (
1269 set_matplotlib_close as set_matplotlib_close_orig,
1270 )
1271
1272 set_matplotlib_close_orig(close)
1273
[end of IPython/core/display.py]
[start of IPython/core/displaypub.py]
1 """An interface for publishing rich data to frontends.
2
3 There are two components of the display system:
4
5 * Display formatters, which take a Python object and compute the
6 representation of the object in various formats (text, HTML, SVG, etc.).
7 * The display publisher that is used to send the representation data to the
8 various frontends.
9
10 This module defines the logic for display publishing. The display publisher uses
11 the ``display_data`` message type that is defined in the IPython messaging
12 spec.
13 """
14
15 # Copyright (c) IPython Development Team.
16 # Distributed under the terms of the Modified BSD License.
17
18
19 import sys
20
21 from traitlets.config.configurable import Configurable
22 from traitlets import List
23
24 # This used to be defined here - it is imported for backwards compatibility
25 from .display_functions import publish_display_data
26
27 #-----------------------------------------------------------------------------
28 # Main payload class
29 #-----------------------------------------------------------------------------
30
31
32 class DisplayPublisher(Configurable):
33 """A traited class that publishes display data to frontends.
34
35 Instances of this class are created by the main IPython object and should
36 be accessed there.
37 """
38
39 def __init__(self, shell=None, *args, **kwargs):
40 self.shell = shell
41 super().__init__(*args, **kwargs)
42
43 def _validate_data(self, data, metadata=None):
44 """Validate the display data.
45
46 Parameters
47 ----------
48 data : dict
49             The formatted data dictionary.
50 metadata : dict
51 Any metadata for the data.
52 """
53
54 if not isinstance(data, dict):
55 raise TypeError('data must be a dict, got: %r' % data)
56 if metadata is not None:
57 if not isinstance(metadata, dict):
58                 raise TypeError('metadata must be a dict, got: %r' % metadata)
59
60 # use * to indicate transient, update are keyword-only
61 def publish(self, data, metadata=None, source=None, *, transient=None, update=False, **kwargs) -> None:
62 """Publish data and metadata to all frontends.
63
64 See the ``display_data`` message in the messaging documentation for
65 more details about this message type.
66
67 The following MIME types are currently implemented:
68
69 * text/plain
70 * text/html
71 * text/markdown
72 * text/latex
73 * application/json
74 * application/javascript
75 * image/png
76 * image/jpeg
77 * image/svg+xml
78
79 Parameters
80 ----------
81 data : dict
82 A dictionary having keys that are valid MIME types (like
83 'text/plain' or 'image/svg+xml') and values that are the data for
84 that MIME type. The data itself must be a JSON'able data
85 structure. Minimally all data should have the 'text/plain' data,
86 which can be displayed by all frontends. If more than the plain
87 text is given, it is up to the frontend to decide which
88 representation to use.
89 metadata : dict
90 A dictionary for metadata related to the data. This can contain
91 arbitrary key, value pairs that frontends can use to interpret
92 the data. Metadata specific to each mime-type can be specified
93 in the metadata dict with the same mime-type keys as
94 the data itself.
95 source : str, deprecated
96 Unused.
97 transient : dict, keyword-only
98 A dictionary for transient data.
99 Data in this dictionary should not be persisted as part of saving this output.
100 Examples include 'display_id'.
101 update : bool, keyword-only, default: False
102 If True, only update existing outputs with the same display_id,
103 rather than creating a new output.
104 """
105
106 handlers = {}
107 if self.shell is not None:
108 handlers = getattr(self.shell, 'mime_renderers', {})
109
110 for mime, handler in handlers.items():
111 if mime in data:
112 handler(data[mime], metadata.get(mime, None))
113 return
114
115 if 'text/plain' in data:
116 print(data['text/plain'])
117
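    # Call sketch (added comment for illustration; user code normally goes
    # through IPython.display.publish_display_data rather than calling this
    # method directly; 'my-output' is an arbitrary display id):
    #
    #     pub.publish(
    #         data={'text/plain': 'x = 4', 'text/html': '<b>x = 4</b>'},
    #         transient={'display_id': 'my-output'},
    #     )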
118 def clear_output(self, wait=False):
119 """Clear the output of the cell receiving output."""
120 print('\033[2K\r', end='')
121 sys.stdout.flush()
122 print('\033[2K\r', end='')
123 sys.stderr.flush()
124
125
126 class CapturingDisplayPublisher(DisplayPublisher):
127     """A DisplayPublisher that stores published outputs in a list instead of sending them to a frontend."""
128 outputs = List()
129
130 def publish(self, data, metadata=None, source=None, *, transient=None, update=False):
131 self.outputs.append({'data':data, 'metadata':metadata,
132 'transient':transient, 'update':update})
133
134 def clear_output(self, wait=False):
135 super(CapturingDisplayPublisher, self).clear_output(wait)
136
137 # empty the list, *do not* reassign a new list
138 self.outputs.clear()
139
[end of IPython/core/displaypub.py]
[start of IPython/sphinxext/ipython_directive.py]
1 # -*- coding: utf-8 -*-
2 """
3 Sphinx directive to support embedded IPython code.
4
5 IPython provides an extension for `Sphinx <http://www.sphinx-doc.org/>`_ to
6 highlight and run code.
7
8 This directive allows pasting of entire interactive IPython sessions, prompts
9 and all, and their code will actually get re-executed at doc build time, with
10 all prompts renumbered sequentially. It also allows you to input code as a pure
11 python input by giving the argument python to the directive. The output looks
12 like an interactive ipython session.
13
14 Here is an example of how the IPython directive can
15 **run** python code, at build time.
16
17 .. ipython::
18
19 In [1]: 1+1
20
21 In [1]: import datetime
22 ...: datetime.datetime.now()
23
24 It supports IPython constructs that plain
25 Python does not understand (like magics):
26
27 .. ipython::
28
29 In [0]: import time
30
31 In [0]: %timeit time.sleep(0.05)
32
33 This will also support top-level async when using IPython 7.0+
34
35 .. ipython::
36
37 In [2]: import asyncio
38 ...: print('before')
39 ...: await asyncio.sleep(1)
40 ...: print('after')
41
42
43 The namespace will persist across multiple code chunks. Let's define a variable:
44
45 .. ipython::
46
47 In [0]: who = "World"
48
49 And now say hello:
50
51 .. ipython::
52
53 In [0]: print('Hello,', who)
54
55 If a block raises an exception, you can add the ``:okexcept:`` flag
56 to that block; otherwise the build will fail.
57
58 .. ipython::
59 :okexcept:
60
61 In [1]: 1/0
62
63 IPython Sphinx directive module
64 ===============================
65
66 To enable this directive, simply list it in your Sphinx ``conf.py`` file
67 (making sure the directory where you placed it is visible to sphinx, as is
68 needed for all Sphinx directives). For example, to enable syntax highlighting
69 and the IPython directive::
70
71 extensions = ['IPython.sphinxext.ipython_console_highlighting',
72 'IPython.sphinxext.ipython_directive']
73
74 The IPython directive outputs code-blocks with the language 'ipython'. So
75 if you do not have the syntax highlighting extension enabled as well, then
76 all rendered code-blocks will be uncolored. By default this directive assumes
77 that your prompts are unchanged IPython ones, but this can be customized.
78 The configurable options that can be placed in conf.py are:
79
80 ipython_savefig_dir:
81 The directory in which to save the figures. This is relative to the
82 Sphinx source directory. The default is `html_static_path`.
83 ipython_rgxin:
84 The compiled regular expression to denote the start of IPython input
85 lines. The default is ``re.compile('In \\[(\\d+)\\]:\\s?(.*)\\s*')``. You
86 shouldn't need to change this.
87 ipython_warning_is_error: [defaults to True]
88     Fail the build if something unexpected happens, for example if a block raises
89     an exception but does not have the `:okexcept:` flag. The exact behavior of
90     what is considered strict may change between versions of the sphinx directive.
91 ipython_rgxout:
92 The compiled regular expression to denote the start of IPython output
93 lines. The default is ``re.compile('Out\\[(\\d+)\\]:\\s?(.*)\\s*')``. You
94 shouldn't need to change this.
95 ipython_promptin:
96 The string to represent the IPython input prompt in the generated ReST.
97 The default is ``'In [%d]:'``. This expects that the line numbers are used
98 in the prompt.
99 ipython_promptout:
100     The string to represent the IPython output prompt in the generated ReST. The
101     default is ``'Out[%d]:'``. This expects that the line numbers are used
102 in the prompt.
103 ipython_mplbackend:
104 The string which specifies if the embedded Sphinx shell should import
105 Matplotlib and set the backend. The value specifies a backend that is
106 passed to `matplotlib.use()` before any lines in `ipython_execlines` are
107 executed. If not specified in conf.py, then the default value of 'agg' is
108 used. To use the IPython directive without matplotlib as a dependency, set
109 the value to `None`. It may end up that matplotlib is still imported
110 if the user specifies so in `ipython_execlines` or makes use of the
111 @savefig pseudo decorator.
112 ipython_execlines:
113 A list of strings to be exec'd in the embedded Sphinx shell. Typical
114 usage is to make certain packages always available. Set this to an empty
115 list if you wish to have no imports always available. If specified in
116 ``conf.py`` as `None`, then it has the effect of making no imports available.
117 If omitted from conf.py altogether, then the default value of
118 ['import numpy as np', 'import matplotlib.pyplot as plt'] is used.
119 ipython_holdcount:
120 When the @suppress pseudo-decorator is used, the execution count can be
121 incremented or not. The default behavior is to hold the execution count,
122 corresponding to a value of `True`. Set this to `False` to increment
123 the execution count after each suppressed command.
124
125 As an example, to use the IPython directive when `matplotlib` is not available,
126 one sets the backend to `None`::
127
128 ipython_mplbackend = None
129
130 An example usage of the directive is:
131
132 .. code-block:: rst
133
134 .. ipython::
135
136 In [1]: x = 1
137
138 In [2]: y = x**2
139
140 In [3]: print(y)
141
142 See http://matplotlib.org/sampledoc/ipython_directive.html for additional
143 documentation.
144
145 Pseudo-Decorators
146 =================
147
148 Note: Only one decorator is supported per input. If more than one decorator
149 is specified, then only the last one is used.
150
151 In addition to the Pseudo-Decorators/options described at the above link,
152 several enhancements have been made. The directive will emit a message to the
153 console at build-time if code-execution resulted in an exception or warning.
154 You can suppress these on a per-block basis by specifying the :okexcept:
155 or :okwarning: options:
156
157 .. code-block:: rst
158
159 .. ipython::
160 :okexcept:
161 :okwarning:
162
163 In [1]: 1/0
164 In [2]: # raise warning.
165
166 To Do
167 =====
168
169 - Turn the ad-hoc test() function into a real test suite.
170 - Break up ipython-specific functionality from matplotlib stuff into better
171 separated code.
172
173 """
174
175 # Authors
176 # =======
177 #
178 # - John D Hunter: original author.
179 # - Fernando Perez: refactoring, documentation, cleanups, port to 0.11.
180 # - Václav Šmilauer <eudoxos-AT-arcig.cz>: Prompt generalizations.
181 # - Skipper Seabold, refactoring, cleanups, pure python addition
182
183 #-----------------------------------------------------------------------------
184 # Imports
185 #-----------------------------------------------------------------------------
186
187 # Stdlib
188 import atexit
189 import errno
190 import os
191 import pathlib
192 import re
193 import sys
194 import tempfile
195 import ast
196 import warnings
197 import shutil
198 from io import StringIO
199
200 # Third-party
201 from docutils.parsers.rst import directives
202 from docutils.parsers.rst import Directive
203 from sphinx.util import logging
204
205 # Our own
206 from traitlets.config import Config
207 from IPython import InteractiveShell
208 from IPython.core.profiledir import ProfileDir
209
210 use_matplotlib = False
211 try:
212 import matplotlib
213 use_matplotlib = True
214 except Exception:
215 pass
216
217 #-----------------------------------------------------------------------------
218 # Globals
219 #-----------------------------------------------------------------------------
220 # for tokenizing blocks
221 COMMENT, INPUT, OUTPUT = range(3)
222
223 #-----------------------------------------------------------------------------
224 # Functions and class declarations
225 #-----------------------------------------------------------------------------
226
227 def block_parser(part, rgxin, rgxout, fmtin, fmtout):
228 """
229 part is a string of ipython text, comprised of at most one
230 input, one output, comments, and blank lines. The block parser
231 parses the text into a list of::
232
233 blocks = [ (TOKEN0, data0), (TOKEN1, data1), ...]
234
235 where TOKEN is one of [COMMENT | INPUT | OUTPUT ] and
236 data is, depending on the type of token::
237
238 COMMENT : the comment string
239
240 INPUT: the (DECORATOR, INPUT_LINE, REST) where
241 DECORATOR: the input decorator (or None)
242 INPUT_LINE: the input as string (possibly multi-line)
243 REST : any stdout generated by the input line (not OUTPUT)
244
245 OUTPUT: the output string, possibly multi-line
246
247 """
248 block = []
249 lines = part.split('\n')
250 N = len(lines)
251 i = 0
252 decorator = None
253 while 1:
254
255 if i==N:
256 # nothing left to parse -- the last line
257 break
258
259 line = lines[i]
260 i += 1
261 line_stripped = line.strip()
262 if line_stripped.startswith('#'):
263 block.append((COMMENT, line))
264 continue
265
266 if line_stripped.startswith('@'):
267 # Here is where we assume there is, at most, one decorator.
268 # Might need to rethink this.
269 decorator = line_stripped
270 continue
271
272 # does this look like an input line?
273 matchin = rgxin.match(line)
274 if matchin:
275 lineno, inputline = int(matchin.group(1)), matchin.group(2)
276
277 # the ....: continuation string
278 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
279 Nc = len(continuation)
280 # input lines can continue on for more than one line, if
281 # we have a '\' line continuation char or a function call
282             # that echoes output (e.g. 'print'). The input line can only be
283 # terminated by the end of the block or an output line, so
284 # we parse out the rest of the input line if it is
285 # multiline as well as any echo text
286
287 rest = []
288 while i<N:
289
290 # look ahead; if the next line is blank, or a comment, or
291 # an output line, we're done
292
293 nextline = lines[i]
294 matchout = rgxout.match(nextline)
295 #print "nextline=%s, continuation=%s, starts=%s"%(nextline, continuation, nextline.startswith(continuation))
296 if matchout or nextline.startswith('#'):
297 break
298 elif nextline.startswith(continuation):
299 # The default ipython_rgx* treat the space following the colon as optional.
300 # However, If the space is there we must consume it or code
301 # employing the cython_magic extension will fail to execute.
302 #
303 # This works with the default ipython_rgx* patterns,
304 # If you modify them, YMMV.
305 nextline = nextline[Nc:]
306 if nextline and nextline[0] == ' ':
307 nextline = nextline[1:]
308
309 inputline += '\n' + nextline
310 else:
311 rest.append(nextline)
312 i+= 1
313
314 block.append((INPUT, (decorator, inputline, '\n'.join(rest))))
315 continue
316
317 # if it looks like an output line grab all the text to the end
318 # of the block
319 matchout = rgxout.match(line)
320 if matchout:
321 lineno, output = int(matchout.group(1)), matchout.group(2)
322 if i<N-1:
323 output = '\n'.join([output] + lines[i:])
324
325 block.append((OUTPUT, output))
326 break
327
328 return block
329
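# Illustrative sketch (added comment, assuming the default In[]/Out[] prompt
# regexes): the part
#
#     # compute a square
#     In [1]: x = 2**2
#     Out[1]: 4
#
# parses into roughly
#
#     [(COMMENT, '# compute a square'),
#      (INPUT, (None, 'x = 2**2', '')),
#      (OUTPUT, '4')]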
330
331 class EmbeddedSphinxShell(object):
332 """An embedded IPython instance to run inside Sphinx"""
333
334 def __init__(self, exec_lines=None):
335
336 self.cout = StringIO()
337
338 if exec_lines is None:
339 exec_lines = []
340
341 # Create config object for IPython
342 config = Config()
343 config.HistoryManager.hist_file = ':memory:'
344 config.InteractiveShell.autocall = False
345 config.InteractiveShell.autoindent = False
346 config.InteractiveShell.colors = 'NoColor'
347
348 # create a profile so instance history isn't saved
349 tmp_profile_dir = tempfile.mkdtemp(prefix='profile_')
350 profname = 'auto_profile_sphinx_build'
351 pdir = os.path.join(tmp_profile_dir,profname)
352 profile = ProfileDir.create_profile_dir(pdir)
353
354 # Create and initialize global ipython, but don't start its mainloop.
355 # This will persist across different EmbeddedSphinxShell instances.
356 IP = InteractiveShell.instance(config=config, profile_dir=profile)
357 atexit.register(self.cleanup)
358
359 # Store a few parts of IPython we'll need.
360 self.IP = IP
361 self.user_ns = self.IP.user_ns
362 self.user_global_ns = self.IP.user_global_ns
363
364 self.input = ''
365 self.output = ''
366 self.tmp_profile_dir = tmp_profile_dir
367
368 self.is_verbatim = False
369 self.is_doctest = False
370 self.is_suppress = False
371
372 # Optionally, provide more detailed information to shell.
373 # this is assigned by the SetUp method of IPythonDirective
374 # to point at itself.
375 #
376 # So, you can access handy things at self.directive.state
377 self.directive = None
378
379 # on the first call to the savefig decorator, we'll import
380 # pyplot as plt so we can make a call to the plt.gcf().savefig
381 self._pyplot_imported = False
382
383 # Prepopulate the namespace.
384 for line in exec_lines:
385 self.process_input_line(line, store_history=False)
386
387 def cleanup(self):
388 shutil.rmtree(self.tmp_profile_dir, ignore_errors=True)
389
390 def clear_cout(self):
391 self.cout.seek(0)
392 self.cout.truncate(0)
393
394 def process_input_line(self, line, store_history):
395 return self.process_input_lines([line], store_history=store_history)
396
397 def process_input_lines(self, lines, store_history=True):
398 """process the input, capturing stdout"""
399 stdout = sys.stdout
400 source_raw = '\n'.join(lines)
401 try:
402 sys.stdout = self.cout
403 self.IP.run_cell(source_raw, store_history=store_history)
404 finally:
405 sys.stdout = stdout
406
407 def process_image(self, decorator):
408 """
409 # build out an image directive like
410 # .. image:: somefile.png
411 # :width 4in
412 #
413 # from an input like
414 # savefig somefile.png width=4in
415 """
416 savefig_dir = self.savefig_dir
417 source_dir = self.source_dir
418 saveargs = decorator.split(' ')
419 filename = saveargs[1]
420 # insert relative path to image file in source
421 # as absolute path for Sphinx
422 # sphinx expects a posix path, even on Windows
423 path = pathlib.Path(savefig_dir, filename)
424 outfile = '/' + path.relative_to(source_dir).as_posix()
425
426 imagerows = ['.. image:: %s' % outfile]
427
428 for kwarg in saveargs[2:]:
429 arg, val = kwarg.split('=')
430 arg = arg.strip()
431 val = val.strip()
432 imagerows.append(' :%s: %s'%(arg, val))
433
434 image_file = os.path.basename(outfile) # only return file name
435 image_directive = '\n'.join(imagerows)
436 return image_file, image_directive
437
438 # Callbacks for each type of token
439 def process_input(self, data, input_prompt, lineno):
440 """
441 Process data block for INPUT token.
442
443 """
444 decorator, input, rest = data
445 image_file = None
446 image_directive = None
447
448 is_verbatim = decorator=='@verbatim' or self.is_verbatim
449 is_doctest = (decorator is not None and \
450 decorator.startswith('@doctest')) or self.is_doctest
451 is_suppress = decorator=='@suppress' or self.is_suppress
452 is_okexcept = decorator=='@okexcept' or self.is_okexcept
453 is_okwarning = decorator=='@okwarning' or self.is_okwarning
454 is_savefig = decorator is not None and \
455 decorator.startswith('@savefig')
456
457 input_lines = input.split('\n')
458 if len(input_lines) > 1:
459 if input_lines[-1] != "":
460 input_lines.append('') # make sure there's a blank line
461 # so splitter buffer gets reset
462
463 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
464
465 if is_savefig:
466 image_file, image_directive = self.process_image(decorator)
467
468 ret = []
469 is_semicolon = False
470
471 # Hold the execution count, if requested to do so.
472 if is_suppress and self.hold_count:
473 store_history = False
474 else:
475 store_history = True
476
477 # Note: catch_warnings is not thread safe
478 with warnings.catch_warnings(record=True) as ws:
479 if input_lines[0].endswith(';'):
480 is_semicolon = True
481 #for i, line in enumerate(input_lines):
482
483 # process the first input line
484 if is_verbatim:
485 self.process_input_lines([''])
486 self.IP.execution_count += 1 # increment it anyway
487 else:
488 # only submit the line in non-verbatim mode
489 self.process_input_lines(input_lines, store_history=store_history)
490
491 if not is_suppress:
492 for i, line in enumerate(input_lines):
493 if i == 0:
494 formatted_line = '%s %s'%(input_prompt, line)
495 else:
496 formatted_line = '%s %s'%(continuation, line)
497 ret.append(formatted_line)
498
499 if not is_suppress and len(rest.strip()) and is_verbatim:
500 # The "rest" is the standard output of the input. This needs to be
501 # added when in verbatim mode. If there is no "rest", then we don't
502 # add it, as the new line will be added by the processed output.
503 ret.append(rest)
504
505 # Fetch the processed output. (This is not the submitted output.)
506 self.cout.seek(0)
507 processed_output = self.cout.read()
508 if not is_suppress and not is_semicolon:
509 #
510 # In IPythonDirective.run, the elements of `ret` are eventually
511 # combined such that '' entries correspond to newlines. So if
512             # `processed_output` is equal to '', then adding it to `ret`
513 # ensures that there is a blank line between consecutive inputs
514 # that have no outputs, as in:
515 #
516 # In [1]: x = 4
517 #
518 # In [2]: x = 5
519 #
520 # When there is processed output, it has a '\n' at the tail end. So
521 # adding the output to `ret` will provide the necessary spacing
522 # between consecutive input/output blocks, as in:
523 #
524 # In [1]: x
525 # Out[1]: 5
526 #
527 # In [2]: x
528 # Out[2]: 5
529 #
530 # When there is stdout from the input, it also has a '\n' at the
531 # tail end, and so this ensures proper spacing as well. E.g.:
532 #
533 # In [1]: print x
534 # 5
535 #
536 # In [2]: x = 5
537 #
538 # When in verbatim mode, `processed_output` is empty (because
539             # nothing was passed to IP). Sometimes the submitted code block has
540 # an Out[] portion and sometimes it does not. When it does not, we
541 # need to ensure proper spacing, so we have to add '' to `ret`.
542 # However, if there is an Out[] in the submitted code, then we do
543 # not want to add a newline as `process_output` has stuff to add.
544 # The difficulty is that `process_input` doesn't know if
545 # `process_output` will be called---so it doesn't know if there is
546             # Out[] in the code block. This requires that we include a hack in
547 # `process_block`. See the comments there.
548 #
549 ret.append(processed_output)
550 elif is_semicolon:
551 # Make sure there is a newline after the semicolon.
552 ret.append('')
553
554 # context information
555 filename = "Unknown"
556 lineno = 0
557 if self.directive.state:
558 filename = self.directive.state.document.current_source
559 lineno = self.directive.state.document.current_line
560
561 # Use sphinx logger for warnings
562 logger = logging.getLogger(__name__)
563
564 # output any exceptions raised during execution to stdout
565 # unless :okexcept: has been specified.
566 if not is_okexcept and (
567 ("Traceback" in processed_output) or ("SyntaxError" in processed_output)
568 ):
569 s = "\n>>>" + ("-" * 73) + "\n"
570 s += "Exception in %s at block ending on line %s\n" % (filename, lineno)
571 s += "Specify :okexcept: as an option in the ipython:: block to suppress this message\n"
572 s += processed_output + "\n"
573 s += "<<<" + ("-" * 73)
574 logger.warning(s)
575 if self.warning_is_error:
576                     raise RuntimeError('Unexpected exception in `{}` line {}'.format(filename, lineno))
577
578 # output any warning raised during execution to stdout
579 # unless :okwarning: has been specified.
580 if not is_okwarning:
581 for w in ws:
582 s = "\n>>>" + ("-" * 73) + "\n"
583 s += "Warning in %s at block ending on line %s\n" % (filename, lineno)
584 s += "Specify :okwarning: as an option in the ipython:: block to suppress this message\n"
585 s += ("-" * 76) + "\n"
586 s += warnings.formatwarning(
587 w.message, w.category, w.filename, w.lineno, w.line
588 )
589 s += "<<<" + ("-" * 73)
590 logger.warning(s)
591 if self.warning_is_error:
592                         raise RuntimeError('Unexpected warning in `{}` line {}'.format(filename, lineno))
593
594 self.clear_cout()
595 return (ret, input_lines, processed_output,
596 is_doctest, decorator, image_file, image_directive)
597
598
599 def process_output(self, data, output_prompt, input_lines, output,
600 is_doctest, decorator, image_file):
601 """
602 Process data block for OUTPUT token.
603
604 """
605 # Recall: `data` is the submitted output, and `output` is the processed
606 # output from `input_lines`.
607
608 TAB = ' ' * 4
609
610 if is_doctest and output is not None:
611
612 found = output # This is the processed output
613 found = found.strip()
614 submitted = data.strip()
615
616 if self.directive is None:
617 source = 'Unavailable'
618 content = 'Unavailable'
619 else:
620 source = self.directive.state.document.current_source
621 content = self.directive.content
622 # Add tabs and join into a single string.
623 content = '\n'.join([TAB + line for line in content])
624
625 # Make sure the output contains the output prompt.
626 ind = found.find(output_prompt)
627 if ind < 0:
628 e = ('output does not contain output prompt\n\n'
629 'Document source: {0}\n\n'
630 'Raw content: \n{1}\n\n'
631 'Input line(s):\n{TAB}{2}\n\n'
632 'Output line(s):\n{TAB}{3}\n\n')
633 e = e.format(source, content, '\n'.join(input_lines),
634 repr(found), TAB=TAB)
635 raise RuntimeError(e)
636 found = found[len(output_prompt):].strip()
637
638 # Handle the actual doctest comparison.
639 if decorator.strip() == '@doctest':
640 # Standard doctest
641 if found != submitted:
642 e = ('doctest failure\n\n'
643 'Document source: {0}\n\n'
644 'Raw content: \n{1}\n\n'
645 'On input line(s):\n{TAB}{2}\n\n'
646 'we found output:\n{TAB}{3}\n\n'
647 'instead of the expected:\n{TAB}{4}\n\n')
648 e = e.format(source, content, '\n'.join(input_lines),
649 repr(found), repr(submitted), TAB=TAB)
650 raise RuntimeError(e)
651 else:
652 self.custom_doctest(decorator, input_lines, found, submitted)
653
654 # When in verbatim mode, this holds additional submitted output
655 # to be written in the final Sphinx output.
656 # https://github.com/ipython/ipython/issues/5776
657 out_data = []
658
659 is_verbatim = decorator=='@verbatim' or self.is_verbatim
660 if is_verbatim and data.strip():
661 # Note that `ret` in `process_block` has '' as its last element if
662 # the code block was in verbatim mode. So if there is no submitted
663 # output, then we will have proper spacing only if we do not add
664 # an additional '' to `out_data`. This is why we condition on
665 # `and data.strip()`.
666
667 # The submitted output has no output prompt. If we want the
668 # prompt and the code to appear, we need to join them now
669 # instead of adding them separately---as this would create an
670 # undesired newline. How we do this ultimately depends on the
671 # format of the output regex. I'll do what works for the default
672 # prompt for now, and we might have to adjust if it doesn't work
673 # in other cases. Finally, the submitted output does not have
674 # a trailing newline, so we must add it manually.
675 out_data.append("{0} {1}\n".format(output_prompt, data))
676
677 return out_data
678
679 def process_comment(self, data):
680         """Process data block for COMMENT token."""
681 if not self.is_suppress:
682 return [data]
683
684 def save_image(self, image_file):
685 """
686 Saves the image file to disk.
687 """
688 self.ensure_pyplot()
689 command = 'plt.gcf().savefig("%s")'%image_file
690 #print 'SAVEFIG', command # dbg
691 self.process_input_line('bookmark ipy_thisdir', store_history=False)
692 self.process_input_line('cd -b ipy_savedir', store_history=False)
693 self.process_input_line(command, store_history=False)
694 self.process_input_line('cd -b ipy_thisdir', store_history=False)
695 self.process_input_line('bookmark -d ipy_thisdir', store_history=False)
696 self.clear_cout()
697
698 def process_block(self, block):
699 """
700 process block from the block_parser and return a list of processed lines
701 """
702 ret = []
703 output = None
704 input_lines = None
705 lineno = self.IP.execution_count
706
707 input_prompt = self.promptin % lineno
708 output_prompt = self.promptout % lineno
709 image_file = None
710 image_directive = None
711
712 found_input = False
713 for token, data in block:
714 if token == COMMENT:
715 out_data = self.process_comment(data)
716 elif token == INPUT:
717 found_input = True
718 (out_data, input_lines, output, is_doctest,
719 decorator, image_file, image_directive) = \
720 self.process_input(data, input_prompt, lineno)
721 elif token == OUTPUT:
722 if not found_input:
723
724 TAB = ' ' * 4
725 linenumber = 0
726 source = 'Unavailable'
727 content = 'Unavailable'
728 if self.directive:
729 linenumber = self.directive.state.document.current_line
730 source = self.directive.state.document.current_source
731 content = self.directive.content
732 # Add tabs and join into a single string.
733 content = '\n'.join([TAB + line for line in content])
734
735 e = ('\n\nInvalid block: Block contains an output prompt '
736 'without an input prompt.\n\n'
737 'Document source: {0}\n\n'
738 'Content begins at line {1}: \n\n{2}\n\n'
739 'Problematic block within content: \n\n{TAB}{3}\n\n')
740 e = e.format(source, linenumber, content, block, TAB=TAB)
741
742 # Write, rather than include in exception, since Sphinx
743 # will truncate tracebacks.
744 sys.stdout.write(e)
745 raise RuntimeError('An invalid block was detected.')
746 out_data = \
747 self.process_output(data, output_prompt, input_lines,
748 output, is_doctest, decorator,
749 image_file)
750 if out_data:
751 # Then there was user submitted output in verbatim mode.
752 # We need to remove the last element of `ret` that was
753 # added in `process_input`, as it is '' and would introduce
754 # an undesirable newline.
755 assert(ret[-1] == '')
756 del ret[-1]
757
758 if out_data:
759 ret.extend(out_data)
760
761 # save the image files
762 if image_file is not None:
763 self.save_image(image_file)
764
765 return ret, image_directive
766
767 def ensure_pyplot(self):
768 """
769 Ensures that pyplot has been imported into the embedded IPython shell.
770
771 Also, makes sure to set the backend appropriately if not set already.
772
773 """
774 # We are here if the @figure pseudo decorator was used. Thus, it's
775 # possible that we could be here even if python_mplbackend were set to
776 # `None`. That's also strange and perhaps worthy of raising an
777 # exception, but for now, we just set the backend to 'agg'.
778
779 if not self._pyplot_imported:
780 if 'matplotlib.backends' not in sys.modules:
781 # Then ipython_matplotlib was set to None but there was a
782 # call to the @figure decorator (and ipython_execlines did
783 # not set a backend).
784 #raise Exception("No backend was set, but @figure was used!")
785 import matplotlib
786 matplotlib.use('agg')
787
788 # Always import pyplot into embedded shell.
789 self.process_input_line('import matplotlib.pyplot as plt',
790 store_history=False)
791 self._pyplot_imported = True
792
793 def process_pure_python(self, content):
794 """
795 content is a list of strings. it is unedited directive content
796
797 This runs it line by line in the InteractiveShell, prepends
798 prompts as needed capturing stderr and stdout, then returns
799 the content as a list as if it were ipython code
800 """
801 output = []
802 savefig = False # keep up with this to clear figure
803 multiline = False # to handle line continuation
804 multiline_start = None
805 fmtin = self.promptin
806
807 ct = 0
808
809 for lineno, line in enumerate(content):
810
811 line_stripped = line.strip()
812 if not len(line):
813 output.append(line)
814 continue
815
816 # handle decorators
817 if line_stripped.startswith('@'):
818 output.extend([line])
819 if 'savefig' in line:
820 savefig = True # and need to clear figure
821 continue
822
823 # handle comments
824 if line_stripped.startswith('#'):
825 output.extend([line])
826 continue
827
828 # deal with lines checking for multiline
829 continuation = u' %s:'% ''.join(['.']*(len(str(ct))+2))
830 if not multiline:
831 modified = u"%s %s" % (fmtin % ct, line_stripped)
832 output.append(modified)
833 ct += 1
834 try:
835 ast.parse(line_stripped)
836 output.append(u'')
837 except Exception: # on a multiline
838 multiline = True
839 multiline_start = lineno
840 else: # still on a multiline
841 modified = u'%s %s' % (continuation, line)
842 output.append(modified)
843
844 # if the next line is indented, it should be part of multiline
845 if len(content) > lineno + 1:
846 nextline = content[lineno + 1]
847 if len(nextline) - len(nextline.lstrip()) > 3:
848 continue
849 try:
850 mod = ast.parse(
851 '\n'.join(content[multiline_start:lineno+1]))
852 if isinstance(mod.body[0], ast.FunctionDef):
853 # check to see if we have the whole function
854 for element in mod.body[0].body:
855 if isinstance(element, ast.Return):
856 multiline = False
857 else:
858 output.append(u'')
859 multiline = False
860 except Exception:
861 pass
862
863 if savefig: # clear figure if plotted
864 self.ensure_pyplot()
865 self.process_input_line('plt.clf()', store_history=False)
866 self.clear_cout()
867 savefig = False
868
869 return output
870
871 def custom_doctest(self, decorator, input_lines, found, submitted):
872 """
873 Perform a specialized doctest.
874
875 """
876 from .custom_doctests import doctests
877
878 args = decorator.split()
879 doctest_type = args[1]
880 if doctest_type in doctests:
881 doctests[doctest_type](self, args, input_lines, found, submitted)
882 else:
883 e = "Invalid option to @doctest: {0}".format(doctest_type)
884 raise Exception(e)
885
886
887 class IPythonDirective(Directive):
888
889 has_content = True
890 required_arguments = 0
891 optional_arguments = 4 # python, suppress, verbatim, doctest
892     final_argument_whitespace = True
893 option_spec = { 'python': directives.unchanged,
894 'suppress' : directives.flag,
895 'verbatim' : directives.flag,
896 'doctest' : directives.flag,
897 'okexcept': directives.flag,
898 'okwarning': directives.flag
899 }
900
901 shell = None
902
903 seen_docs = set()
904
905 def get_config_options(self):
906 # contains sphinx configuration variables
907 config = self.state.document.settings.env.config
908
909 # get config variables to set figure output directory
910 savefig_dir = config.ipython_savefig_dir
911 source_dir = self.state.document.settings.env.srcdir
912 savefig_dir = os.path.join(source_dir, savefig_dir)
913
914 # get regex and prompt stuff
915 rgxin = config.ipython_rgxin
916 rgxout = config.ipython_rgxout
917 warning_is_error= config.ipython_warning_is_error
918 promptin = config.ipython_promptin
919 promptout = config.ipython_promptout
920 mplbackend = config.ipython_mplbackend
921 exec_lines = config.ipython_execlines
922 hold_count = config.ipython_holdcount
923
924 return (savefig_dir, source_dir, rgxin, rgxout,
925 promptin, promptout, mplbackend, exec_lines, hold_count, warning_is_error)
926
927 def setup(self):
928 # Get configuration values.
929 (savefig_dir, source_dir, rgxin, rgxout, promptin, promptout,
930 mplbackend, exec_lines, hold_count, warning_is_error) = self.get_config_options()
931
932 try:
933 os.makedirs(savefig_dir)
934 except OSError as e:
935 if e.errno != errno.EEXIST:
936 raise
937
938 if self.shell is None:
939 # We will be here many times. However, when the
940 # EmbeddedSphinxShell is created, its interactive shell member
941 # is the same for each instance.
942
943 if mplbackend and 'matplotlib.backends' not in sys.modules and use_matplotlib:
944 import matplotlib
945 matplotlib.use(mplbackend)
946
947 # Must be called after (potentially) importing matplotlib and
948 # setting its backend since exec_lines might import pylab.
949 self.shell = EmbeddedSphinxShell(exec_lines)
950
951 # Store IPython directive to enable better error messages
952 self.shell.directive = self
953
954 # reset the execution count if we haven't processed this doc
955 #NOTE: this may be borked if there are multiple seen_doc tmp files
956 #check time stamp?
957 if not self.state.document.current_source in self.seen_docs:
958 self.shell.IP.history_manager.reset()
959 self.shell.IP.execution_count = 1
960 self.seen_docs.add(self.state.document.current_source)
961
962 # and attach to shell so we don't have to pass them around
963 self.shell.rgxin = rgxin
964 self.shell.rgxout = rgxout
965 self.shell.promptin = promptin
966 self.shell.promptout = promptout
967 self.shell.savefig_dir = savefig_dir
968 self.shell.source_dir = source_dir
969 self.shell.hold_count = hold_count
970 self.shell.warning_is_error = warning_is_error
971
972 # setup bookmark for saving figures directory
973 self.shell.process_input_line('bookmark ipy_savedir %s'%savefig_dir,
974 store_history=False)
975 self.shell.clear_cout()
976
977 return rgxin, rgxout, promptin, promptout
978
979 def teardown(self):
980 # delete last bookmark
981 self.shell.process_input_line('bookmark -d ipy_savedir',
982 store_history=False)
983 self.shell.clear_cout()
984
985 def run(self):
986 debug = False
987
988 #TODO, any reason block_parser can't be a method of embeddable shell
989 # then we wouldn't have to carry these around
990 rgxin, rgxout, promptin, promptout = self.setup()
991
992 options = self.options
993 self.shell.is_suppress = 'suppress' in options
994 self.shell.is_doctest = 'doctest' in options
995 self.shell.is_verbatim = 'verbatim' in options
996 self.shell.is_okexcept = 'okexcept' in options
997 self.shell.is_okwarning = 'okwarning' in options
998
999 # handle pure python code
1000 if 'python' in self.arguments:
1001 content = self.content
1002 self.content = self.shell.process_pure_python(content)
1003
1004 # parts consists of all text within the ipython-block.
1005 # Each part is an input/output block.
1006 parts = '\n'.join(self.content).split('\n\n')
1007
1008 lines = ['.. code-block:: ipython', '']
1009 figures = []
1010
1011 # Use sphinx logger for warnings
1012 logger = logging.getLogger(__name__)
1013
1014 for part in parts:
1015 block = block_parser(part, rgxin, rgxout, promptin, promptout)
1016 if len(block):
1017 rows, figure = self.shell.process_block(block)
1018 for row in rows:
1019 lines.extend([' {0}'.format(line)
1020 for line in row.split('\n')])
1021
1022 if figure is not None:
1023 figures.append(figure)
1024 else:
1025 message = 'Code input with no code at {}, line {}'\
1026 .format(
1027 self.state.document.current_source,
1028 self.state.document.current_line)
1029 if self.shell.warning_is_error:
1030 raise RuntimeError(message)
1031 else:
1032 logger.warning(message)
1033
1034 for figure in figures:
1035 lines.append('')
1036 lines.extend(figure.split('\n'))
1037 lines.append('')
1038
1039 if len(lines) > 2:
1040 if debug:
1041 print('\n'.join(lines))
1042 else:
1043 # This has to do with input, not output. But if we comment
1044 # these lines out, then no IPython code will appear in the
1045 # final output.
1046 self.state_machine.insert_input(
1047 lines, self.state_machine.input_lines.source(0))
1048
1049 # cleanup
1050 self.teardown()
1051
1052 return []
1053
1054 # Enable as a proper Sphinx directive
1055 def setup(app):
1056 setup.app = app
1057
1058 app.add_directive('ipython', IPythonDirective)
1059 app.add_config_value('ipython_savefig_dir', 'savefig', 'env')
1060 app.add_config_value('ipython_warning_is_error', True, 'env')
1061 app.add_config_value('ipython_rgxin',
1062 re.compile(r'In \[(\d+)\]:\s?(.*)\s*'), 'env')
1063 app.add_config_value('ipython_rgxout',
1064 re.compile(r'Out\[(\d+)\]:\s?(.*)\s*'), 'env')
1065 app.add_config_value('ipython_promptin', 'In [%d]:', 'env')
1066 app.add_config_value('ipython_promptout', 'Out[%d]:', 'env')
1067
1068 # We could just let matplotlib pick whatever is specified as the default
1069 # backend in the matplotlibrc file, but this would cause issues if the
1070 # backend didn't work in headless environments. For this reason, 'agg'
1071 # is a good default backend choice.
1072 app.add_config_value('ipython_mplbackend', 'agg', 'env')
1073
1074 # If the user sets this config value to `None`, then EmbeddedSphinxShell's
1075 # __init__ method will treat it as [].
1076 execlines = ['import numpy as np']
1077 if use_matplotlib:
1078 execlines.append('import matplotlib.pyplot as plt')
1079 app.add_config_value('ipython_execlines', execlines, 'env')
1080
1081 app.add_config_value('ipython_holdcount', True, 'env')
1082
1083 metadata = {'parallel_read_safe': True, 'parallel_write_safe': True}
1084 return metadata
1085
1086 # Simple smoke test, needs to be converted to a proper automatic test.
1087 def test():
1088
1089 examples = [
1090 r"""
1091 In [9]: pwd
1092 Out[9]: '/home/jdhunter/py4science/book'
1093
1094 In [10]: cd bookdata/
1095 /home/jdhunter/py4science/book/bookdata
1096
1097 In [2]: from pylab import *
1098
1099 In [2]: ion()
1100
1101 In [3]: im = imread('stinkbug.png')
1102
1103 @savefig mystinkbug.png width=4in
1104 In [4]: imshow(im)
1105 Out[4]: <matplotlib.image.AxesImage object at 0x39ea850>
1106
1107 """,
1108 r"""
1109
1110 In [1]: x = 'hello world'
1111
1112 # string methods can be
1113 # used to alter the string
1114 @doctest
1115 In [2]: x.upper()
1116 Out[2]: 'HELLO WORLD'
1117
1118 @verbatim
1119 In [3]: x.st<TAB>
1120 x.startswith x.strip
1121 """,
1122 r"""
1123
1124 In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
1125 .....: &d=9&e=22&f=2009&g=d&a=1&br=8&c=2006&ignore=.csv'
1126
1127 In [131]: print url.split('&')
1128 ['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22', 'f=2009', 'g=d', 'a=1', 'b=8', 'c=2006', 'ignore=.csv']
1129
1130 In [60]: import urllib
1131
1132 """,
1133 r"""\
1134
1135 In [133]: import numpy.random
1136
1137 @suppress
1138 In [134]: numpy.random.seed(2358)
1139
1140 @doctest
1141 In [135]: numpy.random.rand(10,2)
1142 Out[135]:
1143 array([[ 0.64524308, 0.59943846],
1144 [ 0.47102322, 0.8715456 ],
1145 [ 0.29370834, 0.74776844],
1146 [ 0.99539577, 0.1313423 ],
1147 [ 0.16250302, 0.21103583],
1148 [ 0.81626524, 0.1312433 ],
1149 [ 0.67338089, 0.72302393],
1150 [ 0.7566368 , 0.07033696],
1151 [ 0.22591016, 0.77731835],
1152 [ 0.0072729 , 0.34273127]])
1153
1154 """,
1155
1156 r"""
1157 In [106]: print x
1158 jdh
1159
1160 In [109]: for i in range(10):
1161 .....: print i
1162 .....:
1163 .....:
1164 0
1165 1
1166 2
1167 3
1168 4
1169 5
1170 6
1171 7
1172 8
1173 9
1174 """,
1175
1176 r"""
1177
1178 In [144]: from pylab import *
1179
1180 In [145]: ion()
1181
1182 # use a semicolon to suppress the output
1183 @savefig test_hist.png width=4in
1184 In [151]: hist(np.random.randn(10000), 100);
1185
1186
1187 @savefig test_plot.png width=4in
1188 In [151]: plot(np.random.randn(10000), 'o');
1189 """,
1190
1191 r"""
1192 # use a semicolon to suppress the output
1193 In [151]: plt.clf()
1194
1195 @savefig plot_simple.png width=4in
1196 In [151]: plot([1,2,3])
1197
1198 @savefig hist_simple.png width=4in
1199 In [151]: hist(np.random.randn(10000), 100);
1200
1201 """,
1202 r"""
1203 # update the current fig
1204 In [151]: ylabel('number')
1205
1206 In [152]: title('normal distribution')
1207
1208
1209 @savefig hist_with_text.png
1210 In [153]: grid(True)
1211
1212 @doctest float
1213 In [154]: 0.1 + 0.2
1214 Out[154]: 0.3
1215
1216 @doctest float
1217 In [155]: np.arange(16).reshape(4,4)
1218 Out[155]:
1219 array([[ 0, 1, 2, 3],
1220 [ 4, 5, 6, 7],
1221 [ 8, 9, 10, 11],
1222 [12, 13, 14, 15]])
1223
1224 In [1]: x = np.arange(16, dtype=float).reshape(4,4)
1225
1226 In [2]: x[0,0] = np.inf
1227
1228 In [3]: x[0,1] = np.nan
1229
1230 @doctest float
1231 In [4]: x
1232 Out[4]:
1233 array([[ inf, nan, 2., 3.],
1234 [ 4., 5., 6., 7.],
1235 [ 8., 9., 10., 11.],
1236 [ 12., 13., 14., 15.]])
1237
1238
1239 """,
1240 ]
1241 # skip local-file depending first example:
1242 examples = examples[1:]
1243
1244 #ipython_directive.DEBUG = True # dbg
1245 #options = dict(suppress=True) # dbg
1246 options = {}
1247 for example in examples:
1248 content = example.split('\n')
1249 IPythonDirective('debug', arguments=None, options=options,
1250 content=content, lineno=0,
1251 content_offset=None, block_text=None,
1252 state=None, state_machine=None,
1253 )
1254
1255 # Run test suite as a script
1256 if __name__=='__main__':
1257 if not os.path.isdir('_static'):
1258 os.mkdir('_static')
1259 test()
1260 print('All OK? Check figures in _static/')
1261
[end of IPython/sphinxext/ipython_directive.py]
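For orientation, here is a sketch of a Sphinx `conf.py` fragment that would enable the directive defined above; the extension path mirrors the module's file name and the option names are taken from the `add_config_value` calls in `setup(app)`, but the specific values are illustrative, not recommendations.

```python
# conf.py (sketch) -- values are illustrative, not recommendations
extensions = [
    'IPython.sphinxext.ipython_directive',
]

ipython_mplbackend = 'agg'                  # headless-friendly backend, as above
ipython_execlines = ['import numpy as np']  # run once when the embedded shell starts
ipython_warning_is_error = True             # matches the default registered above
ipython_holdcount = True
```

Documents can then contain `.. ipython::` blocks like the ones exercised by the `test()` examples at the end of the module.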
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| ipython/ipython | 1d7bb78d04ac5cb8698adb70b2b76528a1b2a0f1 | Add line number to error messages
As suggested in #13169, this adds the line number to error messages, in order to make them friendlier.
![image](https://user-images.githubusercontent.com/20190646/139513782-ea8d42ab-9c73-4452-b607-5c54ca50a125.png)
That was the file used in the test
![image](https://user-images.githubusercontent.com/20190646/139513827-0aa4bed3-682f-40ee-a8ea-4f0e6e3fbc34.png)
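To make the change concrete without relying on the screenshots above, here is a hypothetical helper (the name is invented for illustration; the real implementation is `_format_filename` in the patch below): when the line number is known, it is folded into the file reference as `path:lineno` rather than reported separately.

```python
def format_location(path, lineno=None):
    # Hypothetical helper, not IPython's actual API (that is _format_filename in
    # the patch below): fold the line number into the file reference when known,
    # e.g. "File ~/project/script.py:42" instead of a separate ", line 42" fragment.
    if lineno is None:
        return "File {}".format(path)
    return "File {}:{}".format(path, lineno)


print(format_location("~/project/script.py", 42))   # File ~/project/script.py:42
```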
| 2021-12-24T12:16:30Z | <patch>
diff --git a/IPython/core/ultratb.py b/IPython/core/ultratb.py
--- a/IPython/core/ultratb.py
+++ b/IPython/core/ultratb.py
@@ -169,7 +169,7 @@ def _format_traceback_lines(lines, Colors, has_colors, lvals):
return res
-def _format_filename(file, ColorFilename, ColorNormal):
+def _format_filename(file, ColorFilename, ColorNormal, *, lineno=None):
"""
Format filename lines with `In [n]` if it's the nth code cell or `File *.py` if it's a module.
@@ -185,14 +185,17 @@ def _format_filename(file, ColorFilename, ColorNormal):
if ipinst is not None and file in ipinst.compile._filename_map:
file = "[%s]" % ipinst.compile._filename_map[file]
- tpl_link = "Input %sIn %%s%s" % (ColorFilename, ColorNormal)
+ tpl_link = f"Input {ColorFilename}In {{file}}{ColorNormal}"
else:
file = util_path.compress_user(
py3compat.cast_unicode(file, util_path.fs_encoding)
)
- tpl_link = "File %s%%s%s" % (ColorFilename, ColorNormal)
+ if lineno is None:
+ tpl_link = f"File {ColorFilename}{{file}}{ColorNormal}"
+ else:
+ tpl_link = f"File {ColorFilename}{{file}}:{{lineno}}{ColorNormal}"
- return tpl_link % file
+ return tpl_link.format(file=file, lineno=lineno)
#---------------------------------------------------------------------------
# Module classes
@@ -439,11 +442,10 @@ def _format_list(self, extracted_list):
Colors = self.Colors
list = []
for filename, lineno, name, line in extracted_list[:-1]:
- item = " %s, line %s%d%s, in %s%s%s\n" % (
- _format_filename(filename, Colors.filename, Colors.Normal),
- Colors.lineno,
- lineno,
- Colors.Normal,
+ item = " %s in %s%s%s\n" % (
+ _format_filename(
+ filename, Colors.filename, Colors.Normal, lineno=lineno
+ ),
Colors.name,
name,
Colors.Normal,
@@ -453,12 +455,11 @@ def _format_list(self, extracted_list):
list.append(item)
# Emphasize the last entry
filename, lineno, name, line = extracted_list[-1]
- item = "%s %s, line %s%d%s, in %s%s%s%s\n" % (
- Colors.normalEm,
- _format_filename(filename, Colors.filenameEm, Colors.normalEm),
- Colors.linenoEm,
- lineno,
+ item = "%s %s in %s%s%s%s\n" % (
Colors.normalEm,
+ _format_filename(
+ filename, Colors.filenameEm, Colors.normalEm, lineno=lineno
+ ),
Colors.nameEm,
name,
Colors.normalEm,
@@ -501,14 +502,15 @@ def _format_exception_only(self, etype, value):
lineno = "unknown"
textline = ""
list.append(
- "%s %s, line %s%s%s\n"
+ "%s %s%s\n"
% (
Colors.normalEm,
_format_filename(
- value.filename, Colors.filenameEm, Colors.normalEm
+ value.filename,
+ Colors.filenameEm,
+ Colors.normalEm,
+ lineno=(None if lineno == "unknown" else lineno),
),
- Colors.linenoEm,
- lineno,
Colors.Normal,
)
)
@@ -628,27 +630,35 @@ def format_record(self, frame_info):
return ' %s[... skipping similar frames: %s]%s\n' % (
Colors.excName, frame_info.description, ColorsNormal)
- indent = ' ' * INDENT_SIZE
- em_normal = '%s\n%s%s' % (Colors.valEm, indent, ColorsNormal)
- tpl_call = 'in %s%%s%s%%s%s' % (Colors.vName, Colors.valEm,
- ColorsNormal)
- tpl_call_fail = 'in %s%%s%s(***failed resolving arguments***)%s' % \
- (Colors.vName, Colors.valEm, ColorsNormal)
- tpl_name_val = '%%s %s= %%s%s' % (Colors.valEm, ColorsNormal)
+ indent = " " * INDENT_SIZE
+ em_normal = "%s\n%s%s" % (Colors.valEm, indent, ColorsNormal)
+ tpl_call = f"in {Colors.vName}{{file}}{Colors.valEm}{{scope}}{ColorsNormal}"
+ tpl_call_fail = "in %s%%s%s(***failed resolving arguments***)%s" % (
+ Colors.vName,
+ Colors.valEm,
+ ColorsNormal,
+ )
+ tpl_name_val = "%%s %s= %%s%s" % (Colors.valEm, ColorsNormal)
- link = _format_filename(frame_info.filename, Colors.filenameEm, ColorsNormal)
+ link = _format_filename(
+ frame_info.filename,
+ Colors.filenameEm,
+ ColorsNormal,
+ lineno=frame_info.lineno,
+ )
args, varargs, varkw, locals_ = inspect.getargvalues(frame_info.frame)
func = frame_info.executing.code_qualname()
- if func == '<module>':
- call = tpl_call % (func, '')
+ if func == "<module>":
+ call = tpl_call.format(file=func, scope="")
else:
# Decide whether to include variable details or not
var_repr = eqrepr if self.include_vars else nullrepr
try:
- call = tpl_call % (func, inspect.formatargvalues(args,
- varargs, varkw,
- locals_, formatvalue=var_repr))
+ scope = inspect.formatargvalues(
+ args, varargs, varkw, locals_, formatvalue=var_repr
+ )
+ call = tpl_call.format(file=func, scope=scope)
except KeyError:
# This happens in situations like errors inside generator
# expressions, where local variables are listed in the
</patch> | [] | [] | ||||
conda__conda-5359 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
conda should exec to non-conda subcommands, not subprocess
</issue>
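A minimal sketch of the one-line request above (function names invented for illustration; this is not conda's actual CLI code): today an external `conda-<subcommand>` executable is launched as a child process via `subprocess`, whereas `os.exec*` would replace the running conda process so the subcommand owns the terminal, signals, and exit status directly.

```python
import os
from subprocess import call


def run_subcommand_via_subprocess(executable, args):
    # Roughly today's behaviour: fork a child, wait for it, relay its exit code.
    return call([executable] + list(args))


def run_subcommand_via_exec(executable, args):
    # What the issue asks for, sketched: replace the conda process entirely,
    # so the subcommand inherits the terminal, signals and exit status.
    # argv[0] is conventionally the program name; this call never returns.
    os.execv(executable, [executable] + list(args))
```

Note that `os.execv` expects a path to the executable (`os.execvp` would search `PATH` instead), and true process replacement only happens on POSIX systems.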
<code>
[start of README.rst]
1 .. NOTE: This file serves both as the README on GitHub and the index.html for
2 conda.pydata.org. If you update this file, be sure to cd to the web
3 directory and run ``make html; make live``
4
5 .. image:: https://s3.amazonaws.com/conda-dev/conda_logo.svg
6 :alt: Conda Logo
7
8 ----------------------------------------
9
10 .. image:: https://img.shields.io/travis/conda/conda/4.4.x.svg?maxAge=900&label=Linux%20%26%20MacOS
11 :target: https://travis-ci.org/conda/conda
12 :alt: Linux & MacOS tests (Travis)
13
14 .. image:: https://img.shields.io/appveyor/ci/ContinuumAnalyticsFOSS/conda/4.4.x.svg?maxAge=900&label=Windows
15 :target: https://ci.appveyor.com/project/ContinuumAnalyticsFOSS/conda
16 :alt: Windows tests (Appveyor)
17
18 .. image:: https://img.shields.io/codecov/c/github/conda/conda/4.4.x.svg?label=coverage
19 :alt: Codecov Status
20 :target: https://codecov.io/gh/conda/conda/branch/4.4.x
21
22 .. image:: https://img.shields.io/github/release/conda/conda.svg
23 :alt: latest release version
24 :target: https://github.com/conda/conda/releases
25
26 |
27
28 .. image:: https://s3.amazonaws.com/conda-dev/conda-announce-signup-button.svg
29    :alt: Join the Conda Announcement List
30 :target: http://conda.pydata.org/docs/announcements.html
31
32 |
33
34 Conda is a cross-platform, language-agnostic binary package manager. It is the
35 package manager used by `Anaconda
36 <http://docs.continuum.io/anaconda/index.html>`_ installations, but it may be
37 used for other systems as well. Conda makes environments first-class
38 citizens, making it easy to create independent environments even for C
39 libraries. Conda is written entirely in Python, and is BSD licensed open
40 source.
41
42 Conda is enhanced by organizations, tools, and repositories created and managed by
43 the amazing members of the conda community. Some of them can be found
44 `here <https://github.com/conda/conda/wiki/Conda-Community>`_.
45
46
47 Installation
48 ------------
49
50 Conda is a part of the `Anaconda distribution <https://store.continuum.io/cshop/anaconda/>`_. You can also download a
51 minimal installation that only includes conda and its dependencies, called
52 `Miniconda <http://conda.pydata.org/miniconda.html>`_.
53
54
55 Getting Started
56 ---------------
57
58 If you install Anaconda, you will already have hundreds of packages
59 installed. You can see what packages are installed by running
60
61 .. code-block:: bash
62
63 $ conda list
64
65 to see all the packages that are available, use
66
67 .. code-block:: bash
68
69 $ conda search
70
71 and to install a package, use
72
73 .. code-block:: bash
74
75 $ conda install <package-name>
76
77
78 The real power of conda comes from its ability to manage environments. In
79 conda, an environment can be thought of as a completely separate installation.
80 Conda installs packages into environments efficiently using `hard links
81 <http://en.wikipedia.org/wiki/Hard_links>`_ by default when it is possible, so
82 environments are space efficient, and take seconds to create.
83
84 The default environment, which ``conda`` itself is installed into is called
85 ``root``. To create another environment, use the ``conda create``
86 command. For instance, to create an environment with the IPython notebook and
87 NumPy 1.6, which is older than the version that comes with Anaconda by
88 default, you would run
89
90 .. code-block:: bash
91
92 $ conda create -n numpy16 ipython-notebook numpy=1.6
93
94 This creates an environment called ``numpy16`` with the latest version of
95 the IPython notebook, NumPy 1.6, and their dependencies.
96
97 We can now activate this environment, use
98
99 .. code-block:: bash
100
101 # On Linux and Mac OS X
102 $ source activate numpy16
103
104 # On Windows
105 > activate numpy16
106
107 This puts the bin directory of the ``numpy16`` environment in the front of the
108 ``PATH``, and sets it as the default environment for all subsequent conda commands.
109
110 To go back to the root environment, use
111
112 .. code-block:: bash
113
114 # On Linux and Mac OS X
115 $ source deactivate
116
117 # On Windows
118 > deactivate
119
120
121 Building Your Own Packages
122 --------------------------
123
124 You can easily build your own packages for conda, and upload them
125 to `anaconda.org <https://anaconda.org>`_, a free service for hosting
126 packages for conda, as well as other package managers.
127 To build a package, create a recipe.
128 See http://github.com/conda/conda-recipes for many example recipes, and
129 http://docs.continuum.io/conda/build.html for documentation on how to build
130 recipes.
131
132 To upload to anaconda.org, create an account. Then, install the
133 anaconda-client and login
134
135 .. code-block:: bash
136
137 $ conda install anaconda-client
138 $ anaconda login
139
140 Then, after you build your recipe
141
142 .. code-block:: bash
143
144 $ conda build <recipe-dir>
145
146 you will be prompted to upload to anaconda.org.
147
148 To add your anaconda.org channel, or the channel of others to conda so
149 that ``conda install`` will find and install their packages, run
150
151 .. code-block:: bash
152
153 $ conda config --add channels https://conda.anaconda.org/username
154
155 (replacing ``username`` with the user name of the person whose channel you want
156 to add).
157
158 Getting Help
159 ------------
160
161 The documentation for conda is at http://conda.pydata.org/docs/. You can
162 subscribe to the `conda mailing list
163 <https://groups.google.com/a/continuum.io/forum/#!forum/conda>`_. The source
164 code and issue tracker for conda are on `GitHub <https://github.com/conda/conda>`_.
165
166 Contributing
167 ------------
168
169 Contributions to conda are welcome. Just fork the GitHub repository and send a
170 pull request.
171
172 To develop on conda, the easiest way is to use a development build. This can be
173 accomplished as follows:
174
175 * clone the conda git repository to a computer with conda already installed
176 * navigate to the root directory of the git clone
177 * run ``$CONDA/bin/python setup.py develop`` where ``$CONDA`` is the path to your
178 miniconda installation
179
180 Note that building a development version requires git to be installed.
181
182 To undo this, run ``$CONDA/bin/python setup.py develop -u``. Note that if you
183 used a python other than ``$CONDA/bin/python`` to install, you may have to manually
184 delete the conda executable. For example, on OS X, if you use a homebrew python
185 located at ``/usr/local/bin/python``, then you'll need to ``rm /usr/local/bin/conda``
186 so that ``which -a conda`` lists first your miniconda installation.
187
188 If you are worried about breaking your conda installation, you can install a
189 separate instance of `Miniconda <http://conda.pydata.org/miniconda.html>`_ and
190 work off it. This is also the only way to test conda in both Python 2 and
191 Python 3, as conda can only be installed into a root environment.
192
193 To run the tests, set up a testing environment by running
194
195 * ``$CONDA/bin/python -m pip install -r utils/requirements-test.txt``.
196 * ``$CONDA/bin/python utils/setup-testing.py develop``
197
198 and then running ``py.test`` in the conda directory. You can also run tests using the
199 Makefile by running ``make unit``, ``make smoketest`` (a single integration test), or
200 ``make integration``. The tests are also run by various CI systems when you make a
201 pull request.
202
[end of README.rst]
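The README above notes that environments are cheap because conda hard-links package files into them when possible. The following is a self-contained illustration of that filesystem behaviour; it is not conda code and assumes a filesystem that supports hard links.

```python
import os
import tempfile

# Illustration only, not conda code; assumes a filesystem that supports hard links.
with tempfile.TemporaryDirectory() as d:
    cached = os.path.join(d, "package_cache_file")
    linked = os.path.join(d, "env_file")
    with open(cached, "w") as f:
        f.write("package payload")

    os.link(cached, linked)                 # second name for the same data on disk
    assert os.path.samefile(cached, linked)
    print(os.stat(cached).st_nlink)         # 2 -> one inode, two directory entries
```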
[start of conda/_vendor/auxlib/_vendor/five.py]
1 # -*- coding: utf-8 -*-
2 """
3 amqp.five
4 ~~~~~~~~~~~
5
6 Compatibility implementations of features
7 only available in newer Python versions.
8
9
10 """
11 from __future__ import absolute_import
12
13 import io
14 import sys
15
16 try:
17 from collections import Counter
18 except ImportError: # pragma: no cover
19 from collections import defaultdict
20
21 def Counter(): # noqa
22 return defaultdict(int)
23
24 try:
25 buffer_t = buffer
26 except NameError: # pragma: no cover
27 # Py3 does not have buffer, only use this for isa checks.
28
29 class buffer_t(object): # noqa
30 pass
31
32 bytes_t = bytes
33
34 __all__ = ['Counter', 'reload', 'UserList', 'UserDict',
35 'Queue', 'Empty', 'Full', 'LifoQueue', 'builtins',
36 'zip_longest', 'map', 'zip', 'string', 'string_t', 'bytes_t',
37 'long_t', 'text_t', 'int_types', 'module_name_t',
38 'range', 'items', 'keys', 'values', 'nextfun', 'reraise',
39 'WhateverIO', 'with_metaclass', 'open_fqdn', 'StringIO',
40 'THREAD_TIMEOUT_MAX', 'format_d', 'monotonic', 'buffer_t']
41
42
43 # ############# py3k ########################################################
44 PY3 = sys.version_info[0] == 3
45
46 try:
47 reload = reload # noqa
48 except NameError: # pragma: no cover
49 from imp import reload # noqa
50
51 try:
52 from collections import UserList # noqa
53 except ImportError: # pragma: no cover
54 from UserList import UserList # noqa
55
56 try:
57 from collections import UserDict # noqa
58 except ImportError: # pragma: no cover
59 from UserDict import UserDict # noqa
60
61 # ############# time.monotonic #############################################
62
63 if sys.version_info < (3, 3):
64
65 import platform
66 SYSTEM = platform.system()
67
68 try:
69 import ctypes
70 except ImportError: # pragma: no cover
71 ctypes = None # noqa
72
73 if SYSTEM == 'Darwin' and ctypes is not None:
74 from ctypes.util import find_library
75 libSystem = ctypes.CDLL(find_library('libSystem.dylib'))
76 CoreServices = ctypes.CDLL(find_library('CoreServices'),
77 use_errno=True)
78 mach_absolute_time = libSystem.mach_absolute_time
79 mach_absolute_time.restype = ctypes.c_uint64
80 absolute_to_nanoseconds = CoreServices.AbsoluteToNanoseconds
81 absolute_to_nanoseconds.restype = ctypes.c_uint64
82 absolute_to_nanoseconds.argtypes = [ctypes.c_uint64]
83
84 def _monotonic():
85 return absolute_to_nanoseconds(mach_absolute_time()) * 1e-9
86
87 elif SYSTEM == 'Linux' and ctypes is not None:
88 # from stackoverflow:
89 # questions/1205722/how-do-i-get-monotonic-time-durations-in-python
90 import os
91
92 CLOCK_MONOTONIC = 1 # see <linux/time.h>
93
94 class timespec(ctypes.Structure):
95 _fields_ = [
96 ('tv_sec', ctypes.c_long),
97 ('tv_nsec', ctypes.c_long),
98 ]
99
100 librt = ctypes.CDLL('librt.so.1', use_errno=True)
101 clock_gettime = librt.clock_gettime
102 clock_gettime.argtypes = [
103 ctypes.c_int, ctypes.POINTER(timespec),
104 ]
105
106 def _monotonic(): # noqa
107 t = timespec()
108 if clock_gettime(CLOCK_MONOTONIC, ctypes.pointer(t)) != 0:
109 errno_ = ctypes.get_errno()
110 raise OSError(errno_, os.strerror(errno_))
111 return t.tv_sec + t.tv_nsec * 1e-9
112 else:
113 from time import time as _monotonic
114 try:
115 from time import monotonic
116 except ImportError:
117 monotonic = _monotonic # noqa
118
119 # ############# Py3 <-> Py2 #################################################
120
121 if PY3: # pragma: no cover
122 import builtins
123
124 from itertools import zip_longest
125
126 map = map
127 zip = zip
128 string = str
129 string_t = str
130 long_t = int
131 text_t = str
132 range = range
133 int_types = (int,)
134 module_name_t = str
135
136 open_fqdn = 'builtins.open'
137
138 def items(d):
139 return d.items()
140
141 def keys(d):
142 return d.keys()
143
144 def values(d):
145 return d.values()
146
147 def nextfun(it):
148 return it.__next__
149
150 exec_ = getattr(builtins, 'exec')
151
152 def reraise(tp, value, tb=None):
153 if value.__traceback__ is not tb:
154 raise value.with_traceback(tb)
155 raise value
156
157 else:
158 import __builtin__ as builtins # noqa
159 from itertools import ( # noqa
160 imap as map,
161 izip as zip,
162 izip_longest as zip_longest,
163 )
164
165 string = unicode # noqa
166 string_t = basestring # noqa
167 text_t = unicode
168 long_t = long # noqa
169 range = xrange
170 module_name_t = str
171 int_types = (int, long)
172
173 open_fqdn = '__builtin__.open'
174
175 def items(d): # noqa
176 return d.iteritems()
177
178 def keys(d): # noqa
179 return d.iterkeys()
180
181 def values(d): # noqa
182 return d.itervalues()
183
184 def nextfun(it): # noqa
185 return it.next
186
187 def exec_(code, globs=None, locs=None): # pragma: no cover
188 """Execute code in a namespace."""
189 if globs is None:
190 frame = sys._getframe(1)
191 globs = frame.f_globals
192 if locs is None:
193 locs = frame.f_locals
194 del frame
195 elif locs is None:
196 locs = globs
197 exec("""exec code in globs, locs""")
198
199 exec_("""def reraise(tp, value, tb=None): raise tp, value, tb""")
200
201
202 def with_metaclass(Type, skip_attrs=set(('__dict__', '__weakref__'))):
203 """Class decorator to set metaclass.
204
205 Works with both Python 2 and Python 3 and it does not add
206 an extra class in the lookup order like ``six.with_metaclass`` does
207 (that is -- it copies the original class instead of using inheritance).
208
209 """
210
211 def _clone_with_metaclass(Class):
212 attrs = dict((key, value) for key, value in items(vars(Class))
213 if key not in skip_attrs)
214 return Type(Class.__name__, Class.__bases__, attrs)
215
216 return _clone_with_metaclass
217
218 # ############# threading.TIMEOUT_MAX ########################################
219 try:
220 from threading import TIMEOUT_MAX as THREAD_TIMEOUT_MAX
221 except ImportError:
222 THREAD_TIMEOUT_MAX = 1e10 # noqa
223
224 # ############# format(int, ',d') ############################################
225
226 if sys.version_info >= (2, 7): # pragma: no cover
227 def format_d(i):
228 return format(i, ',d')
229 else: # pragma: no cover
230 def format_d(i): # noqa
231 s = '%d' % i
232 groups = []
233 while s and s[-1].isdigit():
234 groups.append(s[-3:])
235 s = s[:-3]
236 return s + ','.join(reversed(groups))
237
238 StringIO = io.StringIO
239 _SIO_write = StringIO.write
240 _SIO_init = StringIO.__init__
241
242
243 class WhateverIO(StringIO):
244
245 def __init__(self, v=None, *a, **kw):
246 _SIO_init(self, v.decode() if isinstance(v, bytes) else v, *a, **kw)
247
248 def write(self, data):
249 _SIO_write(self, data.decode() if isinstance(data, bytes) else data)
[end of conda/_vendor/auxlib/_vendor/five.py]
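A brief usage sketch for the vendored helper above (illustrative only; the `Registry` and `Plugin` names are invented): unlike `six.with_metaclass`, which is used as a base class, this `with_metaclass` is a class decorator that rebuilds the decorated class under the given metaclass.

```python
# Assumes the vendored module is importable at this path.
from conda._vendor.auxlib._vendor.five import with_metaclass


class Registry(type):
    seen = []

    def __init__(cls, name, bases, attrs):
        super(Registry, cls).__init__(name, bases, attrs)
        Registry.seen.append(name)


@with_metaclass(Registry)
class Plugin(object):
    pass


print(type(Plugin) is Registry)   # True
print(Registry.seen)              # ['Plugin']
```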
[start of conda/_vendor/auxlib/_vendor/six.py]
1 # Copyright (c) 2010-2015 Benjamin Peterson
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in all
11 # copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
19 # SOFTWARE.
20
21 """Utilities for writing code that runs on Python 2 and 3"""
22
23 from __future__ import absolute_import
24
25 import functools
26 import itertools
27 import operator
28 import sys
29 import types
30
31 __author__ = "Benjamin Peterson <benjamin@python.org>"
32 __version__ = "1.10.0"
33
34
35 # Useful for very coarse version differentiation.
36 PY2 = sys.version_info[0] == 2
37 PY3 = sys.version_info[0] == 3
38 PY34 = sys.version_info[0:2] >= (3, 4)
39
40 if PY3:
41 string_types = str,
42 integer_types = int,
43 class_types = type,
44 text_type = str
45 binary_type = bytes
46
47 MAXSIZE = sys.maxsize
48 else:
49 string_types = basestring,
50 integer_types = (int, long)
51 class_types = (type, types.ClassType)
52 text_type = unicode
53 binary_type = str
54
55 if sys.platform.startswith("java"):
56 # Jython always uses 32 bits.
57 MAXSIZE = int((1 << 31) - 1)
58 else:
59 # It's possible to have sizeof(long) != sizeof(Py_ssize_t).
60 class X(object):
61
62 def __len__(self):
63 return 1 << 31
64 try:
65 len(X())
66 except OverflowError:
67 # 32-bit
68 MAXSIZE = int((1 << 31) - 1)
69 else:
70 # 64-bit
71 MAXSIZE = int((1 << 63) - 1)
72 del X
73
74
75 def _add_doc(func, doc):
76 """Add documentation to a function."""
77 func.__doc__ = doc
78
79
80 def _import_module(name):
81 """Import module, returning the module after the last dot."""
82 __import__(name)
83 return sys.modules[name]
84
85
86 class _LazyDescr(object):
87
88 def __init__(self, name):
89 self.name = name
90
91 def __get__(self, obj, tp):
92 result = self._resolve()
93 setattr(obj, self.name, result) # Invokes __set__.
94 try:
95 # This is a bit ugly, but it avoids running this again by
96 # removing this descriptor.
97 delattr(obj.__class__, self.name)
98 except AttributeError:
99 pass
100 return result
101
102
103 class MovedModule(_LazyDescr):
104
105 def __init__(self, name, old, new=None):
106 super(MovedModule, self).__init__(name)
107 if PY3:
108 if new is None:
109 new = name
110 self.mod = new
111 else:
112 self.mod = old
113
114 def _resolve(self):
115 return _import_module(self.mod)
116
117 def __getattr__(self, attr):
118 _module = self._resolve()
119 value = getattr(_module, attr)
120 setattr(self, attr, value)
121 return value
122
123
124 class _LazyModule(types.ModuleType):
125
126 def __init__(self, name):
127 super(_LazyModule, self).__init__(name)
128 self.__doc__ = self.__class__.__doc__
129
130 def __dir__(self):
131 attrs = ["__doc__", "__name__"]
132 attrs += [attr.name for attr in self._moved_attributes]
133 return attrs
134
135 # Subclasses should override this
136 _moved_attributes = []
137
138
139 class MovedAttribute(_LazyDescr):
140
141 def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
142 super(MovedAttribute, self).__init__(name)
143 if PY3:
144 if new_mod is None:
145 new_mod = name
146 self.mod = new_mod
147 if new_attr is None:
148 if old_attr is None:
149 new_attr = name
150 else:
151 new_attr = old_attr
152 self.attr = new_attr
153 else:
154 self.mod = old_mod
155 if old_attr is None:
156 old_attr = name
157 self.attr = old_attr
158
159 def _resolve(self):
160 module = _import_module(self.mod)
161 return getattr(module, self.attr)
162
163
164 class _SixMetaPathImporter(object):
165
166 """
167 A meta path importer to import six.moves and its submodules.
168
169 This class implements a PEP302 finder and loader. It should be compatible
170 with Python 2.5 and all existing versions of Python3
171 """
172
173 def __init__(self, six_module_name):
174 self.name = six_module_name
175 self.known_modules = {}
176
177 def _add_module(self, mod, *fullnames):
178 for fullname in fullnames:
179 self.known_modules[self.name + "." + fullname] = mod
180
181 def _get_module(self, fullname):
182 return self.known_modules[self.name + "." + fullname]
183
184 def find_module(self, fullname, path=None):
185 if fullname in self.known_modules:
186 return self
187 return None
188
189 def __get_module(self, fullname):
190 try:
191 return self.known_modules[fullname]
192 except KeyError:
193 raise ImportError("This loader does not know module " + fullname)
194
195 def load_module(self, fullname):
196 try:
197 # in case of a reload
198 return sys.modules[fullname]
199 except KeyError:
200 pass
201 mod = self.__get_module(fullname)
202 if isinstance(mod, MovedModule):
203 mod = mod._resolve()
204 else:
205 mod.__loader__ = self
206 sys.modules[fullname] = mod
207 return mod
208
209 def is_package(self, fullname):
210 """
211 Return true, if the named module is a package.
212
213 We need this method to get correct spec objects with
214 Python 3.4 (see PEP451)
215 """
216 return hasattr(self.__get_module(fullname), "__path__")
217
218 def get_code(self, fullname):
219 """Return None
220
221 Required, if is_package is implemented"""
222 self.__get_module(fullname) # eventually raises ImportError
223 return None
224 get_source = get_code # same as get_code
225
226 _importer = _SixMetaPathImporter(__name__)
227
228
229 class _MovedItems(_LazyModule):
230
231 """Lazy loading of moved objects"""
232 __path__ = [] # mark as package
233
234
235 _moved_attributes = [
236 MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
237 MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
238 MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
239 MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
240 MovedAttribute("intern", "__builtin__", "sys"),
241 MovedAttribute("map", "itertools", "builtins", "imap", "map"),
242 MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"),
243 MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"),
244 MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
245 MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"),
246 MovedAttribute("reduce", "__builtin__", "functools"),
247 MovedAttribute("shlex_quote", "pipes", "shlex", "quote"),
248 MovedAttribute("StringIO", "StringIO", "io"),
249 MovedAttribute("UserDict", "UserDict", "collections"),
250 MovedAttribute("UserList", "UserList", "collections"),
251 MovedAttribute("UserString", "UserString", "collections"),
252 MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
253 MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
254 MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),
255 MovedModule("builtins", "__builtin__"),
256 MovedModule("configparser", "ConfigParser"),
257 MovedModule("copyreg", "copy_reg"),
258 MovedModule("dbm_gnu", "gdbm", "dbm.gnu"),
259 MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"),
260 MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
261 MovedModule("http_cookies", "Cookie", "http.cookies"),
262 MovedModule("html_entities", "htmlentitydefs", "html.entities"),
263 MovedModule("html_parser", "HTMLParser", "html.parser"),
264 MovedModule("http_client", "httplib", "http.client"),
265 MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
266 MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"),
267 MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
268 MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
269 MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
270 MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
271 MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
272 MovedModule("cPickle", "cPickle", "pickle"),
273 MovedModule("queue", "Queue"),
274 MovedModule("reprlib", "repr"),
275 MovedModule("socketserver", "SocketServer"),
276 MovedModule("_thread", "thread", "_thread"),
277 MovedModule("tkinter", "Tkinter"),
278 MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
279 MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
280 MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
281 MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
282 MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
283 MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"),
284 MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
285 MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
286 MovedModule("tkinter_colorchooser", "tkColorChooser",
287 "tkinter.colorchooser"),
288 MovedModule("tkinter_commondialog", "tkCommonDialog",
289 "tkinter.commondialog"),
290 MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
291 MovedModule("tkinter_font", "tkFont", "tkinter.font"),
292 MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
293 MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
294 "tkinter.simpledialog"),
295 MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
296 MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
297 MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
298 MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
299 MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"),
300 MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"),
301 ]
302 # Add windows specific modules.
303 if sys.platform == "win32":
304 _moved_attributes += [
305 MovedModule("winreg", "_winreg"),
306 ]
307
308 for attr in _moved_attributes:
309 setattr(_MovedItems, attr.name, attr)
310 if isinstance(attr, MovedModule):
311 _importer._add_module(attr, "moves." + attr.name)
312 del attr
313
314 _MovedItems._moved_attributes = _moved_attributes
315
316 moves = _MovedItems(__name__ + ".moves")
317 _importer._add_module(moves, "moves")
318
319
320 class Module_six_moves_urllib_parse(_LazyModule):
321
322 """Lazy loading of moved objects in six.moves.urllib_parse"""
323
324
325 _urllib_parse_moved_attributes = [
326 MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
327 MovedAttribute("SplitResult", "urlparse", "urllib.parse"),
328 MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
329 MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
330 MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
331 MovedAttribute("urljoin", "urlparse", "urllib.parse"),
332 MovedAttribute("urlparse", "urlparse", "urllib.parse"),
333 MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
334 MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
335 MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
336 MovedAttribute("quote", "urllib", "urllib.parse"),
337 MovedAttribute("quote_plus", "urllib", "urllib.parse"),
338 MovedAttribute("unquote", "urllib", "urllib.parse"),
339 MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
340 MovedAttribute("urlencode", "urllib", "urllib.parse"),
341 MovedAttribute("splitquery", "urllib", "urllib.parse"),
342 MovedAttribute("splittag", "urllib", "urllib.parse"),
343 MovedAttribute("splituser", "urllib", "urllib.parse"),
344 MovedAttribute("uses_fragment", "urlparse", "urllib.parse"),
345 MovedAttribute("uses_netloc", "urlparse", "urllib.parse"),
346 MovedAttribute("uses_params", "urlparse", "urllib.parse"),
347 MovedAttribute("uses_query", "urlparse", "urllib.parse"),
348 MovedAttribute("uses_relative", "urlparse", "urllib.parse"),
349 ]
350 for attr in _urllib_parse_moved_attributes:
351 setattr(Module_six_moves_urllib_parse, attr.name, attr)
352 del attr
353
354 Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes
355
356 _importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"),
357 "moves.urllib_parse", "moves.urllib.parse")
358
359
360 class Module_six_moves_urllib_error(_LazyModule):
361
362 """Lazy loading of moved objects in six.moves.urllib_error"""
363
364
365 _urllib_error_moved_attributes = [
366 MovedAttribute("URLError", "urllib2", "urllib.error"),
367 MovedAttribute("HTTPError", "urllib2", "urllib.error"),
368 MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
369 ]
370 for attr in _urllib_error_moved_attributes:
371 setattr(Module_six_moves_urllib_error, attr.name, attr)
372 del attr
373
374 Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes
375
376 _importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"),
377 "moves.urllib_error", "moves.urllib.error")
378
379
380 class Module_six_moves_urllib_request(_LazyModule):
381
382 """Lazy loading of moved objects in six.moves.urllib_request"""
383
384
385 _urllib_request_moved_attributes = [
386 MovedAttribute("urlopen", "urllib2", "urllib.request"),
387 MovedAttribute("install_opener", "urllib2", "urllib.request"),
388 MovedAttribute("build_opener", "urllib2", "urllib.request"),
389 MovedAttribute("pathname2url", "urllib", "urllib.request"),
390 MovedAttribute("url2pathname", "urllib", "urllib.request"),
391 MovedAttribute("getproxies", "urllib", "urllib.request"),
392 MovedAttribute("Request", "urllib2", "urllib.request"),
393 MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
394 MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
395 MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
396 MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
397 MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
398 MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
399 MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
400 MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
401 MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
402 MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
403 MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
404 MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
405 MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
406 MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
407 MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
408 MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
409 MovedAttribute("FileHandler", "urllib2", "urllib.request"),
410 MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
411 MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
412 MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
413 MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
414 MovedAttribute("urlretrieve", "urllib", "urllib.request"),
415 MovedAttribute("urlcleanup", "urllib", "urllib.request"),
416 MovedAttribute("URLopener", "urllib", "urllib.request"),
417 MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
418 MovedAttribute("proxy_bypass", "urllib", "urllib.request"),
419 ]
420 for attr in _urllib_request_moved_attributes:
421 setattr(Module_six_moves_urllib_request, attr.name, attr)
422 del attr
423
424 Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes
425
426 _importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"),
427 "moves.urllib_request", "moves.urllib.request")
428
429
430 class Module_six_moves_urllib_response(_LazyModule):
431
432 """Lazy loading of moved objects in six.moves.urllib_response"""
433
434
435 _urllib_response_moved_attributes = [
436 MovedAttribute("addbase", "urllib", "urllib.response"),
437 MovedAttribute("addclosehook", "urllib", "urllib.response"),
438 MovedAttribute("addinfo", "urllib", "urllib.response"),
439 MovedAttribute("addinfourl", "urllib", "urllib.response"),
440 ]
441 for attr in _urllib_response_moved_attributes:
442 setattr(Module_six_moves_urllib_response, attr.name, attr)
443 del attr
444
445 Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes
446
447 _importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"),
448 "moves.urllib_response", "moves.urllib.response")
449
450
451 class Module_six_moves_urllib_robotparser(_LazyModule):
452
453 """Lazy loading of moved objects in six.moves.urllib_robotparser"""
454
455
456 _urllib_robotparser_moved_attributes = [
457 MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
458 ]
459 for attr in _urllib_robotparser_moved_attributes:
460 setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
461 del attr
462
463 Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes
464
465 _importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"),
466 "moves.urllib_robotparser", "moves.urllib.robotparser")
467
468
469 class Module_six_moves_urllib(types.ModuleType):
470
471 """Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
472 __path__ = [] # mark as package
473 parse = _importer._get_module("moves.urllib_parse")
474 error = _importer._get_module("moves.urllib_error")
475 request = _importer._get_module("moves.urllib_request")
476 response = _importer._get_module("moves.urllib_response")
477 robotparser = _importer._get_module("moves.urllib_robotparser")
478
479 def __dir__(self):
480 return ['parse', 'error', 'request', 'response', 'robotparser']
481
482 _importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"),
483 "moves.urllib")
484
485
486 def add_move(move):
487 """Add an item to six.moves."""
488 setattr(_MovedItems, move.name, move)
489
490
491 def remove_move(name):
492 """Remove item from six.moves."""
493 try:
494 delattr(_MovedItems, name)
495 except AttributeError:
496 try:
497 del moves.__dict__[name]
498 except KeyError:
499 raise AttributeError("no such move, %r" % (name,))
500
501
502 if PY3:
503 _meth_func = "__func__"
504 _meth_self = "__self__"
505
506 _func_closure = "__closure__"
507 _func_code = "__code__"
508 _func_defaults = "__defaults__"
509 _func_globals = "__globals__"
510 else:
511 _meth_func = "im_func"
512 _meth_self = "im_self"
513
514 _func_closure = "func_closure"
515 _func_code = "func_code"
516 _func_defaults = "func_defaults"
517 _func_globals = "func_globals"
518
519
520 try:
521 advance_iterator = next
522 except NameError:
523 def advance_iterator(it):
524 return it.next()
525 next = advance_iterator
526
527
528 try:
529 callable = callable
530 except NameError:
531 def callable(obj):
532 return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
533
534
535 if PY3:
536 def get_unbound_function(unbound):
537 return unbound
538
539 create_bound_method = types.MethodType
540
541 def create_unbound_method(func, cls):
542 return func
543
544 Iterator = object
545 else:
546 def get_unbound_function(unbound):
547 return unbound.im_func
548
549 def create_bound_method(func, obj):
550 return types.MethodType(func, obj, obj.__class__)
551
552 def create_unbound_method(func, cls):
553 return types.MethodType(func, None, cls)
554
555 class Iterator(object):
556
557 def next(self):
558 return type(self).__next__(self)
559
560 callable = callable
561 _add_doc(get_unbound_function,
562 """Get the function out of a possibly unbound function""")
563
564
565 get_method_function = operator.attrgetter(_meth_func)
566 get_method_self = operator.attrgetter(_meth_self)
567 get_function_closure = operator.attrgetter(_func_closure)
568 get_function_code = operator.attrgetter(_func_code)
569 get_function_defaults = operator.attrgetter(_func_defaults)
570 get_function_globals = operator.attrgetter(_func_globals)
571
572
573 if PY3:
574 def iterkeys(d, **kw):
575 return iter(d.keys(**kw))
576
577 def itervalues(d, **kw):
578 return iter(d.values(**kw))
579
580 def iteritems(d, **kw):
581 return iter(d.items(**kw))
582
583 def iterlists(d, **kw):
584 return iter(d.lists(**kw))
585
586 viewkeys = operator.methodcaller("keys")
587
588 viewvalues = operator.methodcaller("values")
589
590 viewitems = operator.methodcaller("items")
591 else:
592 def iterkeys(d, **kw):
593 return d.iterkeys(**kw)
594
595 def itervalues(d, **kw):
596 return d.itervalues(**kw)
597
598 def iteritems(d, **kw):
599 return d.iteritems(**kw)
600
601 def iterlists(d, **kw):
602 return d.iterlists(**kw)
603
604 viewkeys = operator.methodcaller("viewkeys")
605
606 viewvalues = operator.methodcaller("viewvalues")
607
608 viewitems = operator.methodcaller("viewitems")
609
610 _add_doc(iterkeys, "Return an iterator over the keys of a dictionary.")
611 _add_doc(itervalues, "Return an iterator over the values of a dictionary.")
612 _add_doc(iteritems,
613 "Return an iterator over the (key, value) pairs of a dictionary.")
614 _add_doc(iterlists,
615 "Return an iterator over the (key, [values]) pairs of a dictionary.")
616
617
618 if PY3:
619 def b(s):
620 return s.encode("latin-1")
621
622 def u(s):
623 return s
624 unichr = chr
625 import struct
626 int2byte = struct.Struct(">B").pack
627 del struct
628 byte2int = operator.itemgetter(0)
629 indexbytes = operator.getitem
630 iterbytes = iter
631 import io
632 StringIO = io.StringIO
633 BytesIO = io.BytesIO
634 _assertCountEqual = "assertCountEqual"
635 if sys.version_info[1] <= 1:
636 _assertRaisesRegex = "assertRaisesRegexp"
637 _assertRegex = "assertRegexpMatches"
638 else:
639 _assertRaisesRegex = "assertRaisesRegex"
640 _assertRegex = "assertRegex"
641 else:
642 def b(s):
643 return s
644 # Workaround for standalone backslash
645
646 def u(s):
647 return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape")
648 unichr = unichr
649 int2byte = chr
650
651 def byte2int(bs):
652 return ord(bs[0])
653
654 def indexbytes(buf, i):
655 return ord(buf[i])
656 iterbytes = functools.partial(itertools.imap, ord)
657 import StringIO
658 StringIO = BytesIO = StringIO.StringIO
659 _assertCountEqual = "assertItemsEqual"
660 _assertRaisesRegex = "assertRaisesRegexp"
661 _assertRegex = "assertRegexpMatches"
662 _add_doc(b, """Byte literal""")
663 _add_doc(u, """Text literal""")
664
665
666 def assertCountEqual(self, *args, **kwargs):
667 return getattr(self, _assertCountEqual)(*args, **kwargs)
668
669
670 def assertRaisesRegex(self, *args, **kwargs):
671 return getattr(self, _assertRaisesRegex)(*args, **kwargs)
672
673
674 def assertRegex(self, *args, **kwargs):
675 return getattr(self, _assertRegex)(*args, **kwargs)
676
677
678 if PY3:
679 exec_ = getattr(moves.builtins, "exec")
680
681 def reraise(tp, value, tb=None):
682 if value is None:
683 value = tp()
684 if value.__traceback__ is not tb:
685 raise value.with_traceback(tb)
686 raise value
687
688 else:
689 def exec_(_code_, _globs_=None, _locs_=None):
690 """Execute code in a namespace."""
691 if _globs_ is None:
692 frame = sys._getframe(1)
693 _globs_ = frame.f_globals
694 if _locs_ is None:
695 _locs_ = frame.f_locals
696 del frame
697 elif _locs_ is None:
698 _locs_ = _globs_
699 exec("""exec _code_ in _globs_, _locs_""")
700
701 exec_("""def reraise(tp, value, tb=None):
702 raise tp, value, tb
703 """)
704
705
706 if sys.version_info[:2] == (3, 2):
707 exec_("""def raise_from(value, from_value):
708 if from_value is None:
709 raise value
710 raise value from from_value
711 """)
712 elif sys.version_info[:2] > (3, 2):
713 exec_("""def raise_from(value, from_value):
714 raise value from from_value
715 """)
716 else:
717 def raise_from(value, from_value):
718 raise value
719
720
721 print_ = getattr(moves.builtins, "print", None)
722 if print_ is None:
723 def print_(*args, **kwargs):
724 """The new-style print function for Python 2.4 and 2.5."""
725 fp = kwargs.pop("file", sys.stdout)
726 if fp is None:
727 return
728
729 def write(data):
730 if not isinstance(data, basestring):
731 data = str(data)
732 # If the file has an encoding, encode unicode with it.
733 if (isinstance(fp, file) and
734 isinstance(data, unicode) and
735 fp.encoding is not None):
736 errors = getattr(fp, "errors", None)
737 if errors is None:
738 errors = "strict"
739 data = data.encode(fp.encoding, errors)
740 fp.write(data)
741 want_unicode = False
742 sep = kwargs.pop("sep", None)
743 if sep is not None:
744 if isinstance(sep, unicode):
745 want_unicode = True
746 elif not isinstance(sep, str):
747 raise TypeError("sep must be None or a string")
748 end = kwargs.pop("end", None)
749 if end is not None:
750 if isinstance(end, unicode):
751 want_unicode = True
752 elif not isinstance(end, str):
753 raise TypeError("end must be None or a string")
754 if kwargs:
755 raise TypeError("invalid keyword arguments to print()")
756 if not want_unicode:
757 for arg in args:
758 if isinstance(arg, unicode):
759 want_unicode = True
760 break
761 if want_unicode:
762 newline = unicode("\n")
763 space = unicode(" ")
764 else:
765 newline = "\n"
766 space = " "
767 if sep is None:
768 sep = space
769 if end is None:
770 end = newline
771 for i, arg in enumerate(args):
772 if i:
773 write(sep)
774 write(arg)
775 write(end)
776 if sys.version_info[:2] < (3, 3):
777 _print = print_
778
779 def print_(*args, **kwargs):
780 fp = kwargs.get("file", sys.stdout)
781 flush = kwargs.pop("flush", False)
782 _print(*args, **kwargs)
783 if flush and fp is not None:
784 fp.flush()
785
786 _add_doc(reraise, """Reraise an exception.""")
787
788 if sys.version_info[0:2] < (3, 4):
789 def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS,
790 updated=functools.WRAPPER_UPDATES):
791 def wrapper(f):
792 f = functools.wraps(wrapped, assigned, updated)(f)
793 f.__wrapped__ = wrapped
794 return f
795 return wrapper
796 else:
797 wraps = functools.wraps
798
799
800 def with_metaclass(meta, *bases):
801 """Create a base class with a metaclass."""
802 # This requires a bit of explanation: the basic idea is to make a dummy
803 # metaclass for one level of class instantiation that replaces itself with
804 # the actual metaclass.
805 class metaclass(meta):
806
807 def __new__(cls, name, this_bases, d):
808 return meta(name, bases, d)
809 return type.__new__(metaclass, 'temporary_class', (), {})
810
811
812 def add_metaclass(metaclass):
813 """Class decorator for creating a class with a metaclass."""
814 def wrapper(cls):
815 orig_vars = cls.__dict__.copy()
816 slots = orig_vars.get('__slots__')
817 if slots is not None:
818 if isinstance(slots, str):
819 slots = [slots]
820 for slots_var in slots:
821 orig_vars.pop(slots_var)
822 orig_vars.pop('__dict__', None)
823 orig_vars.pop('__weakref__', None)
824 return metaclass(cls.__name__, cls.__bases__, orig_vars)
825 return wrapper
826
827
828 def python_2_unicode_compatible(klass):
829 """
830 A decorator that defines __unicode__ and __str__ methods under Python 2.
831 Under Python 3 it does nothing.
832
833 To support Python 2 and 3 with a single code base, define a __str__ method
834 returning text and apply this decorator to the class.
835 """
836 if PY2:
837 if '__str__' not in klass.__dict__:
838 raise ValueError("@python_2_unicode_compatible cannot be applied "
839 "to %s because it doesn't define __str__()." %
840 klass.__name__)
841 klass.__unicode__ = klass.__str__
842 klass.__str__ = lambda self: self.__unicode__().encode('utf-8')
843 return klass
844
845
846 # Complete the moves implementation.
847 # This code is at the end of this module to speed up module loading.
848 # Turn this module into a package.
849 __path__ = [] # required for PEP 302 and PEP 451
850 __package__ = __name__ # see PEP 366 @ReservedAssignment
851 if globals().get("__spec__") is not None:
852 __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable
853 # Remove other six meta path importers, since they cause problems. This can
854 # happen if six is removed from sys.modules and then reloaded. (Setuptools does
855 # this for some reason.)
856 if sys.meta_path:
857 for i, importer in enumerate(sys.meta_path):
858 # Here's some real nastiness: Another "instance" of the six module might
859 # be floating around. Therefore, we can't use isinstance() to check for
860 # the six meta path importer, since the other six instance will have
861 # inserted an importer with different class.
862 if (type(importer).__name__ == "_SixMetaPathImporter" and
863 importer.name == __name__):
864 del sys.meta_path[i]
865 break
866 del i, importer
867 # Finally, add the importer to the meta path import hook.
868 sys.meta_path.append(_importer)
869
[end of conda/_vendor/auxlib/_vendor/six.py]
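The vendored module above is a copy of `six`. As a quick illustration (not part of the file), the `with_metaclass()` helper lets a single class statement pick up a custom metaclass under both Python 2 and Python 3. The `Registry`/`Plugin` names below are invented for this example, and the import assumes this conda checkout is importable (the standalone `six` package exposes the same helper).

```python
from conda._vendor.auxlib._vendor.six import with_metaclass  # or: from six import with_metaclass


class Registry(type):
    """Hypothetical metaclass that records every class created with it."""
    classes = []

    def __new__(mcls, name, bases, namespace):
        cls = super(Registry, mcls).__new__(mcls, name, bases, namespace)
        Registry.classes.append(cls)
        return cls


# with_metaclass() builds a throwaway intermediate class, so this one statement
# works unchanged under both the Python 2 and Python 3 metaclass syntaxes.
class Plugin(with_metaclass(Registry, object)):
    pass


assert type(Plugin) is Registry
assert Plugin in Registry.classes
```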
[start of conda/activate.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from glob import glob
5 import os
6 from os.path import abspath, basename, dirname, expanduser, expandvars, isdir, join
7 import re
8 import sys
9 from tempfile import NamedTemporaryFile
10
11 try:
12 from cytoolz.itertoolz import concatv
13 except ImportError: # pragma: no cover
14 from ._vendor.toolz.itertoolz import concatv # NOQA
15
16
17 class Activator(object):
18 # Activate and deactivate have three tasks
19 # 1. Set and unset environment variables
20 # 2. Execute/source activate.d/deactivate.d scripts
21 # 3. Update the command prompt
22 #
23 # Shells should also use 'reactivate' following conda's install, update, and
24 # remove/uninstall commands.
25 #
26 # All core logic is in build_activate() or build_deactivate(), and is independent of
27 # shell type. Each returns a map containing the keys:
28 # set_vars
29 # unset_var
30 # activate_scripts
31 # deactivate_scripts
32 #
33 # The value of the CONDA_PROMPT_MODIFIER environment variable holds conda's contribution
34 # to the command prompt.
35 #
36 # To implement support for a new shell, ideally one would only need to add shell-specific
37 # information to the __init__ method of this class.
38
39 def __init__(self, shell):
40 from .base.context import context
41 self.context = context
42 self.shell = shell
43
44 if shell == 'posix':
45 self.pathsep_join = ':'.join
46 self.path_conversion = native_path_to_unix
47 self.script_extension = '.sh'
48 self.tempfile_extension = None # write instructions to stdout rather than a temp file
49
50 self.unset_var_tmpl = 'unset %s'
51 self.set_var_tmpl = 'export %s="%s"'
52 self.run_script_tmpl = '. "%s"'
53
54 elif shell == 'csh':
55 self.pathsep_join = ':'.join
56 self.path_conversion = native_path_to_unix
57 self.script_extension = '.csh'
58 self.tempfile_extension = None # write instructions to stdout rather than a temp file
59
60 self.unset_var_tmpl = 'unset %s'
61 self.set_var_tmpl = 'setenv %s "%s"'
62 self.run_script_tmpl = 'source "%s"'
63
64 elif shell == 'xonsh':
65 self.pathsep_join = ':'.join
66 self.path_conversion = native_path_to_unix
67 self.script_extension = '.xsh'
68 self.tempfile_extension = '.xsh'
69
70 self.unset_var_tmpl = 'del $%s'
71 self.set_var_tmpl = '$%s = "%s"'
72 self.run_script_tmpl = 'source "%s"'
73
74 elif shell == 'cmd.exe':
75 self.pathsep_join = ';'.join
76 self.path_conversion = path_identity
77 self.script_extension = '.bat'
78 self.tempfile_extension = '.bat'
79
80 self.unset_var_tmpl = '@SET %s='
81 self.set_var_tmpl = '@SET "%s=%s"'
82 self.run_script_tmpl = '@CALL "%s"'
83
84 elif shell == 'fish':
85 self.pathsep_join = ' '.join
86 self.path_conversion = native_path_to_unix
87 self.script_extension = '.fish'
88 self.tempfile_extension = None # write instructions to stdout rather than a temp file
89
90 self.unset_var_tmpl = 'set -e %s'
91 self.set_var_tmpl = 'set -gx %s "%s"'
92 self.run_script_tmpl = 'source "%s"'
93
94 elif shell == 'powershell':
95 self.pathsep_join = ';'.join
96 self.path_conversion = path_identity
97 self.script_extension = '.ps1'
98 self.tempfile_extension = None # write instructions to stdout rather than a temp file
99
100 self.unset_var_tmpl = 'Remove-Variable %s'
101 self.set_var_tmpl = '$env:%s = "%s"'
102 self.run_script_tmpl = '. "%s"'
103
104 else:
105 raise NotImplementedError()
106
107 def _finalize(self, commands, ext):
108 commands = concatv(commands, ('',)) # add terminating newline
109 if ext is None:
110 return '\n'.join(commands)
111 elif ext:
112 with NamedTemporaryFile(suffix=ext, delete=False) as tf:
113 tf.write(ensure_binary('\n'.join(commands)))
114 return tf.name
115 else:
116 raise NotImplementedError()
117
118 def activate(self, name_or_prefix):
119 return self._finalize(self._yield_commands(self.build_activate(name_or_prefix)),
120 self.tempfile_extension)
121
122 def deactivate(self):
123 return self._finalize(self._yield_commands(self.build_deactivate()),
124 self.tempfile_extension)
125
126 def reactivate(self):
127 return self._finalize(self._yield_commands(self.build_reactivate()),
128 self.tempfile_extension)
129
130 def _yield_commands(self, cmds_dict):
131 for key in sorted(cmds_dict.get('unset_vars', ())):
132 yield self.unset_var_tmpl % key
133
134 for key, value in sorted(iteritems(cmds_dict.get('set_vars', {}))):
135 yield self.set_var_tmpl % (key, value)
136
137 for script in cmds_dict.get('deactivate_scripts', ()):
138 yield self.run_script_tmpl % script
139
140 for script in cmds_dict.get('activate_scripts', ()):
141 yield self.run_script_tmpl % script
142
143 def build_activate(self, name_or_prefix):
144 test_path = expand(name_or_prefix)
145 if isdir(test_path):
146 prefix = test_path
147 if not isdir(join(prefix, 'conda-meta')):
148 from .exceptions import EnvironmentLocationNotFound
149 raise EnvironmentLocationNotFound(prefix)
150 elif re.search(r'\\|/', name_or_prefix):
151 prefix = name_or_prefix
152 if not isdir(join(prefix, 'conda-meta')):
153 from .exceptions import EnvironmentLocationNotFound
154 raise EnvironmentLocationNotFound(prefix)
155 else:
156 from .base.context import locate_prefix_by_name
157 prefix = locate_prefix_by_name(self.context, name_or_prefix)
158
159 # query environment
160 old_conda_shlvl = int(os.getenv('CONDA_SHLVL', 0))
161 old_conda_prefix = os.getenv('CONDA_PREFIX')
162 max_shlvl = self.context.max_shlvl
163
164 if old_conda_prefix == prefix:
165 return self.build_reactivate()
166 elif os.getenv('CONDA_PREFIX_%s' % (old_conda_shlvl-1)) == prefix:
167 # in this case, user is attempting to activate the previous environment,
168 # i.e. step back down
169 return self.build_deactivate()
170
171 activate_scripts = glob(join(
172 prefix, 'etc', 'conda', 'activate.d', '*' + self.script_extension
173 ))
174 conda_default_env = self._default_env(prefix)
175 conda_prompt_modifier = self._prompt_modifier(conda_default_env)
176
177 assert 0 <= old_conda_shlvl <= max_shlvl
178 if old_conda_shlvl == 0:
179 new_path = self.pathsep_join(self._add_prefix_to_path(prefix))
180 set_vars = {
181 'CONDA_PYTHON_EXE': sys.executable,
182 'PATH': new_path,
183 'CONDA_PREFIX': prefix,
184 'CONDA_SHLVL': old_conda_shlvl + 1,
185 'CONDA_DEFAULT_ENV': conda_default_env,
186 'CONDA_PROMPT_MODIFIER': conda_prompt_modifier,
187 }
188 deactivate_scripts = ()
189 elif old_conda_shlvl == max_shlvl:
190 new_path = self.pathsep_join(self._replace_prefix_in_path(old_conda_prefix, prefix))
191 set_vars = {
192 'PATH': new_path,
193 'CONDA_PREFIX': prefix,
194 'CONDA_DEFAULT_ENV': conda_default_env,
195 'CONDA_PROMPT_MODIFIER': conda_prompt_modifier,
196 }
197 deactivate_scripts = glob(join(
198 old_conda_prefix, 'etc', 'conda', 'deactivate.d', '*' + self.script_extension
199 ))
200 else:
201 new_path = self.pathsep_join(self._add_prefix_to_path(prefix))
202 set_vars = {
203 'PATH': new_path,
204 'CONDA_PREFIX': prefix,
205 'CONDA_PREFIX_%d' % old_conda_shlvl: old_conda_prefix,
206 'CONDA_SHLVL': old_conda_shlvl + 1,
207 'CONDA_DEFAULT_ENV': conda_default_env,
208 'CONDA_PROMPT_MODIFIER': conda_prompt_modifier,
209 }
210 deactivate_scripts = ()
211
212 return {
213 'unset_vars': (),
214 'set_vars': set_vars,
215 'deactivate_scripts': deactivate_scripts,
216 'activate_scripts': activate_scripts,
217 }
218
219 def build_deactivate(self):
220 # query environment
221 old_conda_shlvl = int(os.getenv('CONDA_SHLVL', 0))
222 old_conda_prefix = os.environ['CONDA_PREFIX']
223 deactivate_scripts = self._get_deactivate_scripts(old_conda_prefix)
224
225 new_conda_shlvl = old_conda_shlvl - 1
226 new_path = self.pathsep_join(self._remove_prefix_from_path(old_conda_prefix))
227
228 assert old_conda_shlvl > 0
229 if old_conda_shlvl == 1:
230 # TODO: warn conda floor
231 unset_vars = (
232 'CONDA_PREFIX',
233 'CONDA_DEFAULT_ENV',
234 'CONDA_PYTHON_EXE',
235 'CONDA_PROMPT_MODIFIER',
236 )
237 set_vars = {
238 'PATH': new_path,
239 'CONDA_SHLVL': new_conda_shlvl,
240 }
241 activate_scripts = ()
242 else:
243 new_prefix = os.getenv('CONDA_PREFIX_%d' % new_conda_shlvl)
244 conda_default_env = self._default_env(new_prefix)
245 conda_prompt_modifier = self._prompt_modifier(conda_default_env)
246
247 unset_vars = (
248 'CONDA_PREFIX_%d' % new_conda_shlvl,
249 )
250 set_vars = {
251 'PATH': new_path,
252 'CONDA_SHLVL': new_conda_shlvl,
253 'CONDA_PREFIX': new_prefix,
254 'CONDA_DEFAULT_ENV': conda_default_env,
255 'CONDA_PROMPT_MODIFIER': conda_prompt_modifier,
256 }
257 activate_scripts = self._get_activate_scripts(new_prefix)
258
259 return {
260 'unset_vars': unset_vars,
261 'set_vars': set_vars,
262 'deactivate_scripts': deactivate_scripts,
263 'activate_scripts': activate_scripts,
264 }
265
266 def build_reactivate(self):
267 conda_prefix = os.environ['CONDA_PREFIX']
268 return {
269 'unset_vars': (),
270 'set_vars': {},
271 'deactivate_scripts': self._get_deactivate_scripts(conda_prefix),
272 'activate_scripts': self._get_activate_scripts(conda_prefix),
273 }
274
275 def _get_starting_path_list(self):
276 path = os.environ['PATH']
277 if on_win:
278 # on Windows, the python interpreter prepends sys.prefix\Library\bin on startup WTF
279 return path.split(os.pathsep)[1:]
280 else:
281 return path.split(os.pathsep)
282
283 def _get_path_dirs(self, prefix):
284 if on_win: # pragma: unix no cover
285 yield prefix.rstrip("\\")
286 yield join(prefix, 'Library', 'mingw-w64', 'bin')
287 yield join(prefix, 'Library', 'usr', 'bin')
288 yield join(prefix, 'Library', 'bin')
289 yield join(prefix, 'Scripts')
290 else:
291 yield join(prefix, 'bin')
292
293 def _add_prefix_to_path(self, prefix, starting_path_dirs=None):
294 if starting_path_dirs is None:
295 starting_path_dirs = self._get_starting_path_list()
296 return self.path_conversion(*tuple(concatv(
297 self._get_path_dirs(prefix),
298 starting_path_dirs,
299 )))
300
301 def _remove_prefix_from_path(self, prefix, starting_path_dirs=None):
302 return self._replace_prefix_in_path(prefix, None, starting_path_dirs)
303
304 def _replace_prefix_in_path(self, old_prefix, new_prefix, starting_path_dirs=None):
305 if starting_path_dirs is None:
306 path_list = self._get_starting_path_list()
307 else:
308 path_list = list(starting_path_dirs)
309 if on_win: # pragma: unix no cover
310 # windows has a nasty habit of adding extra Library\bin directories
311 prefix_dirs = tuple(self._get_path_dirs(old_prefix))
312 try:
313 first_idx = path_list.index(prefix_dirs[0])
314 except ValueError:
315 first_idx = 0
316 else:
317 last_idx = path_list.index(prefix_dirs[-1])
318 del path_list[first_idx:last_idx+1]
319 if new_prefix is not None:
320 path_list[first_idx:first_idx] = list(self._get_path_dirs(new_prefix))
321 else:
322 try:
323 idx = path_list.index(join(old_prefix, 'bin'))
324 except ValueError:
325 idx = 0
326 else:
327 del path_list[idx]
328 if new_prefix is not None:
329 path_list.insert(idx, join(new_prefix, 'bin'))
330 return self.path_conversion(*path_list)
331
332 def _default_env(self, prefix):
333 if prefix == self.context.root_prefix:
334 return 'root'
335 return basename(prefix) if basename(dirname(prefix)) == 'envs' else prefix
336
337 def _prompt_modifier(self, conda_default_env):
338 return "(%s) " % conda_default_env if self.context.changeps1 else ""
339
340 def _get_activate_scripts(self, prefix):
341 return glob(join(
342 prefix, 'etc', 'conda', 'activate.d', '*' + self.script_extension
343 ))
344
345 def _get_deactivate_scripts(self, prefix):
346 return glob(join(
347 prefix, 'etc', 'conda', 'deactivate.d', '*' + self.script_extension
348 ))
349
350
351 def expand(path):
352 return abspath(expanduser(expandvars(path)))
353
354
355 def ensure_binary(value):
356 try:
357 return value.encode('utf-8')
358 except AttributeError: # pragma: no cover
359 # AttributeError: '<>' object has no attribute 'encode'
360 # In this case assume already binary type and do nothing
361 return value
362
363
364 def native_path_to_unix(*paths): # pragma: unix no cover
365 # on windows, uses cygpath to convert windows native paths to posix paths
366 if not on_win:
367 return path_identity(*paths)
368 from subprocess import PIPE, Popen
369 from shlex import split
370 command = 'cygpath --path -f -'
371 p = Popen(split(command), stdin=PIPE, stdout=PIPE, stderr=PIPE)
372 joined = ("%s" % os.pathsep).join(paths)
373 if hasattr(joined, 'encode'):
374 joined = joined.encode('utf-8')
375 stdout, stderr = p.communicate(input=joined)
376 rc = p.returncode
377 if rc != 0 or stderr:
378 from subprocess import CalledProcessError
379 message = "\n stdout: %s\n stderr: %s\n rc: %s\n" % (stdout, stderr, rc)
380 print(message, file=sys.stderr)
381 raise CalledProcessError(rc, command, message)
382 if hasattr(stdout, 'decode'):
383 stdout = stdout.decode('utf-8')
384 final = stdout.strip().split(':')
385 return final[0] if len(final) == 1 else tuple(final)
386
387
388 def path_identity(*paths):
389 return paths[0] if len(paths) == 1 else paths
390
391
392 on_win = bool(sys.platform == "win32")
393 PY2 = sys.version_info[0] == 2
394 if PY2: # pragma: py3 no cover
395 string_types = basestring, # NOQA
396
397 def iteritems(d, **kw):
398 return d.iteritems(**kw)
399 else: # pragma: py2 no cover
400 string_types = str,
401
402 def iteritems(d, **kw):
403 return iter(d.items(**kw))
404
405
406 def main():
407 command = sys.argv[1]
408 shell = sys.argv[2]
409 activator = Activator(shell)
410 remainder_args = sys.argv[3:] if len(sys.argv) >= 4 else ()
411 # if '-h' in remainder_args or '--help' in remainder_args:
412 # pass
413 if command == 'shell.activate':
414 if len(remainder_args) > 1:
415 from .exceptions import ArgumentError
416 raise ArgumentError("activate only accepts a single argument")
417 print(activator.activate(remainder_args and remainder_args[0] or "root"))
418 elif command == 'shell.deactivate':
419 if remainder_args:
420 from .exceptions import ArgumentError
421 raise ArgumentError("deactivate does not accept arguments")
422 print(activator.deactivate())
423 elif command == 'shell.reactivate':
424 if remainder_args:
425 from .exceptions import ArgumentError
426 raise ArgumentError("reactivate does not accept arguments")
427 print(activator.reactivate())
428 else:
429 raise NotImplementedError()
430 return 0
431
432
433 if __name__ == '__main__':
434 sys.exit(main())
435
[end of conda/activate.py]
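For illustration, here is a minimal, dependency-free sketch of what `Activator._yield_commands()` does with the posix templates defined above; the environment name, paths, and variable values are made up, and real use goes through `conda shell.activate posix <name>` (see `main()` above).

```python
unset_var_tmpl = 'unset %s'
set_var_tmpl = 'export %s="%s"'
run_script_tmpl = '. "%s"'

# A made-up command dict of the shape build_activate() returns.
cmds_dict = {
    'unset_vars': ('CONDA_PREFIX_1',),
    'set_vars': {'CONDA_SHLVL': 1, 'CONDA_DEFAULT_ENV': 'myenv'},
    'activate_scripts': ('/opt/conda/envs/myenv/etc/conda/activate.d/post.sh',),
}

lines = []
for key in sorted(cmds_dict.get('unset_vars', ())):
    lines.append(unset_var_tmpl % key)
for key, value in sorted(cmds_dict.get('set_vars', {}).items()):
    lines.append(set_var_tmpl % (key, value))
for script in cmds_dict.get('deactivate_scripts', ()):
    lines.append(run_script_tmpl % script)
for script in cmds_dict.get('activate_scripts', ()):
    lines.append(run_script_tmpl % script)

print('\n'.join(lines))
# unset CONDA_PREFIX_1
# export CONDA_DEFAULT_ENV="myenv"
# export CONDA_SHLVL="1"
# . "/opt/conda/envs/myenv/etc/conda/activate.d/post.sh"
```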
[start of conda/cli/main.py]
1 # (c) Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6 """conda is a tool for managing environments and packages.
7
8 conda provides the following commands:
9
10 Information
11 ===========
12
13 info : display information about the current install
14 list : list packages linked into a specified environment
15 search : print information about a specified package
16 help : display a list of available conda commands and their help
17 strings
18
19 Package Management
20 ==================
21
22 create : create a new conda environment from a list of specified
23 packages
24 install : install new packages into an existing conda environment
25 update : update packages in a specified conda environment
26
27
28 Packaging
29 =========
30
31 package : create a conda package in an environment
32
33 Additional help for each command can be accessed by using:
34
35 conda <command> -h
36 """
37 from __future__ import absolute_import, division, print_function, unicode_literals
38 import sys
39
40
41 def generate_parser():
42 from argparse import SUPPRESS
43
44 from .. import __version__
45 from .conda_argparse import ArgumentParser
46
47 p = ArgumentParser(
48 description='conda is a tool for managing and deploying applications,'
49 ' environments and packages.',
50 )
51 p.add_argument(
52 '-V', '--version',
53 action='version',
54 version='conda %s' % __version__,
55 help="Show the conda version number and exit."
56 )
57 p.add_argument(
58 "--debug",
59 action="store_true",
60 help=SUPPRESS,
61 )
62 p.add_argument(
63 "--json",
64 action="store_true",
65 help=SUPPRESS,
66 )
67 sub_parsers = p.add_subparsers(
68 metavar='command',
69 dest='cmd',
70 )
71 # http://bugs.python.org/issue9253
72 # http://stackoverflow.com/a/18283730/1599393
73 sub_parsers.required = True
74
75 return p, sub_parsers
76
77
78 def _main(*args):
79 import importlib
80 from logging import CRITICAL, DEBUG, getLogger
81
82 try:
83 from cytoolz.itertoolz import concatv
84 except ImportError: # pragma: no cover
85 from .._vendor.toolz.itertoolz import concatv
86
87 from ..base.constants import SEARCH_PATH
88 from ..base.context import context
89 from ..gateways.logging import set_all_logger_level, set_verbosity
90
91 log = getLogger(__name__)
92
93 if len(args) == 1:
94 args = args + ('-h',)
95
96 p, sub_parsers = generate_parser()
97
98 main_modules = ["info", "help", "list", "search", "create", "install", "update",
99 "remove", "config", "clean", "package"]
100 modules = ["conda.cli.main_"+suffix for suffix in main_modules]
101 for module in modules:
102 imported = importlib.import_module(module)
103 imported.configure_parser(sub_parsers)
104 if "update" in module:
105 imported.configure_parser(sub_parsers, name='upgrade')
106 if "remove" in module:
107 imported.configure_parser(sub_parsers, name='uninstall')
108
109 from .find_commands import find_commands
110
111 # when using sys.argv, first argument is generally conda or __main__.py. Ignore it.
112 if (any(sname in args[0] for sname in ('conda', 'conda.exe', '__main__.py', 'conda-script.py'))
113 and (args[1] in concatv(sub_parsers.choices, find_commands())
114 or args[1].startswith('-'))):
115 log.debug("Ignoring first argument (%s), as it is not a subcommand", args[0])
116 args = args[1:]
117
118 args = p.parse_args(args)
119
120 context.__init__(SEARCH_PATH, 'conda', args)
121
122 if getattr(args, 'json', False):
123 # Silence logging info to avoid interfering with JSON output
124 for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):
125 getLogger(logger).setLevel(CRITICAL + 1)
126
127 if context.debug:
128 set_all_logger_level(DEBUG)
129 elif context.verbosity:
130 set_verbosity(context.verbosity)
131 log.debug("verbosity set to %s", context.verbosity)
132
133 exit_code = args.func(args, p)
134 if isinstance(exit_code, int):
135 return exit_code
136
137
138 def _ensure_text_type(value):
139 # copying here from conda/common/compat.py to avoid the import
140 try:
141 return value.decode('utf-8')
142 except AttributeError:
143 # AttributeError: '<>' object has no attribute 'decode'
144 # In this case assume already text_type and do nothing
145 return value
146 except UnicodeDecodeError:
147 from requests.packages.chardet import detect
148 encoding = detect(value).get('encoding') or 'utf-8'
149 return value.decode(encoding)
150
151
152 def main(*args):
153 if not args:
154 args = sys.argv
155
156 args = tuple(_ensure_text_type(s) for s in args)
157
158 if len(args) > 1:
159 try:
160 argv1 = args[1].strip()
161 if argv1.startswith('shell.'):
162 from ..activate import main as activator_main
163 return activator_main()
164 elif argv1.startswith('..'):
165 import conda.cli.activate as activate
166 activate.main()
167 return
168 if argv1 in ('activate', 'deactivate'):
169 from ..exceptions import CommandNotFoundError
170 raise CommandNotFoundError(argv1)
171 except Exception as e:
172 from ..exceptions import handle_exception
173 from ..gateways import initialize_logging
174 initialize_logging()
175 return handle_exception(e)
176
177 from ..exceptions import conda_exception_handler
178 return conda_exception_handler(_main, *args)
179
180
181 if __name__ == '__main__':
182 sys.exit(main())
183
[end of conda/cli/main.py]
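As a side note, the subcommand wiring in `generate_parser()` follows the standard argparse pattern sketched below. This uses plain argparse with a made-up `info` handler rather than conda's real modules, and includes the `sub_parsers.required = True` workaround referenced above (http://bugs.python.org/issue9253).

```python
from argparse import ArgumentParser


def execute_info(args, parser):
    # stand-in for the real conda.cli.main_info.execute
    print('info subcommand called')


p = ArgumentParser(description='toy parser mirroring generate_parser()')
sub_parsers = p.add_subparsers(metavar='command', dest='cmd')
sub_parsers.required = True

info_parser = sub_parsers.add_parser('info', help='display information')
info_parser.set_defaults(func=execute_info)

args = p.parse_args(['info'])
args.func(args, p)   # prints: info subcommand called
```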
[start of conda/cli/main_help.py]
1 # (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6 from __future__ import print_function, division, absolute_import, unicode_literals
7
8 descr = "Displays a list of available conda commands and their help strings."
9
10 example = """
11 Examples:
12
13 conda help install
14 """
15
16
17 def configure_parser(sub_parsers):
18 p = sub_parsers.add_parser(
19 'help',
20 description=descr,
21 help=descr,
22 epilog=example,
23 )
24 p.add_argument(
25 'command',
26 metavar='COMMAND',
27 action="store",
28 nargs='?',
29 help="""Print help information for COMMAND (same as: conda COMMAND
30 --help).""",
31 )
32 p.set_defaults(func=execute)
33
34
35 def execute(args, parser):
36 if not args.command:
37 parser.print_help()
38 return
39
40 import sys
41 import subprocess
42
43 subprocess.call([sys.executable, sys.argv[0], args.command, '-h'])
44
[end of conda/cli/main_help.py]
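`execute()` above simply re-invokes the conda entry point with `-h` appended, so `conda help install` becomes `conda install -h`. A runnable stand-in that uses the current Python interpreter instead of a conda installation:

```python
import subprocess
import sys

# The real call is subprocess.call([sys.executable, sys.argv[0], args.command, '-h']).
rc = subprocess.call([sys.executable, '-c', "print('stand-in for: conda install -h')"])
assert rc == 0
```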
[start of conda/common/platform.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from collections import OrderedDict
5 import ctypes
6 from genericpath import exists
7 from glob import glob
8 from logging import getLogger
9 import os
10 import sys
11
12 from .compat import iteritems, on_win
13 from .._vendor.auxlib.decorators import memoize
14
15 log = getLogger(__name__)
16
17
18 def is_admin_on_windows(): # pragma: unix no cover
19 # http://stackoverflow.com/a/1026626/2127762
20 if not on_win: # pragma: no cover
21 return False
22 try:
23 from ctypes.windll.shell32 import IsUserAnAdmin
24 return IsUserAnAdmin() != 0
25 except ImportError as e:
26 log.debug('%r', e)
27 return 'unknown'
28 except Exception as e:
29 log.warn('%r', e)
30 return 'unknown'
31
32
33 @memoize
34 def linux_get_libc_version():
35 """
36 If on linux, returns (libc_family, version), otherwise (None, None)
37 """
38
39 if not sys.platform.startswith('linux'):
40 return None, None
41
42 from os import confstr, confstr_names, readlink
43
44 # Python 2.7 does not have either of these keys in confstr_names, so provide
45 # hard-coded defaults and assert if the key is in confstr_names but differs.
46 # These are defined by POSIX anyway so should never change.
47 confstr_names_fallback = OrderedDict([('CS_GNU_LIBC_VERSION', 2),
48 ('CS_GNU_LIBPTHREAD_VERSION', 3)])
49
50 val = None
51 for k, v in iteritems(confstr_names_fallback):
52 assert k not in confstr_names or confstr_names[k] == v, (
53 "confstr_names_fallback for %s is %s yet in confstr_names it is %s"
54 "" % (k, confstr_names_fallback[k], confstr_names[k])
55 )
56 try:
57 val = str(confstr(v))
58 except:
59 pass
60 else:
61 if val:
62 break
63
64 if not val:
65 # Weird, play it safe and assume glibc 2.5
66 family, version = 'glibc', '2.5'
67 log.warning("Failed to detect libc family and version, assuming %s/%s", family, version)
68 return family, version
69 family, version = val.split(' ')
70
71 # NPTL is just the name of the threading library, even though the
72 # version refers to that of uClibc. readlink() can help to try to
73 # figure out a better name instead.
74 if family == 'NPTL':
75 clibs = glob('/lib/libc.so*')
76 for clib in clibs:
77 clib = readlink(clib)
78 if exists(clib):
79 if clib.startswith('libuClibc'):
80 if version.startswith('0.'):
81 family = 'uClibc'
82 else:
83 family = 'uClibc-ng'
84 return family, version
85 # This could be some other C library; it is unlikely though.
86 family = 'uClibc'
87 log.warning("Failed to detect non-glibc family, assuming %s (%s)", family, version)
88 return family, version
89 return family, version
90
91
92 def get_free_space(dir_name):
93 """Return folder/drive free space (in bytes).
94 :param dir_name: the dir name need to check
95 :return: amount of free space
96
97 Examples:
98 >>> get_free_space(os.getcwd()) > 0
99 True
100 """
101 if on_win:
102 free_bytes = ctypes.c_ulonglong(0)
103 ctypes.windll.kernel32.GetDiskFreeSpaceExW(ctypes.c_wchar_p(dir_name), None, None,
104 ctypes.pointer(free_bytes))
105 return free_bytes.value
106 else:
107 st = os.statvfs(dir_name)
108 return st.f_bavail * st.f_frsize
109
[end of conda/common/platform.py]
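A small, Linux-oriented probe of the `confstr` key that `linux_get_libc_version()` reads before splitting the result into `(family, version)`; the sample value in the comment is illustrative, and other platforms simply skip the lookup:

```python
import os

names = getattr(os, 'confstr_names', {})
if 'CS_GNU_LIBC_VERSION' in names:
    val = os.confstr('CS_GNU_LIBC_VERSION')   # e.g. 'glibc 2.31'
    if val:
        family, version = val.split(' ')
        print(family, version)
else:
    print('CS_GNU_LIBC_VERSION not available on this platform')
```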
[start of conda/gateways/subprocess.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from collections import namedtuple
5 from logging import getLogger
6 import os
7 from os.path import abspath
8 from shlex import split as shlex_split
9 from subprocess import CalledProcessError, PIPE, Popen
10 import sys
11
12 from .logging import TRACE
13 from .. import ACTIVE_SUBPROCESSES
14 from .._vendor.auxlib.ish import dals
15 from ..common.compat import ensure_binary, ensure_text_type, iteritems, on_win, string_types
16
17 log = getLogger(__name__)
18 Response = namedtuple('Response', ('stdout', 'stderr', 'rc'))
19
20
21 def _split_on_unix(command):
22 # I guess windows doesn't like shlex.split
23 return command if on_win else shlex_split(command)
24
25
26 def _format_output(command_str, path, rc, stdout, stderr):
27 return dals("""
28 $ %s
29 ==> cwd: %s <==
30 ==> exit code: %d <==
31 ==> stdout <==
32 %s
33 ==> stderr <==
34 %s
35 """) % (command_str, path, rc, stdout, stderr)
36
37
38 def subprocess_call(command, env=None, path=None, stdin=None, raise_on_error=True):
39 """This utility function should be preferred for all conda subprocessing.
40 It handles multiple tricky details.
41 """
42 env = {str(k): str(v) for k, v in iteritems(env if env else os.environ)}
43 path = sys.prefix if path is None else abspath(path)
44 command_str = command if isinstance(command, string_types) else ' '.join(command)
45 command_arg = _split_on_unix(command) if isinstance(command, string_types) else command
46 log.debug("executing>> %s", command_str)
47 p = Popen(command_arg, cwd=path, stdin=PIPE, stdout=PIPE, stderr=PIPE, env=env)
48 ACTIVE_SUBPROCESSES.add(p)
49 stdin = ensure_binary(stdin) if isinstance(stdin, string_types) else None
50 stdout, stderr = p.communicate(input=stdin)
51 rc = p.returncode
52 ACTIVE_SUBPROCESSES.remove(p)
53 if raise_on_error and rc != 0:
54 log.info(_format_output(command_str, path, rc, stdout, stderr))
55 raise CalledProcessError(rc, command,
56 output=_format_output(command_str, path, rc, stdout, stderr))
57 if log.isEnabledFor(TRACE):
58 log.trace(_format_output(command_str, path, rc, stdout, stderr))
59
60 return Response(ensure_text_type(stdout), ensure_text_type(stderr), int(rc))
61
[end of conda/gateways/subprocess.py]
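Stripped of logging, environment handling, and error raising, the pattern `subprocess_call()` wraps looks like the dependency-free sketch below; the `echo` command assumes a POSIX system.

```python
from collections import namedtuple
from subprocess import PIPE, Popen

Response = namedtuple('Response', ('stdout', 'stderr', 'rc'))

p = Popen(['echo', 'hello'], stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()
resp = Response(stdout.decode('utf-8'), stderr.decode('utf-8'), p.returncode)
print(resp)   # Response(stdout='hello\n', stderr='', rc=0)
```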
[start of conda/models/version.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import operator as op
5 import re
6
7 from ..common.compat import string_types, zip, zip_longest
8 from ..exceptions import CondaValueError, InvalidVersionSpecError
9
10
11 # normalized_version() is needed by conda-env
12 # It is currently being pulled from resolve instead, but
13 # eventually it ought to come from here
14 def normalized_version(version):
15 return VersionOrder(version)
16
17
18 def ver_eval(vtest, spec):
19 return VersionSpec(spec).match(vtest)
20
21
22 version_check_re = re.compile(r'^[\*\.\+!_0-9a-z]+$')
23 version_split_re = re.compile('([0-9]+|[*]+|[^0-9*]+)')
24 version_cache = {}
25
26
27 class VersionOrder(object):
28 """
29 This class implements an order relation between version strings.
30 Version strings can contain the usual alphanumeric characters
31 (A-Za-z0-9), separated into components by dots and underscores. Empty
32 segments (i.e. two consecutive dots, a leading/trailing underscore)
33 are not permitted. An optional epoch number - an integer
34 followed by '!' - can precede the actual version string
35 (this is useful to indicate a change in the versioning
36 scheme itself). Version comparison is case-insensitive.
37
38 Conda supports six types of version strings:
39
40 * Release versions contain only integers, e.g. '1.0', '2.3.5'.
41 * Pre-release versions use additional letters such as 'a' or 'rc',
42 for example '1.0a1', '1.2.beta3', '2.3.5rc3'.
43 * Development versions are indicated by the string 'dev',
44 for example '1.0dev42', '2.3.5.dev12'.
45 * Post-release versions are indicated by the string 'post',
46 for example '1.0post1', '2.3.5.post2'.
47 * Tagged versions have a suffix that specifies a particular
48 property of interest, e.g. '1.1.parallel'. Tags can be added
49 to any of the preceding four types. As far as sorting is concerned,
50 tags are treated like strings in pre-release versions.
51 * An optional local version string separated by '+' can be appended
52 to the main (upstream) version string. It is only considered
53 in comparisons when the main versions are equal, but otherwise
54 handled in exactly the same manner.
55
56 To obtain a predictable version ordering, it is crucial to keep the
57 version number scheme of a given package consistent over time.
58 Specifically,
59
60 * version strings should always have the same number of components
61 (except for an optional tag suffix or local version string),
62 * letters/strings indicating non-release versions should always
63 occur at the same position.
64
65 Before comparison, version strings are parsed as follows:
66
67 * They are first split into epoch, version number, and local version
68 number at '!' and '+' respectively. If there is no '!', the epoch is
69 set to 0. If there is no '+', the local version is empty.
70 * The version part is then split into components at '.' and '_'.
71 * Each component is split again into runs of numerals and non-numerals
72 * Subcomponents containing only numerals are converted to integers.
73 * Strings are converted to lower case, with special treatment for 'dev'
74 and 'post'.
75 * When a component starts with a letter, the fillvalue 0 is inserted
76 to keep numbers and strings in phase, resulting in '1.1.a1' == '1.1.0a1'.
77 * The same is repeated for the local version part.
78
79 Examples:
80
81 1.2g.beta15.rc => [[0], [1], [2, 'g'], [0, 'beta', 15], [0, 'rc']]
82 1!2.15.1_ALPHA => [[1], [2], [15], [1, '_alpha']]
83
84 The resulting lists are compared lexicographically, where the following
85 rules are applied to each pair of corresponding subcomponents:
86
87 * integers are compared numerically
88 * strings are compared lexicographically, case-insensitive
89 * strings are smaller than integers, except
90 * 'dev' versions are smaller than all corresponding versions of other types
91 * 'post' versions are greater than all corresponding versions of other types
92 * if a subcomponent has no correspondent, the missing correspondent is
93 treated as integer 0 to ensure '1.1' == '1.1.0'.
94
95 The resulting order is:
96
97 0.4
98 < 0.4.0
99 < 0.4.1.rc
100 == 0.4.1.RC # case-insensitive comparison
101 < 0.4.1
102 < 0.5a1
103 < 0.5b3
104 < 0.5C1 # case-insensitive comparison
105 < 0.5
106 < 0.9.6
107 < 0.960923
108 < 1.0
109 < 1.1dev1 # special case 'dev'
110 < 1.1a1
111 < 1.1.0dev1 # special case 'dev'
112 == 1.1.dev1 # 0 is inserted before string
113 < 1.1.a1
114 < 1.1.0rc1
115 < 1.1.0
116 == 1.1
117 < 1.1.0post1 # special case 'post'
118 == 1.1.post1 # 0 is inserted before string
119 < 1.1post1 # special case 'post'
120 < 1996.07.12
121 < 1!0.4.1 # epoch increased
122 < 1!3.1.1.6
123 < 2!0.4.1 # epoch increased again
124
125 Some packages (most notably openssl) have incompatible version conventions.
126 In particular, openssl interprets letters as version counters rather than
127 pre-release identifiers. For openssl, the relation
128
129 1.0.1 < 1.0.1a => True # for openssl
130
131 holds, whereas conda packages use the opposite ordering. You can work around
132 this problem by appending a dash to plain version numbers:
133
134 1.0.1a => 1.0.1post.a # ensure correct ordering for openssl
135 """
136
137 def __new__(cls, vstr):
138 if isinstance(vstr, cls):
139 return vstr
140 self = version_cache.get(vstr)
141 if self is not None:
142 return self
143
144 message = "Malformed version string '%s': " % vstr
145 # version comparison is case-insensitive
146 version = vstr.strip().rstrip().lower()
147 # basic validity checks
148 if version == '':
149 raise CondaValueError("Empty version string.")
150 invalid = not version_check_re.match(version)
151 if invalid and '-' in version and '_' not in version:
152 # Allow for dashes as long as there are no underscores
153 # as well, by converting the former to the latter.
154 version = version.replace('-', '_')
155 invalid = not version_check_re.match(version)
156 if invalid:
157 raise CondaValueError(message + "invalid character(s).")
158 self = version_cache.get(version)
159 if self is not None:
160 version_cache[vstr] = self
161 return self
162
163 # when fillvalue == 0 => 1.1 == 1.1.0
164 # when fillvalue == -1 => 1.1 < 1.1.0
165 self = version_cache[vstr] = version_cache[version] = object.__new__(cls)
166 self.norm_version = version
167 self.fillvalue = 0
168
169 # find epoch
170 version = version.split('!')
171 if len(version) == 1:
172 # epoch not given => set it to '0'
173 epoch = ['0']
174 elif len(version) == 2:
175 # epoch given, must be an integer
176 if not version[0].isdigit():
177 raise CondaValueError(message + "epoch must be an integer.")
178 epoch = [version[0]]
179 else:
180 raise CondaValueError(message + "duplicated epoch separator '!'.")
181
182 # find local version string
183 version = version[-1].split('+')
184 if len(version) == 1:
185 # no local version
186 self.local = []
187 elif len(version) == 2:
188 # local version given
189 self.local = version[1].replace('_', '.').split('.')
190 else:
191 raise CondaValueError(message + "duplicated local version separator '+'.")
192
193 # split version
194 self.version = epoch + version[0].replace('_', '.').split('.')
195
196 # split components into runs of numerals and non-numerals,
197 # convert numerals to int, handle special strings
198 for v in (self.version, self.local):
199 for k in range(len(v)):
200 c = version_split_re.findall(v[k])
201 if not c:
202 raise CondaValueError(message + "empty version component.")
203 for j in range(len(c)):
204 if c[j].isdigit():
205 c[j] = int(c[j])
206 elif c[j] == 'post':
207 # ensure number < 'post' == infinity
208 c[j] = float('inf')
209 elif c[j] == 'dev':
210 # ensure '*' < 'DEV' < '_' < 'a' < number
211 # by upper-casing (all other strings are lower case)
212 c[j] = 'DEV'
213 if v[k][0].isdigit():
214 v[k] = c
215 else:
216 # components shall start with a number to keep numbers and
217 # strings in phase => prepend fillvalue
218 v[k] = [self.fillvalue] + c
219
220 return self
221
222 def __str__(self):
223 return self.norm_version
224
225 def _eq(self, t1, t2):
226 for v1, v2 in zip_longest(t1, t2, fillvalue=[]):
227 for c1, c2 in zip_longest(v1, v2, fillvalue=self.fillvalue):
228 if c1 != c2:
229 return False
230 return True
231
232 def __eq__(self, other):
233 return (self._eq(self.version, other.version) and
234 self._eq(self.local, other.local))
235
236 def startswith(self, other):
237 # Tests if the version lists match up to the last element in "other".
238 if other.local:
239 if not self._eq(self.version, other.version):
240 return False
241 t1 = self.local
242 t2 = other.local
243 else:
244 t1 = self.version
245 t2 = other.version
246 nt = len(t2) - 1
247 if not self._eq(t1[:nt], t2[:nt]):
248 return False
249 v1 = [] if len(t1) <= nt else t1[nt]
250 v2 = t2[nt]
251 nt = len(v2) - 1
252 if not self._eq([v1[:nt]], [v2[:nt]]):
253 return False
254 c1 = self.fillvalue if len(v1) <= nt else v1[nt]
255 c2 = v2[nt]
256 if isinstance(c2, string_types):
257 return isinstance(c1, string_types) and c1.startswith(c2)
258 return c1 == c2
259
260 def __ne__(self, other):
261 return not (self == other)
262
263 def __lt__(self, other):
264 for t1, t2 in zip([self.version, self.local], [other.version, other.local]):
265 for v1, v2 in zip_longest(t1, t2, fillvalue=[]):
266 for c1, c2 in zip_longest(v1, v2, fillvalue=self.fillvalue):
267 if c1 == c2:
268 continue
269 elif isinstance(c1, string_types):
270 if not isinstance(c2, string_types):
271 # str < int
272 return True
273 elif isinstance(c2, string_types):
274 # not (int < str)
275 return False
276 # c1 and c2 have the same type
277 return c1 < c2
278 # self == other
279 return False
280
281 def __gt__(self, other):
282 return other < self
283
284 def __le__(self, other):
285 return not (other < self)
286
287 def __ge__(self, other):
288 return not (self < other)
289
290
291 # each token slurps up leading whitespace, which we strip out.
292 VSPEC_TOKENS = (r'\s*\^[^$]*[$]|' # regexes
293 r'\s*[()|,]|' # parentheses, logical and, logical or
294 r'[^()|,]+') # everything else
295
296
297 def treeify(spec):
298 # Converts a VersionSpec expression string into a tuple-based
299 # expression tree.
300 tokens = re.findall(VSPEC_TOKENS, '(%s)' % spec)
301 output = []
302 stack = []
303
304 def apply_ops(cstop):
305 # cstop: operators with lower precedence
306 while stack and stack[-1] not in cstop:
307 if len(output) < 2:
308 raise InvalidVersionSpecError(spec)
309 c = stack.pop()
310 r = output.pop()
311 # Fuse expressions with the same operator; e.g.,
312             # ('|', ('|', a, b), ('|', c, d)) becomes
313             # ('|', a, b, c, d)
314 # We're playing a bit of a trick here. Instead of checking
315 # if the left or right entries are tuples, we're counting
316 # on the fact that if we _do_ see a string instead, its
317 # first character cannot possibly be equal to the operator.
318 r = r[1:] if r[0] == c else (r,)
319 l = output.pop()
320 l = l[1:] if l[0] == c else (l,)
321 output.append((c,)+l+r)
322
323 for item in tokens:
324 item = item.strip()
325 if item == '|':
326 apply_ops('(')
327 stack.append('|')
328 elif item == ',':
329 apply_ops('|(')
330 stack.append(',')
331 elif item == '(':
332 stack.append('(')
333 elif item == ')':
334 apply_ops('(')
335 if not stack or stack[-1] != '(':
336 raise InvalidVersionSpecError(spec)
337 stack.pop()
338 else:
339 output.append(item)
340 if stack:
341 raise InvalidVersionSpecError(spec)
342 return output[0]
343
344
345 def untreeify(spec, inand=False):
346 if isinstance(spec, tuple):
347 if spec[0] == '|':
348 res = '|'.join(map(untreeify, spec[1:]))
349 if inand:
350 res = '(%s)' % res
351 else:
352 res = ','.join(map(lambda x: untreeify(x, True), spec[1:]))
353 return res
354 return spec
355
356
357 # This RE matches the operators '==', '!=', '<=', '>=', '<', '>'
358 # followed by a version string. It rejects expressions like
359 # '<= 1.2' (space after operator), '<>1.2' (unknown operator),
360 # and '<=!1.2' (nonsensical operator).
361 version_relation_re = re.compile(r'(==|!=|<=|>=|<|>)(?![=<>!])(\S+)$')
362 regex_split_re = re.compile(r'.*[()|,^$]')
363 opdict = {'==': op.__eq__, '!=': op.__ne__, '<=': op.__le__,
364 '>=': op.__ge__, '<': op.__lt__, '>': op.__gt__}
365
366
367 class VersionSpec(object):
368 def exact_match_(self, vspec):
369 return self.spec == vspec
370
371 def regex_match_(self, vspec):
372 return bool(self.regex.match(vspec))
373
374 def veval_match_(self, vspec):
375 return self.op(VersionOrder(vspec), self.cmp)
376
377 def all_match_(self, vspec):
378 return all(s.match(vspec) for s in self.tup)
379
380 def any_match_(self, vspec):
381 return any(s.match(vspec) for s in self.tup)
382
383 def triv_match_(self, vspec):
384 return True
385
386 def __new__(cls, spec):
387 if isinstance(spec, cls):
388 return spec
389 if isinstance(spec, string_types) and regex_split_re.match(spec):
390 spec = treeify(spec)
391 if isinstance(spec, tuple):
392 self = object.__new__(cls)
393 self.tup = tuple(VersionSpec(s) for s in spec[1:])
394 self.match = self.any_match_ if spec[0] == '|' else self.all_match_
395 self.spec = untreeify(spec)
396 return self
397 self = object.__new__(cls)
398 self.spec = spec = spec.strip()
399 if spec.startswith('^') or spec.endswith('$'):
400 if not spec.startswith('^') or not spec.endswith('$'):
401 raise InvalidVersionSpecError(spec)
402 self.regex = re.compile(spec)
403 self.match = self.regex_match_
404 elif spec.startswith(('=', '<', '>', '!')):
405 m = version_relation_re.match(spec)
406 if m is None:
407 raise InvalidVersionSpecError(spec)
408 op, b = m.groups()
409 self.op = opdict[op]
410 self.cmp = VersionOrder(b)
411 self.match = self.veval_match_
412 elif spec == '*':
413 self.match = self.triv_match_
414 elif '*' in spec.rstrip('*'):
415 self.spec = spec
416 rx = spec.replace('.', r'\.')
417 rx = rx.replace('+', r'\+')
418 rx = rx.replace('*', r'.*')
419 rx = r'^(?:%s)$' % rx
420 self.regex = re.compile(rx)
421 self.match = self.regex_match_
422 elif spec.endswith('*'):
423 self.op = VersionOrder.startswith
424 self.cmp = VersionOrder(spec.rstrip('*').rstrip('.'))
425 self.match = self.veval_match_
426 else:
427 self.match = self.exact_match_
428 return self
429
430 def is_exact(self):
431 return self.match == self.exact_match_
432
433 def __eq__(self, other):
434 if isinstance(other, VersionSpec):
435 return self.spec == other.spec
436 return False
437
438 def __ne__(self, other):
439 if isinstance(other, VersionSpec):
440 return self.spec != other.spec
441 return True
442
443 def __hash__(self):
444 return hash(self.spec)
445
446 def __str__(self):
447 return self.spec
448
449 def __repr__(self):
450 return "VersionSpec('%s')" % self.spec
451
[end of conda/models/version.py]
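Assuming this conda checkout (or an installed conda) is importable, the ordering documented in the `VersionOrder` docstring and the boolean grammar handled by `treeify()` can be exercised directly:

```python
from conda.models.version import VersionOrder, VersionSpec

assert VersionOrder('0.4.1.rc') == VersionOrder('0.4.1.RC')   # case-insensitive
assert VersionOrder('1.1dev1') < VersionOrder('1.1a1')        # 'dev' sorts before everything
assert VersionOrder('1.1') == VersionOrder('1.1.0')           # missing components act as 0
assert VersionOrder('1!0.4.1') > VersionOrder('1996.07.12')   # epoch dominates

# ',' binds tighter than '|': (>=1.11.0 AND <2) OR >2.5
spec = VersionSpec('>=1.11.0,<2|>2.5')
assert spec.match('1.12.0')
assert not spec.match('2.1')
assert spec.match('2.6')
```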
[start of conda_env/installers/pip.py]
1 from __future__ import absolute_import
2
3 import os
4 import os.path as op
5 import subprocess
6 import tempfile
7 from conda_env.pip_util import pip_args
8 from conda.exceptions import CondaValueError
9
10
11 def _pip_install_via_requirements(prefix, specs, args, *_):
12 """
13 Installs the pip dependencies in specs using a temporary pip requirements file.
14
15 Args
16 ----
17 prefix: string
18 The path to the python and pip executables.
19
20 specs: iterable of strings
21 Each element should be a valid pip dependency.
22 See: https://pip.pypa.io/en/stable/user_guide/#requirements-files
23 https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format
24 """
25 try:
26 pip_workdir = op.dirname(op.abspath(args.file))
27 except AttributeError:
28 pip_workdir = None
29 requirements = None
30 try:
31 # Generate the temporary requirements file
32 requirements = tempfile.NamedTemporaryFile(mode='w',
33 prefix='condaenv.',
34 suffix='.requirements.txt',
35 dir=pip_workdir,
36 delete=False)
37 requirements.write('\n'.join(specs))
38 requirements.close()
39 # pip command line...
40 pip_cmd = pip_args(prefix) + ['install', '-r', requirements.name]
41 # ...run it
42 process = subprocess.Popen(pip_cmd,
43 cwd=pip_workdir,
44 universal_newlines=True)
45 process.communicate()
46 if process.returncode != 0:
47 raise CondaValueError("pip returned an error")
48 finally:
49 # Win/Appveyor does not like it if we use context manager + delete=True.
50 # So we delete the temporary file in a finally block.
51 if requirements is not None and op.isfile(requirements.name):
52 os.remove(requirements.name)
53
54
55 # Conform to Installers API
56 install = _pip_install_via_requirements
57
[end of conda_env/installers/pip.py]
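A dependency-free sketch of the temporary-requirements-file dance performed by `_pip_install_via_requirements()`; the package specs are invented, and the actual pip invocation is left commented out so the snippet has no side effects:

```python
import os
import sys
import tempfile

specs = ['requests>=2.0', 'six']          # illustrative pip dependencies

req = tempfile.NamedTemporaryFile(mode='w', prefix='condaenv.',
                                  suffix='.requirements.txt', delete=False)
try:
    req.write('\n'.join(specs))
    req.close()
    pip_cmd = [sys.executable, '-m', 'pip', 'install', '-r', req.name]
    print('would run:', ' '.join(pip_cmd))
    # subprocess.Popen(pip_cmd).communicate() is roughly what the installer does
finally:
    # mirror the finally-block cleanup above (Windows dislikes delete=True here)
    if os.path.isfile(req.name):
        os.remove(req.name)
```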
[start of conda_env/pip_util.py]
1 """
2 Functions related to core conda functionality that relates to pip
3
4 NOTE: This module used to be in conda, as conda/pip.py
5 """
6 from __future__ import absolute_import, print_function
7
8 import json
9 import os
10 from os.path import isfile, join
11 import subprocess
12 import sys
13
14
15 def pip_args(prefix):
16 """
17 return the arguments required to invoke pip (in prefix), or None if pip
18 is not installed
19 """
20 if sys.platform == 'win32':
21 pip_path = join(prefix, 'Scripts', 'pip-script.py')
22 py_path = join(prefix, 'python.exe')
23 else:
24 pip_path = join(prefix, 'bin', 'pip')
25 py_path = join(prefix, 'bin', 'python')
26 if isfile(pip_path) and isfile(py_path):
27 ret = [py_path, pip_path]
28
29 # Check the version of pip
30 # --disable-pip-version-check was introduced in pip 6.0
31 # If older than that, they should probably get the warning anyway.
32 pip_version = subprocess.check_output(ret + ['-V']).decode('utf-8').split()[1]
33 major_ver = pip_version.split('.')[0]
34 if int(major_ver) >= 6:
35 ret.append('--disable-pip-version-check')
36 return ret
37 else:
38 return None
39
40
41 class PipPackage(dict):
42 def __str__(self):
43 if 'path' in self:
44 return '%s (%s)-%s-<pip>' % (
45 self['name'],
46 self['path'],
47 self['version']
48 )
49 return '%s-%s-<pip>' % (self['name'], self['version'])
50
51
52 def installed(prefix, output=True):
53 args = pip_args(prefix)
54 if args is None:
55 return
56
57 env = os.environ.copy()
58 env[str('PIP_FORMAT')] = str('legacy')
59
60 args += ['list', '--format', 'json']
61
62 try:
63 s = subprocess.check_output(args, universal_newlines=True, env=env)
64 except Exception:
65 # Any error should just be ignored
66 if output:
67 print("# Warning: subprocess call to pip failed")
68 return
69 pkgs = json.loads(s)
70
71 # For every package in pipinst that is not already represented
72 # in installed append a fake name to installed with 'pip'
73 # as the build string
74 for kwargs in pkgs:
75 kwargs['name'] = kwargs['name'].lower()
76 if ', ' in kwargs['version']:
77 # Packages installed with setup.py develop will include a path in
78 # the version. They should be included here, even if they are
79 # installed with conda, as they are preferred over the conda
80 # version. We still include the conda version, though, because it
81 # is still installed.
82
83 version, path = kwargs['version'].split(', ')
84 # We do this because the code below uses rsplit('-', 2)
85 version = version.replace('-', ' ')
86 kwargs['version'] = version
87 kwargs['path'] = path
88 yield PipPackage(**kwargs)
89
90
91 def add_pip_installed(prefix, installed_pkgs, json=None, output=True):
92 # Defer to json for backwards compatibility
93 if isinstance(json, bool):
94 output = not json
95
96 # TODO Refactor so installed is a real list of objects/dicts
97 # instead of strings allowing for direct comparison
98 # split :: to get rid of channel info
99 conda_names = {d.quad[0] for d in installed_pkgs}
100 for pip_pkg in installed(prefix, output=output):
101 if pip_pkg['name'] in conda_names and 'path' not in pip_pkg:
102 continue
103 installed_pkgs.add(str(pip_pkg))
104
[end of conda_env/pip_util.py]
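For reference, this is how `installed()` consumes `pip list --format json`, shown here against the current interpreter's pip rather than a conda env prefix (it assumes pip is available):

```python
import json
import subprocess
import sys

out = subprocess.check_output(
    [sys.executable, '-m', 'pip', 'list', '--format', 'json'],
    universal_newlines=True)
for pkg in json.loads(out):
    print('%s-%s-<pip>' % (pkg['name'].lower(), pkg['version']))
```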
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: conda/conda
base_commit: 98c6d80f3299edf775b495f90651d558248d2cf8
problem_statement: conda should exec to non-conda subcommands, not subprocess
created_at: 2017-05-18T13:17:36Z
<patch>
diff --git a/conda/cli/conda_argparse.py b/conda/cli/conda_argparse.py
--- a/conda/cli/conda_argparse.py
+++ b/conda/cli/conda_argparse.py
@@ -45,7 +45,6 @@ def _get_action_from_name(self, name):
def error(self, message):
import re
- import subprocess
from .find_commands import find_executable
exc = sys.exc_info()[1]
@@ -57,7 +56,7 @@ def error(self, message):
else:
argument = None
if argument and argument.dest == "cmd":
- m = re.compile(r"invalid choice: '([\w\-]+)'").match(exc.message)
+ m = re.compile(r"invalid choice: u?'([\w\-]+)'").match(exc.message)
if m:
cmd = m.group(1)
executable = find_executable('conda-' + cmd)
@@ -67,13 +66,7 @@ def error(self, message):
args = [find_executable('conda-' + cmd)]
args.extend(sys.argv[2:])
- p = subprocess.Popen(args)
- try:
- p.communicate()
- except KeyboardInterrupt:
- p.wait()
- finally:
- sys.exit(p.returncode)
+ os.execv(args[0], args)
super(ArgumentParser, self).error(message)
</patch>
instance_id: pandas-dev__pandas-9743
<issue>
[] (__getitem__) boolean indexing assignment bug with nans
See repro below:
``` python
import pandas as pd
import numpy as np
temp = pd.Series(np.random.randn(10))
temp[3:6] = np.nan
temp[8] = np.nan
nan_index = np.isnan(temp)
# this works
temp1 = temp.copy()
temp1[nan_index] = [99, 99, 99, 99]
temp1[nan_index]
3 99
4 99
5 99
8 99
dtype: float64
# this doesn't - values look like they're being assigned in a different order?
temp2 = temp.copy()
temp2[nan_index] = [99, 99, 99, np.nan]
3 NaN
4 99
5 99
8 99
dtype: float64
# ... but it works properly when using .loc
temp2 = temp.copy()
temp2.loc[nan_index] = [99, 99, 99, np.nan]
3 99
4 99
5 99
8 NaN
dtype: float64
```
output of show_versions():
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.9.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.16.0
nose: 1.3.4
Cython: 0.21.2
numpy: 1.9.2
scipy: 0.14.0
statsmodels: 0.5.0
IPython: 3.0.0
sphinx: 1.2.3
patsy: 0.2.1
dateutil: 2.4.1
pytz: 2015.2
bottleneck: 0.8.0
tables: 3.1.1
numexpr: 2.3.1
matplotlib: 1.4.0
openpyxl: 2.0.2
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.6.6
lxml: 3.4.2
bs4: 4.3.2
html5lib: 0.999
httplib2: 0.8
apiclient: None
sqlalchemy: 0.9.8
pymysql: None
psycopg2: None
```
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 ![Travis-CI Build Status](https://travis-ci.org/pydata/pandas.svg)
4
5 ## What is it
6
7 **pandas** is a Python package providing fast, flexible, and expressive data
8 structures designed to make working with "relational" or "labeled" data both
9 easy and intuitive. It aims to be the fundamental high-level building block for
10 doing practical, **real world** data analysis in Python. Additionally, it has
11 the broader goal of becoming **the most powerful and flexible open source data
12 analysis / manipulation tool available in any language**. It is already well on
13 its way toward this goal.
14
15 ## Main Features
16 Here are just a few of the things that pandas does well:
17
18 - Easy handling of [**missing data**][missing-data] (represented as
19 `NaN`) in floating point as well as non-floating point data
20 - Size mutability: columns can be [**inserted and
21 deleted**][insertion-deletion] from DataFrame and higher dimensional
22 objects
23 - Automatic and explicit [**data alignment**][alignment]: objects can
24 be explicitly aligned to a set of labels, or the user can simply
25 ignore the labels and let `Series`, `DataFrame`, etc. automatically
26 align the data for you in computations
27 - Powerful, flexible [**group by**][groupby] functionality to perform
28 split-apply-combine operations on data sets, for both aggregating
29 and transforming data
30 - Make it [**easy to convert**][conversion] ragged,
31 differently-indexed data in other Python and NumPy data structures
32 into DataFrame objects
33 - Intelligent label-based [**slicing**][slicing], [**fancy
34 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
35 large data sets
36 - Intuitive [**merging**][merging] and [**joining**][joining] data
37 sets
38 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
39 data sets
40 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
41 labels per tick)
42 - Robust IO tools for loading data from [**flat files**][flat-files]
43 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
44 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
45 - [**Time series**][timeseries]-specific functionality: date range
46 generation and frequency conversion, moving window statistics,
47 moving window linear regressions, date shifting and lagging, etc.
48
49
50 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
51 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
52 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
53 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
54 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
55 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
56 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
57 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
58 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
59 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
60 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
61 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
62 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
63 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
64 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
65 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
66 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
67 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
68
69 ## Where to get it
70 The source code is currently hosted on GitHub at:
71 http://github.com/pydata/pandas
72
73 Binary installers for the latest released version are available at the Python
74 package index
75
76 http://pypi.python.org/pypi/pandas/
77
78 And via `easy_install`:
79
80 ```sh
81 easy_install pandas
82 ```
83
84 or `pip`:
85
86 ```sh
87 pip install pandas
88 ```
89
90 or `conda`:
91
92 ```sh
93 conda install pandas
94 ```
95
96 ## Dependencies
97 - [NumPy](http://www.numpy.org): 1.7.0 or higher
98 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
99 - [pytz](http://pytz.sourceforge.net)
100 - Needed for time zone support with ``pandas.date_range``
101
102 ### Highly Recommended Dependencies
103 - [numexpr](https://github.com/pydata/numexpr)
104 - Needed to accelerate some expression evaluation operations
105 - Required by PyTables
106 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
107 - Needed to accelerate certain numerical operations
108
109 ### Optional dependencies
110 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
111 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
112 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
113 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
114 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
115 - [statsmodels](http://statsmodels.sourceforge.net/)
116 - Needed for parts of `pandas.stats`
117 - For Excel I/O:
118 - [xlrd/xlwt](http://www.python-excel.org/)
119 - Excel reading (xlrd) and writing (xlwt)
120 - [openpyxl](http://packages.python.org/openpyxl/)
121 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
122 writing .xlsx files
123 - xlrd >= 0.9.0
124 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
125 - Alternative Excel writer.
126 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
127 - Needed for `pandas.io.gbq`
128 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
129 - One of the following combinations of libraries is needed to use the
130 top-level [`pandas.read_html`][read-html-docs] function:
131 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
132 recent version of [html5lib][html5lib] is okay.)
133 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
134 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
135 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
136 for reasons as to why you should probably **not** take this approach.
137
138 #### Notes about HTML parsing libraries
139 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
140 either [lxml][lxml] or [html5lib][html5lib] or both.
141 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
142 installed.
143 - You are strongly encouraged to read [HTML reading
144 gotchas][html-gotchas]. It explains issues surrounding the
145 installation and usage of the above three libraries.
146 - You may need to install an older version of
147 [BeautifulSoup4][BeautifulSoup4]:
148 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
149 32-bit Ubuntu/Debian
150 - Additionally, if you're using [Anaconda][Anaconda] you should
151 definitely read [the gotchas about HTML parsing][html-gotchas]
152 libraries
153 - If you're on a system with `apt-get` you can do
154
155 ```sh
156 sudo apt-get build-dep python-lxml
157 ```
158
159 to get the necessary dependencies for installation of [lxml][lxml].
160 This will prevent further headaches down the line.
161
162 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
163 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
164 [lxml]: http://lxml.de
165 [Anaconda]: https://store.continuum.io/cshop/anaconda
166 [NumPy]: http://numpy.scipy.org/
167 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
168 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
169
170 ## Installation from sources
171 To install pandas from source you need Cython in addition to the normal
172 dependencies above. Cython can be installed from pypi:
173
174 ```sh
175 pip install cython
176 ```
177
178 In the `pandas` directory (same one where you found this file after
179 cloning the git repo), execute:
180
181 ```sh
182 python setup.py install
183 ```
184
185 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
186
187 ```sh
188 python setup.py develop
189 ```
190
191 Alternatively, you can use `pip` if you want all the dependencies pulled
192 in automatically (the `-e` option is for installing it in [development
193 mode](http://www.pip-installer.org/en/latest/usage.html)):
194
195 ```sh
196 pip install -e .
197 ```
198
199 On Windows, you will need to install MinGW and execute:
200
201 ```sh
202 python setup.py build --compiler=mingw32
203 python setup.py install
204 ```
205
206 See http://pandas.pydata.org/ for more information.
207
208 ## License
209 BSD
210
211 ## Documentation
212 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
213
214 The Sphinx documentation should provide a good starting point for learning how
215 to use the library. Expect the docs to continue to expand as time goes on.
216
217 ## Background
218 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
219 has been under active development since then.
220
221 ## Discussion and Development
222 Since pandas development is related to a number of other scientific
223 Python projects, questions are welcome on the scipy-user mailing
224 list. Specialized discussions or design issues should take place on
225 the PyData mailing list / Google group:
226
227 https://groups.google.com/forum/#!forum/pydata
228
[end of README.md]
[start of doc/sphinxext/ipython_sphinxext/ipython_directive.py]
1 # -*- coding: utf-8 -*-
2 """
3 Sphinx directive to support embedded IPython code.
4
5 This directive allows pasting of entire interactive IPython sessions, prompts
6 and all, and their code will actually get re-executed at doc build time, with
7 all prompts renumbered sequentially. It also allows you to input code as a pure
8 python input by giving the argument python to the directive. The output looks
9 like an interactive ipython section.
10
11 To enable this directive, simply list it in your Sphinx ``conf.py`` file
12 (making sure the directory where you placed it is visible to sphinx, as is
13 needed for all Sphinx directives). For example, to enable syntax highlighting
14 and the IPython directive::
15
16 extensions = ['IPython.sphinxext.ipython_console_highlighting',
17 'IPython.sphinxext.ipython_directive']
18
19 The IPython directive outputs code-blocks with the language 'ipython'. So
20 if you do not have the syntax highlighting extension enabled as well, then
21 all rendered code-blocks will be uncolored. By default this directive assumes
22 that your prompts are unchanged IPython ones, but this can be customized.
23 The configurable options that can be placed in conf.py are:
24
25 ipython_savefig_dir:
26 The directory in which to save the figures. This is relative to the
27 Sphinx source directory. The default is `html_static_path`.
28 ipython_rgxin:
29 The compiled regular expression to denote the start of IPython input
30 lines. The default is re.compile('In \[(\d+)\]:\s?(.*)\s*'). You
31 shouldn't need to change this.
32 ipython_rgxout:
33 The compiled regular expression to denote the start of IPython output
34 lines. The default is re.compile('Out\[(\d+)\]:\s?(.*)\s*'). You
35 shouldn't need to change this.
36 ipython_promptin:
37 The string to represent the IPython input prompt in the generated ReST.
38 The default is 'In [%d]:'. This expects that the line numbers are used
39 in the prompt.
40 ipython_promptout:
41 The string to represent the IPython prompt in the generated ReST. The
 42     default is 'Out[%d]:'. This expects that the line numbers are used
43 in the prompt.
44 ipython_mplbackend:
45 The string which specifies if the embedded Sphinx shell should import
46 Matplotlib and set the backend. The value specifies a backend that is
47 passed to `matplotlib.use()` before any lines in `ipython_execlines` are
48 executed. If not specified in conf.py, then the default value of 'agg' is
49 used. To use the IPython directive without matplotlib as a dependency, set
50 the value to `None`. It may end up that matplotlib is still imported
51 if the user specifies so in `ipython_execlines` or makes use of the
52 @savefig pseudo decorator.
53 ipython_execlines:
54 A list of strings to be exec'd in the embedded Sphinx shell. Typical
55 usage is to make certain packages always available. Set this to an empty
56 list if you wish to have no imports always available. If specified in
57 conf.py as `None`, then it has the effect of making no imports available.
58 If omitted from conf.py altogether, then the default value of
59 ['import numpy as np', 'import matplotlib.pyplot as plt'] is used.
60 ipython_holdcount
61 When the @suppress pseudo-decorator is used, the execution count can be
62 incremented or not. The default behavior is to hold the execution count,
63 corresponding to a value of `True`. Set this to `False` to increment
64 the execution count after each suppressed command.
65
66 As an example, to use the IPython directive when `matplotlib` is not available,
67 one sets the backend to `None`::
68
69 ipython_mplbackend = None
70
71 An example usage of the directive is:
72
73 .. code-block:: rst
74
75 .. ipython::
76
77 In [1]: x = 1
78
79 In [2]: y = x**2
80
81 In [3]: print(y)
82
83 See http://matplotlib.org/sampledoc/ipython_directive.html for additional
84 documentation.
85
86 ToDo
87 ----
88
89 - Turn the ad-hoc test() function into a real test suite.
90 - Break up ipython-specific functionality from matplotlib stuff into better
91 separated code.
92
93 Authors
94 -------
95
 96 - John D Hunter: original author.
97 - Fernando Perez: refactoring, documentation, cleanups, port to 0.11.
 98 - Václav Šmilauer <eudoxos-AT-arcig.cz>: Prompt generalizations.
99 - Skipper Seabold, refactoring, cleanups, pure python addition
100 """
101 from __future__ import print_function
102 from __future__ import unicode_literals
103
104 #-----------------------------------------------------------------------------
105 # Imports
106 #-----------------------------------------------------------------------------
107
108 # Stdlib
109 import os
110 import re
111 import sys
112 import tempfile
113 import ast
114 from pandas.compat import zip, range, map, lmap, u, cStringIO as StringIO
115 import warnings
116
117 # To keep compatibility with various python versions
118 try:
119 from hashlib import md5
120 except ImportError:
121 from md5 import md5
122
123 # Third-party
124 import sphinx
125 from docutils.parsers.rst import directives
126 from docutils import nodes
127 from sphinx.util.compat import Directive
128
129 # Our own
130 from IPython import Config, InteractiveShell
131 from IPython.core.profiledir import ProfileDir
132 from IPython.utils import io
133 from IPython.utils.py3compat import PY3
134
135 if PY3:
136 from io import StringIO
137 text_type = str
138 else:
139 from StringIO import StringIO
140 text_type = unicode
141
142 #-----------------------------------------------------------------------------
143 # Globals
144 #-----------------------------------------------------------------------------
145 # for tokenizing blocks
146 COMMENT, INPUT, OUTPUT = range(3)
147
148 #-----------------------------------------------------------------------------
149 # Functions and class declarations
150 #-----------------------------------------------------------------------------
151
152 def block_parser(part, rgxin, rgxout, fmtin, fmtout):
153 """
154 part is a string of ipython text, comprised of at most one
 155     input, one output, comments, and blank lines. The block parser
156 parses the text into a list of::
157
158 blocks = [ (TOKEN0, data0), (TOKEN1, data1), ...]
159
160 where TOKEN is one of [COMMENT | INPUT | OUTPUT ] and
161 data is, depending on the type of token::
162
163 COMMENT : the comment string
164
165 INPUT: the (DECORATOR, INPUT_LINE, REST) where
166 DECORATOR: the input decorator (or None)
167 INPUT_LINE: the input as string (possibly multi-line)
168 REST : any stdout generated by the input line (not OUTPUT)
169
170 OUTPUT: the output string, possibly multi-line
171
172 """
173 block = []
174 lines = part.split('\n')
175 N = len(lines)
176 i = 0
177 decorator = None
178 while 1:
179
180 if i==N:
181 # nothing left to parse -- the last line
182 break
183
184 line = lines[i]
185 i += 1
186 line_stripped = line.strip()
187 if line_stripped.startswith('#'):
188 block.append((COMMENT, line))
189 continue
190
191 if line_stripped.startswith('@'):
192 # we're assuming at most one decorator -- may need to
193 # rethink
194 decorator = line_stripped
195 continue
196
197 # does this look like an input line?
198 matchin = rgxin.match(line)
199 if matchin:
200 lineno, inputline = int(matchin.group(1)), matchin.group(2)
201
202 # the ....: continuation string
203 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
204 Nc = len(continuation)
205 # input lines can continue on for more than one line, if
206 # we have a '\' line continuation char or a function call
207 # echo line 'print'. The input line can only be
208 # terminated by the end of the block or an output line, so
209 # we parse out the rest of the input line if it is
210 # multiline as well as any echo text
211
212 rest = []
213 while i<N:
214
215 # look ahead; if the next line is blank, or a comment, or
216 # an output line, we're done
217
218 nextline = lines[i]
219 matchout = rgxout.match(nextline)
220 #print "nextline=%s, continuation=%s, starts=%s"%(nextline, continuation, nextline.startswith(continuation))
221 if matchout or nextline.startswith('#'):
222 break
223 elif nextline.startswith(continuation):
224 nextline = nextline[Nc:]
225 if nextline and nextline[0] == ' ':
226 nextline = nextline[1:]
227
228 inputline += '\n' + nextline
229
230 else:
231 rest.append(nextline)
232 i+= 1
233
234 block.append((INPUT, (decorator, inputline, '\n'.join(rest))))
235 continue
236
237 # if it looks like an output line grab all the text to the end
238 # of the block
239 matchout = rgxout.match(line)
240 if matchout:
241 lineno, output = int(matchout.group(1)), matchout.group(2)
242 if i<N-1:
243 output = '\n'.join([output] + lines[i:])
244
245 block.append((OUTPUT, output))
246 break
247
248 return block
249
250
251 class DecodingStringIO(StringIO, object):
252 def __init__(self,buf='',encodings=('utf8',), *args, **kwds):
253 super(DecodingStringIO, self).__init__(buf, *args, **kwds)
254 self.set_encodings(encodings)
255
256 def set_encodings(self, encodings):
257 self.encodings = encodings
258
259 def write(self,data):
260 if isinstance(data, text_type):
261 return super(DecodingStringIO, self).write(data)
262 else:
263 for enc in self.encodings:
264 try:
265 data = data.decode(enc)
266 return super(DecodingStringIO, self).write(data)
267 except :
268 pass
 269             # default to brute utf8 if no encoding succeeded
270 return super(DecodingStringIO, self).write(data.decode('utf8', 'replace'))
271
272
273 class EmbeddedSphinxShell(object):
274 """An embedded IPython instance to run inside Sphinx"""
275
276 def __init__(self, exec_lines=None,state=None):
277
278 self.cout = DecodingStringIO(u'')
279
280 if exec_lines is None:
281 exec_lines = []
282
283 self.state = state
284
285 # Create config object for IPython
286 config = Config()
287 config.InteractiveShell.autocall = False
288 config.InteractiveShell.autoindent = False
289 config.InteractiveShell.colors = 'NoColor'
290
291 # create a profile so instance history isn't saved
292 tmp_profile_dir = tempfile.mkdtemp(prefix='profile_')
293 profname = 'auto_profile_sphinx_build'
294 pdir = os.path.join(tmp_profile_dir,profname)
295 profile = ProfileDir.create_profile_dir(pdir)
296
297 # Create and initialize global ipython, but don't start its mainloop.
 298         # This will persist across different EmbeddedSphinxShell instances.
299 IP = InteractiveShell.instance(config=config, profile_dir=profile)
300
301 # io.stdout redirect must be done after instantiating InteractiveShell
302 io.stdout = self.cout
303 io.stderr = self.cout
304
305 # For debugging, so we can see normal output, use this:
306 #from IPython.utils.io import Tee
307 #io.stdout = Tee(self.cout, channel='stdout') # dbg
308 #io.stderr = Tee(self.cout, channel='stderr') # dbg
309
310 # Store a few parts of IPython we'll need.
311 self.IP = IP
312 self.user_ns = self.IP.user_ns
313 self.user_global_ns = self.IP.user_global_ns
314
315 self.input = ''
316 self.output = ''
317
318 self.is_verbatim = False
319 self.is_doctest = False
320 self.is_suppress = False
321
322 # Optionally, provide more detailed information to shell.
323 self.directive = None
324
325 # on the first call to the savefig decorator, we'll import
326 # pyplot as plt so we can make a call to the plt.gcf().savefig
327 self._pyplot_imported = False
328
329 # Prepopulate the namespace.
330 for line in exec_lines:
331 self.process_input_line(line, store_history=False)
332
333 def clear_cout(self):
334 self.cout.seek(0)
335 self.cout.truncate(0)
336
337 def process_input_line(self, line, store_history=True):
338 """process the input, capturing stdout"""
339
340 stdout = sys.stdout
341 splitter = self.IP.input_splitter
342 try:
343 sys.stdout = self.cout
344 splitter.push(line)
345 more = splitter.push_accepts_more()
346 if not more:
347 try:
348 source_raw = splitter.source_raw_reset()[1]
349 except:
350 # recent ipython #4504
351 source_raw = splitter.raw_reset()
352 self.IP.run_cell(source_raw, store_history=store_history)
353 finally:
354 sys.stdout = stdout
355
356 def process_image(self, decorator):
357 """
358 # build out an image directive like
359 # .. image:: somefile.png
360 # :width 4in
361 #
362 # from an input like
363 # savefig somefile.png width=4in
364 """
365 savefig_dir = self.savefig_dir
366 source_dir = self.source_dir
367 saveargs = decorator.split(' ')
368 filename = saveargs[1]
369 # insert relative path to image file in source
370 outfile = os.path.relpath(os.path.join(savefig_dir,filename),
371 source_dir)
372
373 imagerows = ['.. image:: %s'%outfile]
374
375 for kwarg in saveargs[2:]:
376 arg, val = kwarg.split('=')
377 arg = arg.strip()
378 val = val.strip()
379 imagerows.append(' :%s: %s'%(arg, val))
380
381 image_file = os.path.basename(outfile) # only return file name
382 image_directive = '\n'.join(imagerows)
383 return image_file, image_directive
384
385 # Callbacks for each type of token
386 def process_input(self, data, input_prompt, lineno):
387 """
388 Process data block for INPUT token.
389
390 """
391 decorator, input, rest = data
392 image_file = None
393 image_directive = None
394
395 is_verbatim = decorator=='@verbatim' or self.is_verbatim
396 is_doctest = (decorator is not None and \
397 decorator.startswith('@doctest')) or self.is_doctest
398 is_suppress = decorator=='@suppress' or self.is_suppress
399 is_okexcept = decorator=='@okexcept' or self.is_okexcept
400 is_okwarning = decorator=='@okwarning' or self.is_okwarning
401 is_savefig = decorator is not None and \
402 decorator.startswith('@savefig')
403
404 # set the encodings to be used by DecodingStringIO
405 # to convert the execution output into unicode if
406 # needed. this attrib is set by IpythonDirective.run()
 407         # based on the specified block options, defaulting to ['utf8']
408 self.cout.set_encodings(self.output_encoding)
409
410 input_lines = input.split('\n')
411
412 if len(input_lines) > 1:
413 if input_lines[-1] != "":
414 input_lines.append('') # make sure there's a blank line
415 # so splitter buffer gets reset
416
417 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
418
419 if is_savefig:
420 image_file, image_directive = self.process_image(decorator)
421
422 ret = []
423 is_semicolon = False
424
425 # Hold the execution count, if requested to do so.
426 if is_suppress and self.hold_count:
427 store_history = False
428 else:
429 store_history = True
430
431 # Note: catch_warnings is not thread safe
432 with warnings.catch_warnings(record=True) as ws:
433 for i, line in enumerate(input_lines):
434 if line.endswith(';'):
435 is_semicolon = True
436
437 if i == 0:
438 # process the first input line
439 if is_verbatim:
440 self.process_input_line('')
441 self.IP.execution_count += 1 # increment it anyway
442 else:
443 # only submit the line in non-verbatim mode
444 self.process_input_line(line, store_history=store_history)
445 formatted_line = '%s %s'%(input_prompt, line)
446 else:
447 # process a continuation line
448 if not is_verbatim:
449 self.process_input_line(line, store_history=store_history)
450
451 formatted_line = '%s %s'%(continuation, line)
452
453 if not is_suppress:
454 ret.append(formatted_line)
455
456 if not is_suppress and len(rest.strip()) and is_verbatim:
457 # the "rest" is the standard output of the
458 # input, which needs to be added in
459 # verbatim mode
460 ret.append(rest)
461
462 self.cout.seek(0)
463 output = self.cout.read()
464 if not is_suppress and not is_semicolon:
465 ret.append(output)
466 elif is_semicolon: # get spacing right
467 ret.append('')
468
469 # context information
470 filename = self.state.document.current_source
471 lineno = self.state.document.current_line
472
473 # output any exceptions raised during execution to stdout
474 # unless :okexcept: has been specified.
475 if not is_okexcept and "Traceback" in output:
476 s = "\nException in %s at block ending on line %s\n" % (filename, lineno)
477 s += "Specify :okexcept: as an option in the ipython:: block to suppress this message\n"
478 sys.stdout.write('\n\n>>>' + ('-' * 73))
479 sys.stdout.write(s)
480 sys.stdout.write(output)
481 sys.stdout.write('<<<' + ('-' * 73) + '\n\n')
482
483 # output any warning raised during execution to stdout
484 # unless :okwarning: has been specified.
485 if not is_okwarning:
486 for w in ws:
487 s = "\nWarning in %s at block ending on line %s\n" % (filename, lineno)
488 s += "Specify :okwarning: as an option in the ipython:: block to suppress this message\n"
489 sys.stdout.write('\n\n>>>' + ('-' * 73))
490 sys.stdout.write(s)
491 sys.stdout.write('-' * 76 + '\n')
492 s=warnings.formatwarning(w.message, w.category,
493 w.filename, w.lineno, w.line)
494 sys.stdout.write(s)
495 sys.stdout.write('<<<' + ('-' * 73) + '\n')
496
497 self.cout.truncate(0)
498 return (ret, input_lines, output, is_doctest, decorator, image_file,
499 image_directive)
500
501
502 def process_output(self, data, output_prompt,
503 input_lines, output, is_doctest, decorator, image_file):
504 """
505 Process data block for OUTPUT token.
506
507 """
508 TAB = ' ' * 4
509
510 if is_doctest and output is not None:
511
512 found = output
513 found = found.strip()
514 submitted = data.strip()
515
516 if self.directive is None:
517 source = 'Unavailable'
518 content = 'Unavailable'
519 else:
520 source = self.directive.state.document.current_source
521 content = self.directive.content
522 # Add tabs and join into a single string.
523 content = '\n'.join([TAB + line for line in content])
524
525 # Make sure the output contains the output prompt.
526 ind = found.find(output_prompt)
527 if ind < 0:
528 e = ('output does not contain output prompt\n\n'
529 'Document source: {0}\n\n'
530 'Raw content: \n{1}\n\n'
531 'Input line(s):\n{TAB}{2}\n\n'
532 'Output line(s):\n{TAB}{3}\n\n')
533 e = e.format(source, content, '\n'.join(input_lines),
534 repr(found), TAB=TAB)
535 raise RuntimeError(e)
536 found = found[len(output_prompt):].strip()
537
538 # Handle the actual doctest comparison.
539 if decorator.strip() == '@doctest':
540 # Standard doctest
541 if found != submitted:
542 e = ('doctest failure\n\n'
543 'Document source: {0}\n\n'
544 'Raw content: \n{1}\n\n'
545 'On input line(s):\n{TAB}{2}\n\n'
546 'we found output:\n{TAB}{3}\n\n'
547 'instead of the expected:\n{TAB}{4}\n\n')
548 e = e.format(source, content, '\n'.join(input_lines),
549 repr(found), repr(submitted), TAB=TAB)
550 raise RuntimeError(e)
551 else:
552 self.custom_doctest(decorator, input_lines, found, submitted)
553
554 def process_comment(self, data):
 555         """Process data block for COMMENT token."""
556 if not self.is_suppress:
557 return [data]
558
559 def save_image(self, image_file):
560 """
561 Saves the image file to disk.
562 """
563 self.ensure_pyplot()
564 command = ('plt.gcf().savefig("%s", bbox_inches="tight", '
565 'dpi=100)' % image_file)
566
567 #print 'SAVEFIG', command # dbg
568 self.process_input_line('bookmark ipy_thisdir', store_history=False)
569 self.process_input_line('cd -b ipy_savedir', store_history=False)
570 self.process_input_line(command, store_history=False)
571 self.process_input_line('cd -b ipy_thisdir', store_history=False)
572 self.process_input_line('bookmark -d ipy_thisdir', store_history=False)
573 self.clear_cout()
574
575 def process_block(self, block):
576 """
577 process block from the block_parser and return a list of processed lines
578 """
579 ret = []
580 output = None
581 input_lines = None
582 lineno = self.IP.execution_count
583
584 input_prompt = self.promptin % lineno
585 output_prompt = self.promptout % lineno
586 image_file = None
587 image_directive = None
588
589 for token, data in block:
590 if token == COMMENT:
591 out_data = self.process_comment(data)
592 elif token == INPUT:
593 (out_data, input_lines, output, is_doctest, decorator,
594 image_file, image_directive) = \
595 self.process_input(data, input_prompt, lineno)
596 elif token == OUTPUT:
597 out_data = \
598 self.process_output(data, output_prompt,
599 input_lines, output, is_doctest,
600 decorator, image_file)
601 if out_data:
602 ret.extend(out_data)
603
604 # save the image files
605 if image_file is not None:
606 self.save_image(image_file)
607
608 return ret, image_directive
609
610 def ensure_pyplot(self):
611 """
612 Ensures that pyplot has been imported into the embedded IPython shell.
613
614 Also, makes sure to set the backend appropriately if not set already.
615
616 """
617 # We are here if the @figure pseudo decorator was used. Thus, it's
 618         # possible that we could be here even if ipython_mplbackend were set to
619 # `None`. That's also strange and perhaps worthy of raising an
620 # exception, but for now, we just set the backend to 'agg'.
621
622 if not self._pyplot_imported:
623 if 'matplotlib.backends' not in sys.modules:
624 # Then ipython_matplotlib was set to None but there was a
625 # call to the @figure decorator (and ipython_execlines did
626 # not set a backend).
627 #raise Exception("No backend was set, but @figure was used!")
628 import matplotlib
629 matplotlib.use('agg')
630
631 # Always import pyplot into embedded shell.
632 self.process_input_line('import matplotlib.pyplot as plt',
633 store_history=False)
634 self._pyplot_imported = True
635
636 def process_pure_python(self, content):
637 """
638 content is a list of strings. it is unedited directive content
639
640 This runs it line by line in the InteractiveShell, prepends
641 prompts as needed capturing stderr and stdout, then returns
642 the content as a list as if it were ipython code
643 """
644 output = []
645 savefig = False # keep up with this to clear figure
646 multiline = False # to handle line continuation
647 multiline_start = None
648 fmtin = self.promptin
649
650 ct = 0
651
652 for lineno, line in enumerate(content):
653
654 line_stripped = line.strip()
655 if not len(line):
656 output.append(line)
657 continue
658
659 # handle decorators
660 if line_stripped.startswith('@'):
661 output.extend([line])
662 if 'savefig' in line:
663 savefig = True # and need to clear figure
664 continue
665
666 # handle comments
667 if line_stripped.startswith('#'):
668 output.extend([line])
669 continue
670
671 # deal with lines checking for multiline
672 continuation = u' %s:'% ''.join(['.']*(len(str(ct))+2))
673 if not multiline:
674 modified = u"%s %s" % (fmtin % ct, line_stripped)
675 output.append(modified)
676 ct += 1
677 try:
678 ast.parse(line_stripped)
679 output.append(u'')
680 except Exception: # on a multiline
681 multiline = True
682 multiline_start = lineno
683 else: # still on a multiline
684 modified = u'%s %s' % (continuation, line)
685 output.append(modified)
686
687 # if the next line is indented, it should be part of multiline
688 if len(content) > lineno + 1:
689 nextline = content[lineno + 1]
690 if len(nextline) - len(nextline.lstrip()) > 3:
691 continue
692 try:
693 mod = ast.parse(
694 '\n'.join(content[multiline_start:lineno+1]))
695 if isinstance(mod.body[0], ast.FunctionDef):
696 # check to see if we have the whole function
697 for element in mod.body[0].body:
698 if isinstance(element, ast.Return):
699 multiline = False
700 else:
701 output.append(u'')
702 multiline = False
703 except Exception:
704 pass
705
706 if savefig: # clear figure if plotted
707 self.ensure_pyplot()
708 self.process_input_line('plt.clf()', store_history=False)
709 self.clear_cout()
710 savefig = False
711
712 return output
713
714 def custom_doctest(self, decorator, input_lines, found, submitted):
715 """
716 Perform a specialized doctest.
717
718 """
719 from .custom_doctests import doctests
720
721 args = decorator.split()
722 doctest_type = args[1]
723 if doctest_type in doctests:
724 doctests[doctest_type](self, args, input_lines, found, submitted)
725 else:
726 e = "Invalid option to @doctest: {0}".format(doctest_type)
727 raise Exception(e)
728
729
730 class IPythonDirective(Directive):
731
732 has_content = True
733 required_arguments = 0
734 optional_arguments = 4 # python, suppress, verbatim, doctest
 735     final_argument_whitespace = True
736 option_spec = { 'python': directives.unchanged,
737 'suppress' : directives.flag,
738 'verbatim' : directives.flag,
739 'doctest' : directives.flag,
740 'okexcept': directives.flag,
741 'okwarning': directives.flag,
742 'output_encoding': directives.unchanged_required
743 }
744
745 shell = None
746
747 seen_docs = set()
748
749 def get_config_options(self):
750 # contains sphinx configuration variables
751 config = self.state.document.settings.env.config
752
753 # get config variables to set figure output directory
754 confdir = self.state.document.settings.env.app.confdir
755 savefig_dir = config.ipython_savefig_dir
756 source_dir = os.path.dirname(self.state.document.current_source)
757 if savefig_dir is None:
758 savefig_dir = config.html_static_path
759 if isinstance(savefig_dir, list):
760 savefig_dir = savefig_dir[0] # safe to assume only one path?
761 savefig_dir = os.path.join(confdir, savefig_dir)
762
763 # get regex and prompt stuff
764 rgxin = config.ipython_rgxin
765 rgxout = config.ipython_rgxout
766 promptin = config.ipython_promptin
767 promptout = config.ipython_promptout
768 mplbackend = config.ipython_mplbackend
769 exec_lines = config.ipython_execlines
770 hold_count = config.ipython_holdcount
771
772 return (savefig_dir, source_dir, rgxin, rgxout,
773 promptin, promptout, mplbackend, exec_lines, hold_count)
774
775 def setup(self):
776 # Get configuration values.
777 (savefig_dir, source_dir, rgxin, rgxout, promptin, promptout,
778 mplbackend, exec_lines, hold_count) = self.get_config_options()
779
780 if self.shell is None:
781 # We will be here many times. However, when the
782 # EmbeddedSphinxShell is created, its interactive shell member
783 # is the same for each instance.
784
785 if mplbackend:
786 import matplotlib
787 # Repeated calls to use() will not hurt us since `mplbackend`
788 # is the same each time.
789 matplotlib.use(mplbackend)
790
791 # Must be called after (potentially) importing matplotlib and
792 # setting its backend since exec_lines might import pylab.
793 self.shell = EmbeddedSphinxShell(exec_lines, self.state)
794
795 # Store IPython directive to enable better error messages
796 self.shell.directive = self
797
798 # reset the execution count if we haven't processed this doc
799 #NOTE: this may be borked if there are multiple seen_doc tmp files
800 #check time stamp?
801 if not self.state.document.current_source in self.seen_docs:
802 self.shell.IP.history_manager.reset()
803 self.shell.IP.execution_count = 1
804 self.shell.IP.prompt_manager.width = 0
805 self.seen_docs.add(self.state.document.current_source)
806
807 # and attach to shell so we don't have to pass them around
808 self.shell.rgxin = rgxin
809 self.shell.rgxout = rgxout
810 self.shell.promptin = promptin
811 self.shell.promptout = promptout
812 self.shell.savefig_dir = savefig_dir
813 self.shell.source_dir = source_dir
814 self.shell.hold_count = hold_count
815
816 # setup bookmark for saving figures directory
817 self.shell.process_input_line('bookmark ipy_savedir %s'%savefig_dir,
818 store_history=False)
819 self.shell.clear_cout()
820
821 return rgxin, rgxout, promptin, promptout
822
823 def teardown(self):
824 # delete last bookmark
825 self.shell.process_input_line('bookmark -d ipy_savedir',
826 store_history=False)
827 self.shell.clear_cout()
828
829 def run(self):
830 debug = False
831
832 #TODO, any reason block_parser can't be a method of embeddable shell
833 # then we wouldn't have to carry these around
834 rgxin, rgxout, promptin, promptout = self.setup()
835
836 options = self.options
837 self.shell.is_suppress = 'suppress' in options
838 self.shell.is_doctest = 'doctest' in options
839 self.shell.is_verbatim = 'verbatim' in options
840 self.shell.is_okexcept = 'okexcept' in options
841 self.shell.is_okwarning = 'okwarning' in options
842
843 self.shell.output_encoding = [options.get('output_encoding', 'utf8')]
844
845 # handle pure python code
846 if 'python' in self.arguments:
847 content = self.content
848 self.content = self.shell.process_pure_python(content)
849
850 parts = '\n'.join(self.content).split('\n\n')
851
852 lines = ['.. code-block:: ipython', '']
853 figures = []
854
855 for part in parts:
856 block = block_parser(part, rgxin, rgxout, promptin, promptout)
857 if len(block):
858 rows, figure = self.shell.process_block(block)
859 for row in rows:
860 lines.extend([' %s'%line for line in row.split('\n')])
861
862 if figure is not None:
863 figures.append(figure)
864
865 for figure in figures:
866 lines.append('')
867 lines.extend(figure.split('\n'))
868 lines.append('')
869
870 if len(lines)>2:
871 if debug:
872 print('\n'.join(lines))
873 else:
874 # This has to do with input, not output. But if we comment
875 # these lines out, then no IPython code will appear in the
876 # final output.
877 self.state_machine.insert_input(
878 lines, self.state_machine.input_lines.source(0))
879
880 # cleanup
881 self.teardown()
882
883 return []
884
885 # Enable as a proper Sphinx directive
886 def setup(app):
887 setup.app = app
888
889 app.add_directive('ipython', IPythonDirective)
890 app.add_config_value('ipython_savefig_dir', None, 'env')
891 app.add_config_value('ipython_rgxin',
892 re.compile('In \[(\d+)\]:\s?(.*)\s*'), 'env')
893 app.add_config_value('ipython_rgxout',
894 re.compile('Out\[(\d+)\]:\s?(.*)\s*'), 'env')
895 app.add_config_value('ipython_promptin', 'In [%d]:', 'env')
896 app.add_config_value('ipython_promptout', 'Out[%d]:', 'env')
897
898 # We could just let matplotlib pick whatever is specified as the default
899 # backend in the matplotlibrc file, but this would cause issues if the
900 # backend didn't work in headless environments. For this reason, 'agg'
901 # is a good default backend choice.
902 app.add_config_value('ipython_mplbackend', 'agg', 'env')
903
904 # If the user sets this config value to `None`, then EmbeddedSphinxShell's
905 # __init__ method will treat it as [].
906 execlines = ['import numpy as np', 'import matplotlib.pyplot as plt']
907 app.add_config_value('ipython_execlines', execlines, 'env')
908
909 app.add_config_value('ipython_holdcount', True, 'env')
910
911 # Simple smoke test, needs to be converted to a proper automatic test.
912 def test():
913
914 examples = [
915 r"""
916 In [9]: pwd
917 Out[9]: '/home/jdhunter/py4science/book'
918
919 In [10]: cd bookdata/
920 /home/jdhunter/py4science/book/bookdata
921
922 In [2]: from pylab import *
923
924 In [2]: ion()
925
926 In [3]: im = imread('stinkbug.png')
927
928 @savefig mystinkbug.png width=4in
929 In [4]: imshow(im)
930 Out[4]: <matplotlib.image.AxesImage object at 0x39ea850>
931
932 """,
933 r"""
934
935 In [1]: x = 'hello world'
936
937 # string methods can be
938 # used to alter the string
939 @doctest
940 In [2]: x.upper()
941 Out[2]: 'HELLO WORLD'
942
943 @verbatim
944 In [3]: x.st<TAB>
945 x.startswith x.strip
946 """,
947 r"""
948
949 In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
950 .....: &d=9&e=22&f=2009&g=d&a=1&br=8&c=2006&ignore=.csv'
951
952 In [131]: print url.split('&')
953 ['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22', 'f=2009', 'g=d', 'a=1', 'b=8', 'c=2006', 'ignore=.csv']
954
955 In [60]: import urllib
956
957 """,
958 r"""\
959
960 In [133]: import numpy.random
961
962 @suppress
963 In [134]: numpy.random.seed(2358)
964
965 @doctest
966 In [135]: numpy.random.rand(10,2)
967 Out[135]:
968 array([[ 0.64524308, 0.59943846],
969 [ 0.47102322, 0.8715456 ],
970 [ 0.29370834, 0.74776844],
971 [ 0.99539577, 0.1313423 ],
972 [ 0.16250302, 0.21103583],
973 [ 0.81626524, 0.1312433 ],
974 [ 0.67338089, 0.72302393],
975 [ 0.7566368 , 0.07033696],
976 [ 0.22591016, 0.77731835],
977 [ 0.0072729 , 0.34273127]])
978
979 """,
980
981 r"""
982 In [106]: print x
983 jdh
984
985 In [109]: for i in range(10):
986 .....: print i
987 .....:
988 .....:
989 0
990 1
991 2
992 3
993 4
994 5
995 6
996 7
997 8
998 9
999 """,
1000
1001 r"""
1002
1003 In [144]: from pylab import *
1004
1005 In [145]: ion()
1006
1007 # use a semicolon to suppress the output
1008 @savefig test_hist.png width=4in
1009 In [151]: hist(np.random.randn(10000), 100);
1010
1011
1012 @savefig test_plot.png width=4in
1013 In [151]: plot(np.random.randn(10000), 'o');
1014 """,
1015
1016 r"""
1017 # use a semicolon to suppress the output
1018 In [151]: plt.clf()
1019
1020 @savefig plot_simple.png width=4in
1021 In [151]: plot([1,2,3])
1022
1023 @savefig hist_simple.png width=4in
1024 In [151]: hist(np.random.randn(10000), 100);
1025
1026 """,
1027 r"""
1028 # update the current fig
1029 In [151]: ylabel('number')
1030
1031 In [152]: title('normal distribution')
1032
1033
1034 @savefig hist_with_text.png
1035 In [153]: grid(True)
1036
1037 @doctest float
1038 In [154]: 0.1 + 0.2
1039 Out[154]: 0.3
1040
1041 @doctest float
1042 In [155]: np.arange(16).reshape(4,4)
1043 Out[155]:
1044 array([[ 0, 1, 2, 3],
1045 [ 4, 5, 6, 7],
1046 [ 8, 9, 10, 11],
1047 [12, 13, 14, 15]])
1048
1049 In [1]: x = np.arange(16, dtype=float).reshape(4,4)
1050
1051 In [2]: x[0,0] = np.inf
1052
1053 In [3]: x[0,1] = np.nan
1054
1055 @doctest float
1056 In [4]: x
1057 Out[4]:
1058 array([[ inf, nan, 2., 3.],
1059 [ 4., 5., 6., 7.],
1060 [ 8., 9., 10., 11.],
1061 [ 12., 13., 14., 15.]])
1062
1063
1064 """,
1065 ]
1066 # skip local-file depending first example:
1067 examples = examples[1:]
1068
1069 #ipython_directive.DEBUG = True # dbg
1070 #options = dict(suppress=True) # dbg
1071 options = dict()
1072 for example in examples:
1073 content = example.split('\n')
1074 IPythonDirective('debug', arguments=None, options=options,
1075 content=content, lineno=0,
1076 content_offset=None, block_text=None,
1077 state=None, state_machine=None,
1078 )
1079
1080 # Run test suite as a script
1081 if __name__=='__main__':
1082 if not os.path.isdir('_static'):
1083 os.mkdir('_static')
1084 test()
1085 print('All OK? Check figures in _static/')
1086
[end of doc/sphinxext/ipython_sphinxext/ipython_directive.py]
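
As a usage note for the directive above: the configuration names registered in its `setup(app)` function are ordinary Sphinx settings, so a docs project's `conf.py` might set them roughly as sketched below (the specific values are illustrative, not defaults required by pandas):

```python
# conf.py (illustrative sketch)
extensions = [
    'IPython.sphinxext.ipython_console_highlighting',
    'IPython.sphinxext.ipython_directive',
]

ipython_savefig_dir = '_static'      # where @savefig images are written
ipython_mplbackend = 'agg'           # headless-friendly matplotlib backend
ipython_execlines = [                # run before every document's blocks
    'import numpy as np',
    'import matplotlib.pyplot as plt',
]
ipython_holdcount = True             # @suppress does not bump In[] counts
```
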
[start of pandas/tseries/tools.py]
1 from datetime import datetime, timedelta
2 import re
3 import sys
4
5 import numpy as np
6
7 import pandas.lib as lib
8 import pandas.tslib as tslib
9 import pandas.core.common as com
10 from pandas.compat import StringIO, callable
11 import pandas.compat as compat
12
13 try:
14 import dateutil
15 from dateutil.parser import parse, DEFAULTPARSER
16 from dateutil.relativedelta import relativedelta
17
18 # raise exception if dateutil 2.0 install on 2.x platform
19 if (sys.version_info[0] == 2 and
20 dateutil.__version__ == '2.0'): # pragma: no cover
21 raise Exception('dateutil 2.0 incompatible with Python 2.x, you must '
22 'install version 1.5 or 2.1+!')
23 except ImportError: # pragma: no cover
24 print('Please install python-dateutil via easy_install or some method!')
25 raise # otherwise a 2nd import won't show the message
26
27 _DATEUTIL_LEXER_SPLIT = None
28 try:
 29     # Since these are private methods from dateutil, they are imported
 30     # defensively here so that if this interface changes, pandas will just
 31     # fall back to not using the functionality
32 from dateutil.parser import _timelex
33
34 if hasattr(_timelex, 'split'):
35 def _lexer_split_from_str(dt_str):
36 # The StringIO(str(_)) is for dateutil 2.2 compatibility
37 return _timelex.split(StringIO(str(dt_str)))
38
39 _DATEUTIL_LEXER_SPLIT = _lexer_split_from_str
40 except (ImportError, AttributeError):
41 pass
42
43 def _infer_tzinfo(start, end):
44 def _infer(a, b):
45 tz = a.tzinfo
46 if b and b.tzinfo:
47 if not (tslib.get_timezone(tz) == tslib.get_timezone(b.tzinfo)):
48 raise AssertionError('Inputs must both have the same timezone,'
49 ' {0} != {1}'.format(tz, b.tzinfo))
50 return tz
51 tz = None
52 if start is not None:
53 tz = _infer(start, end)
54 elif end is not None:
55 tz = _infer(end, start)
56 return tz
57
58
59 def _guess_datetime_format(dt_str, dayfirst=False,
60 dt_str_parse=compat.parse_date,
61 dt_str_split=_DATEUTIL_LEXER_SPLIT):
62 """
63 Guess the datetime format of a given datetime string.
64
65 Parameters
66 ----------
67 dt_str : string, datetime string to guess the format of
68 dayfirst : boolean, default False
69 If True parses dates with the day first, eg 20/01/2005
70 Warning: dayfirst=True is not strict, but will prefer to parse
71 with day first (this is a known bug).
 72     dt_str_parse : function, defaults to `compat.parse_date` (dateutil)
73 This function should take in a datetime string and return
74 a `datetime.datetime` guess that the datetime string represents
75 dt_str_split : function, defaults to `_DATEUTIL_LEXER_SPLIT` (dateutil)
76 This function should take in a datetime string and return
77 a list of strings, the guess of the various specific parts
78 e.g. '2011/12/30' -> ['2011', '/', '12', '/', '30']
79
80 Returns
81 -------
 82     ret : datetime format string (for `strftime` or `strptime`)
83 """
84 if dt_str_parse is None or dt_str_split is None:
85 return None
86
87 if not isinstance(dt_str, compat.string_types):
88 return None
89
90 day_attribute_and_format = (('day',), '%d')
91
92 datetime_attrs_to_format = [
93 (('year', 'month', 'day'), '%Y%m%d'),
94 (('year',), '%Y'),
95 (('month',), '%B'),
96 (('month',), '%b'),
97 (('month',), '%m'),
98 day_attribute_and_format,
99 (('hour',), '%H'),
100 (('minute',), '%M'),
101 (('second',), '%S'),
102 (('microsecond',), '%f'),
103 (('second', 'microsecond'), '%S.%f'),
104 ]
105
106 if dayfirst:
107 datetime_attrs_to_format.remove(day_attribute_and_format)
108 datetime_attrs_to_format.insert(0, day_attribute_and_format)
109
110 try:
111 parsed_datetime = dt_str_parse(dt_str, dayfirst=dayfirst)
112 except:
113 # In case the datetime can't be parsed, its format cannot be guessed
114 return None
115
116 if parsed_datetime is None:
117 return None
118
119 try:
120 tokens = dt_str_split(dt_str)
121 except:
122 # In case the datetime string can't be split, its format cannot
123 # be guessed
124 return None
125
126 format_guess = [None] * len(tokens)
127 found_attrs = set()
128
129 for attrs, attr_format in datetime_attrs_to_format:
130 # If a given attribute has been placed in the format string, skip
131 # over other formats for that same underlying attribute (IE, month
132 # can be represented in multiple different ways)
133 if set(attrs) & found_attrs:
134 continue
135
136 if all(getattr(parsed_datetime, attr) is not None for attr in attrs):
137 for i, token_format in enumerate(format_guess):
138 if (token_format is None and
139 tokens[i] == parsed_datetime.strftime(attr_format)):
140 format_guess[i] = attr_format
141 found_attrs.update(attrs)
142 break
143
144 # Only consider it a valid guess if we have a year, month and day
145 if len(set(['year', 'month', 'day']) & found_attrs) != 3:
146 return None
147
148 output_format = []
149 for i, guess in enumerate(format_guess):
150 if guess is not None:
151 # Either fill in the format placeholder (like %Y)
152 output_format.append(guess)
153 else:
 154             # Or just the token separator (IE, the dashes in "01-01-2013")
155 try:
156 # If the token is numeric, then we likely didn't parse it
157 # properly, so our guess is wrong
158 float(tokens[i])
159 return None
160 except ValueError:
161 pass
162
163 output_format.append(tokens[i])
164
165 guessed_format = ''.join(output_format)
166
167 if parsed_datetime.strftime(guessed_format) == dt_str:
168 return guessed_format
169
170 def _guess_datetime_format_for_array(arr, **kwargs):
171 # Try to guess the format based on the first non-NaN element
172 non_nan_elements = com.notnull(arr).nonzero()[0]
173 if len(non_nan_elements):
174 return _guess_datetime_format(arr[non_nan_elements[0]], **kwargs)
175
176 def to_datetime(arg, errors='ignore', dayfirst=False, utc=None, box=True,
177 format=None, exact=True, coerce=False, unit='ns',
178 infer_datetime_format=False):
179 """
180 Convert argument to datetime.
181
182 Parameters
183 ----------
184 arg : string, datetime, array of strings (with possible NAs)
185 errors : {'ignore', 'raise'}, default 'ignore'
186 Errors are ignored by default (values left untouched)
187 dayfirst : boolean, default False
188 If True parses dates with the day first, eg 20/01/2005
189 Warning: dayfirst=True is not strict, but will prefer to parse
190 with day first (this is a known bug).
191 utc : boolean, default None
192 Return UTC DatetimeIndex if True (converting any tz-aware
193 datetime.datetime objects as well)
194 box : boolean, default True
195 If True returns a DatetimeIndex, if False returns ndarray of values
196 format : string, default None
197 strftime to parse time, eg "%d/%m/%Y", note that "%f" will parse
198 all the way up to nanoseconds
199 exact : boolean, True by default
200 If True, require an exact format match.
201 If False, allow the format to match anywhere in the target string.
202 coerce : force errors to NaT (False by default)
203 Timestamps outside the interval between Timestamp.min and Timestamp.max
204 (approximately 1677-09-22 to 2262-04-11) will be also forced to NaT.
205 unit : unit of the arg (D,s,ms,us,ns) denote the unit in epoch
206 (e.g. a unix timestamp), which is an integer/float number
207 infer_datetime_format : boolean, default False
208 If no `format` is given, try to infer the format based on the first
209 datetime string. Provides a large speed-up in many cases.
210
211 Returns
212 -------
213 ret : datetime if parsing succeeded. Return type depends on input:
214 - list-like: DatetimeIndex
215 - Series: Series of datetime64 dtype
216 - scalar: Timestamp
217 In case when it is not possible to return designated types (e.g. when
218 any element of input is before Timestamp.min or after Timestamp.max)
 219     return will have datetime.datetime type (or corresponding array/Series).
220
221 Examples
222 --------
223 Take separate series and convert to datetime
224
225 >>> import pandas as pd
226 >>> i = pd.date_range('20000101',periods=100)
227 >>> df = pd.DataFrame(dict(year = i.year, month = i.month, day = i.day))
228 >>> pd.to_datetime(df.year*10000 + df.month*100 + df.day, format='%Y%m%d')
229 0 2000-01-01
230 1 2000-01-02
231 ...
232 98 2000-04-08
233 99 2000-04-09
234 Length: 100, dtype: datetime64[ns]
235
236 Or from strings
237
238 >>> df = df.astype(str)
239 >>> pd.to_datetime(df.day + df.month + df.year, format="%d%m%Y")
240 0 2000-01-01
241 1 2000-01-02
242 ...
243 98 2000-04-08
244 99 2000-04-09
245 Length: 100, dtype: datetime64[ns]
246
247 Date that does not meet timestamp limitations:
248
249 >>> pd.to_datetime('13000101', format='%Y%m%d')
250 datetime.datetime(1300, 1, 1, 0, 0)
251 >>> pd.to_datetime('13000101', format='%Y%m%d', coerce=True)
252 NaT
253 """
254 from pandas import Timestamp
255 from pandas.core.series import Series
256 from pandas.tseries.index import DatetimeIndex
257
258 def _convert_listlike(arg, box, format):
259
260 if isinstance(arg, (list,tuple)):
261 arg = np.array(arg, dtype='O')
262
263 if com.is_datetime64_ns_dtype(arg):
264 if box and not isinstance(arg, DatetimeIndex):
265 try:
266 return DatetimeIndex(arg, tz='utc' if utc else None)
267 except ValueError:
268 pass
269
270 return arg
271
272 arg = com._ensure_object(arg)
273
274 if infer_datetime_format and format is None:
275 format = _guess_datetime_format_for_array(arg, dayfirst=dayfirst)
276
277 if format is not None:
278 # There is a special fast-path for iso8601 formatted
279 # datetime strings, so in those cases don't use the inferred
280 # format because this path makes process slower in this
281 # special case
282 format_is_iso8601 = (
283 '%Y-%m-%dT%H:%M:%S.%f'.startswith(format) or
284 '%Y-%m-%d %H:%M:%S.%f'.startswith(format)
285 )
286 if format_is_iso8601:
287 format = None
288
289 try:
290 result = None
291
292 if format is not None:
293 # shortcut formatting here
294 if format == '%Y%m%d':
295 try:
296 result = _attempt_YYYYMMDD(arg, coerce=coerce)
297 except:
298 raise ValueError("cannot convert the input to '%Y%m%d' date format")
299
300 # fallback
301 if result is None:
302 try:
303 result = tslib.array_strptime(
304 arg, format, exact=exact, coerce=coerce
305 )
306 except (tslib.OutOfBoundsDatetime):
307 if errors == 'raise':
308 raise
309 result = arg
310 except ValueError:
311 # Only raise this error if the user provided the
312 # datetime format, and not when it was inferred
313 if not infer_datetime_format:
314 raise
315
316 if result is None and (format is None or infer_datetime_format):
317 result = tslib.array_to_datetime(arg, raise_=errors == 'raise',
318 utc=utc, dayfirst=dayfirst,
319 coerce=coerce, unit=unit)
320
321 if com.is_datetime64_dtype(result) and box:
322 result = DatetimeIndex(result, tz='utc' if utc else None)
323 return result
324
325 except ValueError as e:
326 try:
327 values, tz = tslib.datetime_to_datetime64(arg)
328 return DatetimeIndex._simple_new(values, None, tz=tz)
329 except (ValueError, TypeError):
330 raise e
331
332 if arg is None:
333 return arg
334 elif isinstance(arg, Timestamp):
335 return arg
336 elif isinstance(arg, Series):
337 values = _convert_listlike(arg.values, False, format)
338 return Series(values, index=arg.index, name=arg.name)
339 elif com.is_list_like(arg):
340 return _convert_listlike(arg, box, format)
341
342 return _convert_listlike(np.array([ arg ]), box, format)[0]
343
344 class DateParseError(ValueError):
345 pass
346
347 def _attempt_YYYYMMDD(arg, coerce):
348 """ try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like,
 349         arg is passed in as an object dtype, but could really be ints/strings with nan-like values or floats (e.g. with nan) """
350
351 def calc(carg):
352 # calculate the actual result
353 carg = carg.astype(object)
354 return tslib.array_to_datetime(lib.try_parse_year_month_day(carg/10000,carg/100 % 100, carg % 100), coerce=coerce)
355
356 def calc_with_mask(carg,mask):
357 result = np.empty(carg.shape, dtype='M8[ns]')
358 iresult = result.view('i8')
359 iresult[~mask] = tslib.iNaT
360 result[mask] = calc(carg[mask].astype(np.float64).astype(np.int64)).astype('M8[ns]')
361 return result
362
363 # try intlike / strings that are ints
364 try:
365 return calc(arg.astype(np.int64))
366 except:
367 pass
368
369 # a float with actual np.nan
370 try:
371 carg = arg.astype(np.float64)
372 return calc_with_mask(carg,com.notnull(carg))
373 except:
374 pass
375
376 # string with NaN-like
377 try:
378 mask = ~lib.ismember(arg, tslib._nat_strings)
379 return calc_with_mask(arg,mask)
380 except:
381 pass
382
383 return None
384
385 # patterns for quarters like '4Q2005', '05Q1'
386 qpat1full = re.compile(r'(\d)Q-?(\d\d\d\d)')
387 qpat2full = re.compile(r'(\d\d\d\d)-?Q(\d)')
388 qpat1 = re.compile(r'(\d)Q-?(\d\d)')
389 qpat2 = re.compile(r'(\d\d)-?Q(\d)')
390 ypat = re.compile(r'(\d\d\d\d)$')
391 has_time = re.compile('(.+)([\s]|T)+(.+)')
392
393
394 def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
395 """
396 Try hard to parse datetime string, leveraging dateutil plus some extra
397 goodies like quarter recognition.
398
399 Parameters
400 ----------
401 arg : compat.string_types
402 freq : str or DateOffset, default None
403 Helps with interpreting time string if supplied
404 dayfirst : bool, default None
405 If None uses default from print_config
406 yearfirst : bool, default None
407 If None uses default from print_config
408
409 Returns
410 -------
411 datetime, datetime/dateutil.parser._result, str
412 """
413 from pandas.core.config import get_option
414 from pandas.tseries.offsets import DateOffset
415 from pandas.tseries.frequencies import (_get_rule_month, _month_numbers,
416 _get_freq_str)
417
418 if not isinstance(arg, compat.string_types):
419 return arg
420
421 arg = arg.upper()
422
423 default = datetime(1, 1, 1).replace(hour=0, minute=0,
424 second=0, microsecond=0)
425
426 # special handling for possibilities eg, 2Q2005, 2Q05, 2005Q1, 05Q1
427 if len(arg) in [4, 5, 6, 7]:
428 m = ypat.match(arg)
429 if m:
430 ret = default.replace(year=int(m.group(1)))
431 return ret, ret, 'year'
432
433 add_century = False
434 if len(arg) > 5:
435 qpats = [(qpat1full, 1), (qpat2full, 0)]
436 else:
437 add_century = True
438 qpats = [(qpat1, 1), (qpat2, 0)]
439
440 for pat, yfirst in qpats:
441 qparse = pat.match(arg)
442 if qparse is not None:
443 if yfirst:
444 yi, qi = 1, 2
445 else:
446 yi, qi = 2, 1
447 q = int(qparse.group(yi))
448 y_str = qparse.group(qi)
449 y = int(y_str)
450 if add_century:
451 y += 2000
452
453 if freq is not None:
454 # hack attack, #1228
455 mnum = _month_numbers[_get_rule_month(freq)] + 1
456 month = (mnum + (q - 1) * 3) % 12 + 1
457 if month > mnum:
458 y -= 1
459 else:
460 month = (q - 1) * 3 + 1
461
462 ret = default.replace(year=y, month=month)
463 return ret, ret, 'quarter'
464
465 is_mo_str = freq is not None and freq == 'M'
466 is_mo_off = getattr(freq, 'rule_code', None) == 'M'
467 is_monthly = is_mo_str or is_mo_off
468 if len(arg) == 6 and is_monthly:
469 try:
470 ret = _try_parse_monthly(arg)
471 if ret is not None:
472 return ret, ret, 'month'
473 except Exception:
474 pass
475
476 # montly f7u12
477 mresult = _attempt_monthly(arg)
478 if mresult:
479 return mresult
480
481 if dayfirst is None:
482 dayfirst = get_option("display.date_dayfirst")
483 if yearfirst is None:
484 yearfirst = get_option("display.date_yearfirst")
485
486 try:
487 parsed, reso = dateutil_parse(arg, default, dayfirst=dayfirst,
488 yearfirst=yearfirst)
489 except Exception as e:
490 # TODO: allow raise of errors within instead
491 raise DateParseError(e)
492
493 if parsed is None:
494 raise DateParseError("Could not parse %s" % arg)
495
496 return parsed, parsed, reso # datetime, resolution
497
498
499 def dateutil_parse(timestr, default,
500 ignoretz=False, tzinfos=None,
501 **kwargs):
502 """ lifted from dateutil to get resolution"""
503 from dateutil import tz
504 import time
505 fobj = StringIO(str(timestr))
506
507 res = DEFAULTPARSER._parse(fobj, **kwargs)
508
509 # dateutil 2.2 compat
510 if isinstance(res, tuple):
511 res, _ = res
512
513 if res is None:
514 raise ValueError("unknown string format")
515
516 repl = {}
517 reso = None
518 for attr in ["year", "month", "day", "hour",
519 "minute", "second", "microsecond"]:
520 value = getattr(res, attr)
521 if value is not None:
522 repl[attr] = value
523 reso = attr
524
525 if reso is None:
526 raise ValueError("Cannot parse date.")
527
528 if reso == 'microsecond':
529 if repl['microsecond'] == 0:
530 reso = 'second'
531 elif repl['microsecond'] % 1000 == 0:
532 reso = 'millisecond'
533
534 ret = default.replace(**repl)
535 if res.weekday is not None and not res.day:
536 ret = ret + relativedelta.relativedelta(weekday=res.weekday)
537 if not ignoretz:
538 if callable(tzinfos) or tzinfos and res.tzname in tzinfos:
539 if callable(tzinfos):
540 tzdata = tzinfos(res.tzname, res.tzoffset)
541 else:
542 tzdata = tzinfos.get(res.tzname)
543 if isinstance(tzdata, datetime.tzinfo):
544 tzinfo = tzdata
545 elif isinstance(tzdata, compat.string_types):
546 tzinfo = tz.tzstr(tzdata)
547 elif isinstance(tzdata, int):
548 tzinfo = tz.tzoffset(res.tzname, tzdata)
549 else:
550 raise ValueError("offset must be tzinfo subclass, "
551 "tz string, or int offset")
552 ret = ret.replace(tzinfo=tzinfo)
553 elif res.tzname and res.tzname in time.tzname:
554 ret = ret.replace(tzinfo=tz.tzlocal())
555 elif res.tzoffset == 0:
556 ret = ret.replace(tzinfo=tz.tzutc())
557 elif res.tzoffset:
558 ret = ret.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset))
559 return ret, reso
560
561
562 def _attempt_monthly(val):
563 pats = ['%Y-%m', '%m-%Y', '%b %Y', '%b-%Y']
564 for pat in pats:
565 try:
566 ret = datetime.strptime(val, pat)
567 return ret, ret, 'month'
568 except Exception:
569 pass
570
571
572 def _try_parse_monthly(arg):
573 base = 2000
574 add_base = False
575 default = datetime(1, 1, 1).replace(hour=0, minute=0, second=0,
576 microsecond=0)
577
578 if len(arg) == 4:
579 add_base = True
580 y = int(arg[:2])
581 m = int(arg[2:4])
582 elif len(arg) >= 6: # 201201
583 y = int(arg[:4])
584 m = int(arg[4:6])
585 if add_base:
586 y += base
587 ret = default.replace(year=y, month=m)
588 return ret
589
590
591 normalize_date = tslib.normalize_date
592
593
594 def format(dt):
595 """Returns date in YYYYMMDD format."""
596 return dt.strftime('%Y%m%d')
597
598 OLE_TIME_ZERO = datetime(1899, 12, 30, 0, 0, 0)
599
600
601 def ole2datetime(oledt):
602 """function for converting excel date to normal date format"""
603 val = float(oledt)
604
605 # Excel has a bug where it thinks the date 2/29/1900 exists
606 # we just reject any date before 3/1/1900.
607 if val < 61:
608 raise ValueError("Value is outside of acceptable range: %s " % val)
609
610 return OLE_TIME_ZERO + timedelta(days=val)
611
[end of pandas/tseries/tools.py]
[start of pandas/util/print_versions.py]
1 import os
2 import platform
3 import sys
4 import struct
5 import subprocess
6 import codecs
7
8
9 def get_sys_info():
10 "Returns system information as a dict"
11
12 blob = []
13
14 # get full commit hash
15 commit = None
16 if os.path.isdir(".git") and os.path.isdir("pandas"):
17 try:
18 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "),
19 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
20 so, serr = pipe.communicate()
21 except:
22 pass
23 else:
24 if pipe.returncode == 0:
25 commit = so
26 try:
27 commit = so.decode('utf-8')
28 except ValueError:
29 pass
30 commit = commit.strip().strip('"')
31
32 blob.append(('commit', commit))
33
34 try:
35 sysname, nodename, release, version, machine, processor = platform.uname(
36 )
37 blob.extend([
38 ("python", "%d.%d.%d.%s.%s" % sys.version_info[:]),
39 ("python-bits", struct.calcsize("P") * 8),
40 ("OS", "%s" % (sysname)),
41 ("OS-release", "%s" % (release)),
42 # ("Version", "%s" % (version)),
43 ("machine", "%s" % (machine)),
44 ("processor", "%s" % (processor)),
45 ("byteorder", "%s" % sys.byteorder),
46 ("LC_ALL", "%s" % os.environ.get('LC_ALL', "None")),
47 ("LANG", "%s" % os.environ.get('LANG', "None")),
48
49 ])
50 except:
51 pass
52
53 return blob
54
55
56 def show_versions(as_json=False):
57 import imp
58 sys_info = get_sys_info()
59
60 deps = [
61 # (MODULE_NAME, f(mod) -> mod version)
62 ("pandas", lambda mod: mod.__version__),
63 ("nose", lambda mod: mod.__version__),
64 ("Cython", lambda mod: mod.__version__),
65 ("numpy", lambda mod: mod.version.version),
66 ("scipy", lambda mod: mod.version.version),
67 ("statsmodels", lambda mod: mod.__version__),
68 ("IPython", lambda mod: mod.__version__),
69 ("sphinx", lambda mod: mod.__version__),
70 ("patsy", lambda mod: mod.__version__),
71 ("dateutil", lambda mod: mod.__version__),
72 ("pytz", lambda mod: mod.VERSION),
73 ("bottleneck", lambda mod: mod.__version__),
74 ("tables", lambda mod: mod.__version__),
75 ("numexpr", lambda mod: mod.__version__),
76 ("matplotlib", lambda mod: mod.__version__),
77 ("openpyxl", lambda mod: mod.__version__),
78 ("xlrd", lambda mod: mod.__VERSION__),
79 ("xlwt", lambda mod: mod.__VERSION__),
80 ("xlsxwriter", lambda mod: mod.__version__),
81 ("lxml", lambda mod: mod.etree.__version__),
82 ("bs4", lambda mod: mod.__version__),
83 ("html5lib", lambda mod: mod.__version__),
84 ("httplib2", lambda mod: mod.__version__),
85 ("apiclient", lambda mod: mod.__version__),
86 ("sqlalchemy", lambda mod: mod.__version__),
87 ("pymysql", lambda mod: mod.__version__),
88 ("psycopg2", lambda mod: mod.__version__),
89 ]
90
91 deps_blob = list()
92 for (modname, ver_f) in deps:
93 try:
94 try:
95 mod = imp.load_module(modname, *imp.find_module(modname))
96 except (ImportError):
97 import importlib
98 mod = importlib.import_module(modname)
99 ver = ver_f(mod)
100 deps_blob.append((modname, ver))
101 except:
102 deps_blob.append((modname, None))
103
104 if (as_json):
105 # 2.6-safe
106 try:
107 import json
108 except:
109 import simplejson as json
110
111 j = dict(system=dict(sys_info), dependencies=dict(deps_blob))
112
113 if as_json == True:
114 print(j)
115 else:
116 with codecs.open(as_json, "wb", encoding='utf8') as f:
117 json.dump(j, f, indent=2)
118
119 else:
120
121 print("\nINSTALLED VERSIONS")
122 print("------------------")
123
124 for k, stat in sys_info:
125 print("%s: %s" % (k, stat))
126
127 print("")
128 for k, stat in deps_blob:
129 print("%s: %s" % (k, stat))
130
131
132 def main():
133 # optparse is 2.6-safe
134 from optparse import OptionParser
135 parser = OptionParser()
136 parser.add_option("-j", "--json", metavar="FILE", nargs=1,
137 help="Save output as JSON into file, pass in '-' to output to stdout")
138
139 (options, args) = parser.parse_args()
140
141 if options.json == "-":
142 options.json = True
143
144 show_versions(as_json=options.json)
145
146 return 0
147
148 if __name__ == "__main__":
149 sys.exit(main())
150
[end of pandas/util/print_versions.py]
[start of vb_suite/indexing.py]
1 from vbench.benchmark import Benchmark
2 from datetime import datetime
3
4 SECTION = 'Indexing and scalar value access'
5
6 common_setup = """from pandas_vb_common import *
7 """
8
9 #----------------------------------------------------------------------
10 # Series.__getitem__, get_value, __getitem__(slice)
11
12 setup = common_setup + """
13 tm.N = 1000
14 ts = tm.makeTimeSeries()
15 dt = ts.index[500]
16 """
17 statement = "ts[dt]"
18 bm_getitem = Benchmark(statement, setup, ncalls=100000,
19 name='time_series_getitem_scalar')
20
21 setup = common_setup + """
22 index = tm.makeStringIndex(1000)
23 s = Series(np.random.rand(1000), index=index)
24 idx = index[100]
25 """
26 statement = "s.get_value(idx)"
27 bm_get_value = Benchmark(statement, setup,
28 name='series_get_value',
29 start_date=datetime(2011, 11, 12))
30
31
32 setup = common_setup + """
33 index = tm.makeStringIndex(1000000)
34 s = Series(np.random.rand(1000000), index=index)
35 """
36 series_getitem_pos_slice = Benchmark("s[:800000]", setup,
37 name="series_getitem_pos_slice")
38
39
40 setup = common_setup + """
41 index = tm.makeStringIndex(1000000)
42 s = Series(np.random.rand(1000000), index=index)
43 lbl = s.index[800000]
44 """
45 series_getitem_label_slice = Benchmark("s[:lbl]", setup,
46 name="series_getitem_label_slice")
47
48
49 #----------------------------------------------------------------------
50 # DataFrame __getitem__
51
52 setup = common_setup + """
53 index = tm.makeStringIndex(1000)
54 columns = tm.makeStringIndex(30)
55 df = DataFrame(np.random.rand(1000, 30), index=index,
56 columns=columns)
57 idx = index[100]
58 col = columns[10]
59 """
60 statement = "df[col][idx]"
61 bm_df_getitem = Benchmark(statement, setup,
62 name='dataframe_getitem_scalar')
63
64 setup = common_setup + """
65 try:
66 klass = DataMatrix
67 except:
68 klass = DataFrame
69
70 index = tm.makeStringIndex(1000)
71 columns = tm.makeStringIndex(30)
72 df = klass(np.random.rand(1000, 30), index=index, columns=columns)
73 idx = index[100]
74 col = columns[10]
75 """
76 statement = "df[col][idx]"
77 bm_df_getitem2 = Benchmark(statement, setup,
78 name='datamatrix_getitem_scalar')
79
80
81 #----------------------------------------------------------------------
82 # ix get scalar
83
84 setup = common_setup + """
85 index = tm.makeStringIndex(1000)
86 columns = tm.makeStringIndex(30)
87 df = DataFrame(np.random.randn(1000, 30), index=index, columns=columns)
88 idx = index[100]
89 col = columns[10]
90 """
91
92 indexing_frame_get_value_ix = Benchmark("df.ix[idx,col]", setup,
93 name='indexing_frame_get_value_ix',
94 start_date=datetime(2011, 11, 12))
95
96 indexing_frame_get_value = Benchmark("df.get_value(idx,col)", setup,
97 name='indexing_frame_get_value',
98 start_date=datetime(2011, 11, 12))
99
100 setup = common_setup + """
101 mi = MultiIndex.from_tuples([(x,y) for x in range(1000) for y in range(1000)])
102 s = Series(np.random.randn(1000000), index=mi)
103 """
104
105 series_xs_mi_ix = Benchmark("s.ix[999]", setup,
106 name='series_xs_mi_ix',
107 start_date=datetime(2013, 1, 1))
108
109 setup = common_setup + """
110 mi = MultiIndex.from_tuples([(x,y) for x in range(1000) for y in range(1000)])
111 s = Series(np.random.randn(1000000), index=mi)
112 df = DataFrame(s)
113 """
114
115 frame_xs_mi_ix = Benchmark("df.ix[999]", setup,
116 name='frame_xs_mi_ix',
117 start_date=datetime(2013, 1, 1))
118
119 #----------------------------------------------------------------------
120 # Boolean DataFrame row selection
121
122 setup = common_setup + """
123 df = DataFrame(np.random.randn(10000, 4), columns=['A', 'B', 'C', 'D'])
124 indexer = df['B'] > 0
125 obj_indexer = indexer.astype('O')
126 """
127 indexing_dataframe_boolean_rows = \
128 Benchmark("df[indexer]", setup, name='indexing_dataframe_boolean_rows')
129
130 indexing_dataframe_boolean_rows_object = \
131 Benchmark("df[obj_indexer]", setup,
132 name='indexing_dataframe_boolean_rows_object')
133
134 setup = common_setup + """
135 df = DataFrame(np.random.randn(50000, 100))
136 df2 = DataFrame(np.random.randn(50000, 100))
137 """
138 indexing_dataframe_boolean = \
139 Benchmark("df > df2", setup, name='indexing_dataframe_boolean',
140 start_date=datetime(2012, 1, 1))
141
142 setup = common_setup + """
143 import pandas.computation.expressions as expr
144 df = DataFrame(np.random.randn(50000, 100))
145 df2 = DataFrame(np.random.randn(50000, 100))
146 expr.set_numexpr_threads(1)
147 """
148
149 indexing_dataframe_boolean_st = \
150 Benchmark("df > df2", setup, name='indexing_dataframe_boolean_st',cleanup="expr.set_numexpr_threads()",
151 start_date=datetime(2013, 2, 26))
152
153
154 setup = common_setup + """
155 import pandas.computation.expressions as expr
156 df = DataFrame(np.random.randn(50000, 100))
157 df2 = DataFrame(np.random.randn(50000, 100))
158 expr.set_use_numexpr(False)
159 """
160
161 indexing_dataframe_boolean_no_ne = \
162 Benchmark("df > df2", setup, name='indexing_dataframe_boolean_no_ne',cleanup="expr.set_use_numexpr(True)",
163 start_date=datetime(2013, 2, 26))
164 #----------------------------------------------------------------------
165 # MultiIndex sortlevel
166
167 setup = common_setup + """
168 a = np.repeat(np.arange(100), 1000)
169 b = np.tile(np.arange(1000), 100)
170 midx = MultiIndex.from_arrays([a, b])
171 midx = midx.take(np.random.permutation(np.arange(100000)))
172 """
173 sort_level_zero = Benchmark("midx.sortlevel(0)", setup,
174 start_date=datetime(2012, 1, 1))
175 sort_level_one = Benchmark("midx.sortlevel(1)", setup,
176 start_date=datetime(2012, 1, 1))
177
178 #----------------------------------------------------------------------
179 # Panel subset selection
180
181 setup = common_setup + """
182 p = Panel(np.random.randn(100, 100, 100))
183 inds = range(0, 100, 10)
184 """
185
186 indexing_panel_subset = Benchmark('p.ix[inds, inds, inds]', setup,
187 start_date=datetime(2012, 1, 1))
188
189 #----------------------------------------------------------------------
190 # Iloc
191
192 setup = common_setup + """
193 df = DataFrame({'A' : [0.1] * 3000, 'B' : [1] * 3000})
194 idx = np.array(range(30)) * 99
195 df2 = DataFrame({'A' : [0.1] * 1000, 'B' : [1] * 1000})
196 df2 = concat([df2, 2*df2, 3*df2])
197 """
198
199 frame_iloc_dups = Benchmark('df2.iloc[idx]', setup,
200 start_date=datetime(2013, 1, 1))
201
202 frame_loc_dups = Benchmark('df2.loc[idx]', setup,
203 start_date=datetime(2013, 1, 1))
204
205 setup = common_setup + """
206 df = DataFrame(dict( A = [ 'foo'] * 1000000))
207 """
208
209 frame_iloc_big = Benchmark('df.iloc[:100,0]', setup,
210 start_date=datetime(2013, 1, 1))
211
212 #----------------------------------------------------------------------
213 # basic tests for [], .loc[], .iloc[] and .ix[]
214
215 setup = common_setup + """
216 s = Series(np.random.rand(1000000))
217 """
218
219 series_getitem_scalar = Benchmark("s[800000]", setup)
220 series_getitem_slice = Benchmark("s[:800000]", setup)
221 series_getitem_list_like = Benchmark("s[[800000]]", setup)
222 series_getitem_array = Benchmark("s[np.arange(10000)]", setup)
223
224 series_loc_scalar = Benchmark("s.loc[800000]", setup)
225 series_loc_slice = Benchmark("s.loc[:800000]", setup)
226 series_loc_list_like = Benchmark("s.loc[[800000]]", setup)
227 series_loc_array = Benchmark("s.loc[np.arange(10000)]", setup)
228
229 series_iloc_scalar = Benchmark("s.iloc[800000]", setup)
230 series_iloc_slice = Benchmark("s.iloc[:800000]", setup)
231 series_iloc_list_like = Benchmark("s.iloc[[800000]]", setup)
232 series_iloc_array = Benchmark("s.iloc[np.arange(10000)]", setup)
233
234 series_ix_scalar = Benchmark("s.ix[800000]", setup)
235 series_ix_slice = Benchmark("s.ix[:800000]", setup)
236 series_ix_list_like = Benchmark("s.ix[[800000]]", setup)
237 series_ix_array = Benchmark("s.ix[np.arange(10000)]", setup)
238
[end of vb_suite/indexing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| pandas-dev/pandas | 8d2818e32d0bbb50e183ccb5724c391e4f604670 | [] (__getitem__) boolean indexing assignment bug with nans
See repro below:
``` python
import pandas as pd
import numpy as np
temp = pd.Series(np.random.randn(10))
temp[3:6] = np.nan
temp[8] = np.nan
nan_index = np.isnan(temp)
# this works
temp1 = temp.copy()
temp1[nan_index] = [99, 99, 99, 99]
temp1[nan_index]
3 99
4 99
5 99
8 99
dtype: float64
# this doesn't - values look like they're being assigned in a different order?
temp2 = temp.copy()
temp2[nan_index] = [99, 99, 99, np.nan]
temp2[nan_index]
3 NaN
4 99
5 99
8 99
dtype: float64
# ... but it works properly when using .loc
temp2 = temp.copy()
temp2.loc[nan_index] = [99, 99, 99, np.nan]
temp2[nan_index]
3 99
4 99
5 99
8 NaN
dtype: float64
```
output of show_versions():
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.9.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.16.0
nose: 1.3.4
Cython: 0.21.2
numpy: 1.9.2
scipy: 0.14.0
statsmodels: 0.5.0
IPython: 3.0.0
sphinx: 1.2.3
patsy: 0.2.1
dateutil: 2.4.1
pytz: 2015.2
bottleneck: 0.8.0
tables: 3.1.1
numexpr: 2.3.1
matplotlib: 1.4.0
openpyxl: 2.0.2
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.6.6
lxml: 3.4.2
bs4: 4.3.2
html5lib: 0.999
httplib2: 0.8
apiclient: None
sqlalchemy: 0.9.8
pymysql: None
psycopg2: None
```
| 2015-03-28T14:08:51Z | <patch>
diff --git a/doc/source/whatsnew/v0.16.1.txt b/doc/source/whatsnew/v0.16.1.txt
--- a/doc/source/whatsnew/v0.16.1.txt
+++ b/doc/source/whatsnew/v0.16.1.txt
@@ -64,3 +64,4 @@ Bug Fixes
- Bug in ``Series.quantile`` on empty Series of type ``Datetime`` or ``Timedelta`` (:issue:`9675`)
+- Bug in ``where`` causing incorrect results when upcasting was required (:issue:`9731`)
diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1081,15 +1081,6 @@ def _infer_dtype_from_scalar(val):
return dtype, val
-def _maybe_cast_scalar(dtype, value):
- """ if we a scalar value and are casting to a dtype that needs nan -> NaT
- conversion
- """
- if np.isscalar(value) and dtype in _DATELIKE_DTYPES and isnull(value):
- return tslib.iNaT
- return value
-
-
def _maybe_promote(dtype, fill_value=np.nan):
# if we passed an array here, determine the fill value by dtype
@@ -1154,16 +1145,39 @@ def _maybe_promote(dtype, fill_value=np.nan):
return dtype, fill_value
-def _maybe_upcast_putmask(result, mask, other, dtype=None, change=None):
- """ a safe version of put mask that (potentially upcasts the result
- return the result
- if change is not None, then MUTATE the change (and change the dtype)
- return a changed flag
+def _maybe_upcast_putmask(result, mask, other):
"""
+ A safe version of putmask that potentially upcasts the result
- if mask.any():
+ Parameters
+ ----------
+ result : ndarray
+ The destination array. This will be mutated in-place if no upcasting is
+ necessary.
+ mask : boolean ndarray
+ other : ndarray or scalar
+ The source array or value
- other = _maybe_cast_scalar(result.dtype, other)
+ Returns
+ -------
+ result : ndarray
+ changed : boolean
+ Set to true if the result array was upcasted
+ """
+
+ if mask.any():
+ # Two conversions for date-like dtypes that can't be done automatically
+ # in np.place:
+ # NaN -> NaT
+ # integer or integer array -> date-like array
+ if result.dtype in _DATELIKE_DTYPES:
+ if lib.isscalar(other):
+ if isnull(other):
+ other = tslib.iNaT
+ elif is_integer(other):
+ other = np.array(other, dtype=result.dtype)
+ elif is_integer_dtype(other):
+ other = np.array(other, dtype=result.dtype)
def changeit():
@@ -1173,39 +1187,26 @@ def changeit():
om = other[mask]
om_at = om.astype(result.dtype)
if (om == om_at).all():
- new_other = result.values.copy()
- new_other[mask] = om_at
- result[:] = new_other
+ new_result = result.values.copy()
+ new_result[mask] = om_at
+ result[:] = new_result
return result, False
except:
pass
# we are forced to change the dtype of the result as the input
# isn't compatible
- r, fill_value = _maybe_upcast(
- result, fill_value=other, dtype=dtype, copy=True)
- np.putmask(r, mask, other)
-
- # we need to actually change the dtype here
- if change is not None:
-
- # if we are trying to do something unsafe
- # like put a bigger dtype in a smaller one, use the smaller one
- # pragma: no cover
- if change.dtype.itemsize < r.dtype.itemsize:
- raise AssertionError(
- "cannot change dtype of input to smaller size")
- change.dtype = r.dtype
- change[:] = r
+ r, _ = _maybe_upcast(result, fill_value=other, copy=True)
+ np.place(r, mask, other)
return r, True
- # we want to decide whether putmask will work
+ # we want to decide whether place will work
# if we have nans in the False portion of our mask then we need to
- # upcast (possibily) otherwise we DON't want to upcast (e.g. if we are
- # have values, say integers in the success portion then its ok to not
+ # upcast (possibly), otherwise we DON't want to upcast (e.g. if we
+ # have values, say integers, in the success portion then it's ok to not
# upcast)
- new_dtype, fill_value = _maybe_promote(result.dtype, other)
+ new_dtype, _ = _maybe_promote(result.dtype, other)
if new_dtype != result.dtype:
# we have a scalar or len 0 ndarray
@@ -1222,7 +1223,7 @@ def changeit():
return changeit()
try:
- np.putmask(result, mask, other)
+ np.place(result, mask, other)
except:
return changeit()
</patch> | [] | [] | ||||
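The fix above boils down to replacing `np.putmask` with `np.place` when writing the masked values back. A minimal standalone sketch of the difference, using the positions and values from the repro in the issue (the pandas internals around dtype handling are left out here):

```python
import numpy as np

# Ten floats with NaN at positions 3, 4, 5 and 8, mirroring the repro.
arr = np.arange(10, dtype="float64")
arr[[3, 4, 5, 8]] = np.nan
mask = np.isnan(arr)
values = np.array([99.0, 99.0, 99.0, np.nan])

# np.putmask aligns the (repeated) replacement array with the *whole* target:
# position 3 takes values[3] (NaN), position 4 takes values[4 % 4] (99.0), etc.
a = arr.copy()
np.putmask(a, mask, values)
print(a[mask])  # [nan 99. 99. 99.] -> the misordered result seen via __getitem__

# np.place consumes the replacement values in order, one per True in the mask,
# which matches the .loc behaviour the user expects.
b = arr.copy()
np.place(b, mask, values)
print(b[mask])  # [99. 99. 99. nan]
```

This also lines up with the whatsnew entry in the patch: the wrong ordering only shows up when upcasting is required, here because the assigned list contains a NaN.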
conan-io__conan-5547 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
build_requirements is ignored
I have a package A which build_requires a package B, and a package C which requires A and also build_requires B. When I execute "conan install" for C, Conan skips B. If I remove the requires on A, Conan does not skip B. What I want is for Conan to install both A and B. Any help you can provide would be great.
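A minimal sketch of the three recipes described above (package names, versions and the user/channel are hypothetical, only to make the scenario concrete):

```python
# pkgb/conanfile.py -- "B", used only as a build requirement
from conans import ConanFile

class PkgB(ConanFile):
    name = "pkgb"
    version = "0.1"

# pkga/conanfile.py -- "A" build_requires B
from conans import ConanFile

class PkgA(ConanFile):
    name = "pkga"
    version = "0.1"
    build_requires = "pkgb/0.1@user/testing"

# pkgc/conanfile.py -- "C" requires A and also build_requires B
from conans import ConanFile

class PkgC(ConanFile):
    name = "pkgc"
    version = "0.1"
    requires = "pkga/0.1@user/testing"
    build_requires = "pkgb/0.1@user/testing"
```

With recipes along these lines, a sequence such as `conan create pkgb user/testing`, `conan create pkga user/testing`, `conan install pkgc/conanfile.py` is the kind of flow the report describes: B ends up skipped for C while the `requires` on A is present, and is installed again once that line is removed.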
Thanks
To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
</issue>
<code>
[start of README.rst]
1 |Logo|
2
3 Conan
4 =====
5
6 Decentralized, open-source (MIT), C/C++ package manager.
7
8 - Homepage: https://conan.io/
9 - Github: https://github.com/conan-io/conan
10 - Docs: https://docs.conan.io/en/latest/
11 - Slack: https://cpplang.now.sh/ (#conan channel)
12 - Twitter: https://twitter.com/conan_io
13
14
15 Conan is a package manager for C and C++ developers:
16
17 - It is fully decentralized. Users can host their packages in their servers, privately. Integrates with Artifactory and Bintray.
18 - Portable. Works across all platforms, including Linux, OSX, Windows (with native and first class support, WSL, MinGW),
19 Solaris, FreeBSD, embedded and cross compiling, and Docker.
20 - Manage binaries. It is able to create, upload and download binaries for any configuration and platform,
21 even cross-compiling, saving lots of time in development and continuous integration. The binary compatibility
22 can be configured and customized. Manage all your artifacts in exactly the same way in all platforms.
23 - Integrates with any build system, including any proprietary and custom one. Provides tested support for major build systems
24 (CMake, MSBuild, Makefiles, Meson, etc).
25 - Extensible: Its python based recipes, together with extensions points allows for a great power and flexibility.
26 - Large and active community, especially on GitHub (https://github.com/conan-io/conan) and Slack (https://cpplang.now.sh/ #conan channel).
27 This community also creates and maintains packages in Conan-center and Bincrafters repositories in Bintray.
28 - Stable. Used in production by many companies; since 1.0 there is a commitment not to break package recipes and documented behavior.
29
30
31
32 +------------------------+-------------------------+-------------------------+-------------------------+
33 | **master** | **develop** | **Coverage** | **Code Climate** |
34 +========================+=========================+=========================+=========================+
35 | |Build Status Master| | |Build Status Develop| | |Develop coverage| | |Develop climate| |
36 +------------------------+-------------------------+-------------------------+-------------------------+
37
38
39 Setup
40 =====
41
42 Please read https://docs.conan.io/en/latest/installation.html
43
44 From binaries
45 -------------
46
47 We have installers for `most platforms here <http://conan.io>`__ but you
48 can run **conan** from sources if you want.
49
50 From pip
51 --------
52
53 Conan is compatible with Python 2 and Python 3.
54
55 - Install pip following `pip docs`_.
56 - Install conan:
57
58 .. code-block:: bash
59
60 $ pip install conan
61
62 You can also use `test.pypi.org <https://test.pypi.org/project/conan/#history>`_ repository to install development (non-stable) Conan versions:
63
64
65 .. code-block:: bash
66
67 $ pip install --index-url https://test.pypi.org/simple/ conan
68
69
70 From Homebrew (OSx)
71 -------------------
72
73 - Install Homebrew following `brew homepage`_.
74
75 .. code-block:: bash
76
77 $ brew update
78 $ brew install conan
79
80 From source
81 -----------
82
83 You can run **conan** client and server in Windows, MacOS, and Linux.
84
85 - **Install pip following** `pip docs`_.
86
87 - **Clone conan repository:**
88
89 .. code-block:: bash
90
91 $ git clone https://github.com/conan-io/conan.git
92
93 - **Install in editable mode**
94
95 .. code-block:: bash
96
97 $ cd conan && sudo pip install -e .
98
99 If you are in Windows, using ``sudo`` is not required.
100
101 - **You are ready, try to run conan:**
102
103 .. code-block::
104
105 $ conan --help
106
107 Consumer commands
108 install Installs the requirements specified in a conanfile (.py or .txt).
109 config Manages configuration. Edits the conan.conf or installs config files.
110 get Gets a file or list a directory of a given reference or package.
111 info Gets information about the dependency graph of a recipe.
112 search Searches package recipes and binaries in the local cache or in a remote.
113 Creator commands
114 new Creates a new package recipe template with a 'conanfile.py'.
115 create Builds a binary package for recipe (conanfile.py) located in current dir.
116 upload Uploads a recipe and binary packages to a remote.
117 export Copies the recipe (conanfile.py & associated files) to your local cache.
118 export-pkg Exports a recipe & creates a package with given files calling 'package'.
119 test Test a package, consuming it with a conanfile recipe with a test() method.
120 Package development commands
121 source Calls your local conanfile.py 'source()' method.
122 build Calls your local conanfile.py 'build()' method.
123 package Calls your local conanfile.py 'package()' method.
124 Misc commands
125 profile Lists profiles in the '.conan/profiles' folder, or shows profile details.
126 remote Manages the remote list and the package recipes associated to a remote.
127 user Authenticates against a remote with user/pass, caching the auth token.
128 imports Calls your local conanfile.py or conanfile.txt 'imports' method.
129 copy Copies conan recipes and packages to another user/channel.
130 remove Removes packages or binaries matching pattern from local cache or remote.
131 alias Creates and exports an 'alias recipe'.
132 download Downloads recipe and binaries to the local cache, without using settings.
133
134 Conan commands. Type "conan <command> -h" for help
135
136 Contributing to the project
137 ===========================
138
139 Feedback and contribution is always welcome in this project.
140 Please read our `contributing guide <https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md>`_.
141
142 Running the tests
143 =================
144
145 Using tox
146 ---------
147
148 .. code-block:: bash
149
150 $ tox
151
152 It will install the needed requirements and launch `nose` skipping some heavy and slow tests.
153 If you want to run the full test suite:
154
155 .. code-block:: bash
156
157 $ tox -e full
158
159 Without tox
160 -----------
161
162 **Install python requirements**
163
164 .. code-block:: bash
165
166 $ pip install -r conans/requirements.txt
167 $ pip install -r conans/requirements_server.txt
168 $ pip install -r conans/requirements_dev.txt
169
170
171 Only in OSX:
172
173 .. code-block:: bash
174
175 $ pip install -r conans/requirements_osx.txt # You can omit this one if not running OSX
176
177
178 If you are not on Windows and you are not using a Python virtual environment, you will need to run these
179 commands using `sudo`.
180
181 Before you can run the tests, you need to set a few environment variables first.
182
183 .. code-block:: bash
184
185 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
186
187 On Windows it would be (while being in the conan root directory):
188
189 .. code-block:: bash
190
191 $ set PYTHONPATH=.
192
193 Ensure that your ``cmake`` has version 2.8 or later. You can see the
194 version with the following command:
195
196 .. code-block:: bash
197
198 $ cmake --version
199
200 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your
201 operating system and your requirements.
202
203 These should work for the GCC from ``build-essential`` on Ubuntu 14.04:
204
205 .. code-block:: bash
206
207 $ export CONAN_COMPILER=gcc
208 $ export CONAN_COMPILER_VERSION=4.8
209
210 These should work for OS X:
211
212 .. code-block:: bash
213
214 $ export CONAN_COMPILER=clang
215 $ export CONAN_COMPILER_VERSION=3.5
216
217 Finally, there are some tests that use conan to package Go-lang
218 libraries, so you might **need to install go-lang** in your computer and
219 add it to the path.
220
221 You can run the actual tests like this:
222
223 .. code-block:: bash
224
225 $ nosetests .
226
227
228 There are a couple of test attributes defined, as ``slow``, or ``golang`` that you can use
229 to filter the tests, and do not execute them:
230
231 .. code-block:: bash
232
233 $ nosetests . -a !golang
234
235 A few minutes later it should print ``OK``:
236
237 .. code-block:: bash
238
239 ............................................................................................
240 ----------------------------------------------------------------------
241 Ran 146 tests in 50.993s
242
243 OK
244
245 To run specific tests, you can specify the test name too, something like:
246
247 .. code-block:: bash
248
249 $ nosetests conans.test.command.config_install_test:ConfigInstallTest.install_file_test --nocapture
250
251 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests.
252
253 License
254 -------
255
256 `MIT LICENSE <./LICENSE.md>`__
257
258 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master
259 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master
260
261 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop
262 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop
263
264 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg
265 :target: https://codecov.io/gh/conan-io/conan/branch/master
266
267 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg
268 :target: https://codecov.io/gh/conan-io/conan/branch/develop
269
270 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg
271 :height: 50px
272 :width: 50 px
273 :alt: Conan develop coverage
274
275 .. |Develop climate| image:: https://api.codeclimate.com/v1/badges/081b53e570d5220b34e4/maintainability.svg
276 :target: https://codeclimate.com/github/conan-io/conan/maintainability
277
278 .. |Logo| image:: https://conan.io/img/jfrog_conan_logo.png
279
280
281 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/
282
283 .. _`brew homepage`: http://brew.sh/
284
[end of README.rst]
[start of conans/client/command.py]
1 import inspect
2 import json
3 import os
4 import sys
5
6 import argparse
7 import six
8 from argparse import ArgumentError
9
10 from conans import __version__ as client_version
11 from conans.client.cmd.uploader import UPLOAD_POLICY_FORCE, \
12 UPLOAD_POLICY_NO_OVERWRITE, UPLOAD_POLICY_NO_OVERWRITE_RECIPE, UPLOAD_POLICY_SKIP
13 from conans.client.conan_api import (Conan, default_manifest_folder, _make_abs_path)
14 from conans.client.conan_command_output import CommandOutputer
15 from conans.client.output import Color
16 from conans.client.printer import Printer
17 from conans.errors import ConanException, ConanInvalidConfiguration, NoRemoteAvailable, \
18 ConanMigrationError
19 from conans.model.ref import ConanFileReference, PackageReference, get_reference_fields, \
20 check_valid_ref
21 from conans.unicode import get_cwd
22 from conans.util.config_parser import get_bool_from_text
23 from conans.util.files import exception_message_safe
24 from conans.util.files import save
25 from conans.util.log import logger
26
27 # Exit codes for conan command:
28 SUCCESS = 0 # 0: Success (done)
29 ERROR_GENERAL = 1 # 1: General ConanException error (done)
30 ERROR_MIGRATION = 2 # 2: Migration error
31 USER_CTRL_C = 3 # 3: Ctrl+C
32 USER_CTRL_BREAK = 4 # 4: Ctrl+Break
33 ERROR_SIGTERM = 5 # 5: SIGTERM
34 ERROR_INVALID_CONFIGURATION = 6 # 6: Invalid configuration (done)
35
36
37 class Extender(argparse.Action):
38 """Allows to use the same flag several times in a command and creates a list with the values.
39 For example:
40 conan install MyPackage/1.2@user/channel -o qt:value -o mode:2 -s cucumber:true
41 It creates:
42 options = ['qt:value', 'mode:2']
43 settings = ['cucumber:true']
44 """
45 def __call__(self, parser, namespace, values, option_strings=None): # @UnusedVariable
46 # Need None here incase `argparse.SUPPRESS` was supplied for `dest`
47 dest = getattr(namespace, self.dest, None)
48 if not hasattr(dest, 'extend') or dest == self.default:
49 dest = []
50 setattr(namespace, self.dest, dest)
51 # if default isn't set to None, this method might be called
52 # with the default as `values` for other arguments which
53 # share this destination.
54 parser.set_defaults(**{self.dest: None})
55
56 if isinstance(values, str):
57 dest.append(values)
58 elif values:
59 try:
60 dest.extend(values)
61 except ValueError:
62 dest.append(values)
63
64
65 class OnceArgument(argparse.Action):
66 """Allows to declare a parameter that can have only one value, by default argparse takes the
67 latest declared and it's very confusing.
68 """
69 def __call__(self, parser, namespace, values, option_string=None):
70 if getattr(namespace, self.dest) is not None and self.default is None:
71 msg = '{o} can only be specified once'.format(o=option_string)
72 raise argparse.ArgumentError(None, msg)
73 setattr(namespace, self.dest, values)
74
75
76 class SmartFormatter(argparse.HelpFormatter):
77
78 def _fill_text(self, text, width, indent):
79 import textwrap
80 text = textwrap.dedent(text)
81 return ''.join(indent + line for line in text.splitlines(True))
82
83
84 _QUERY_EXAMPLE = ("os=Windows AND (arch=x86 OR compiler=gcc)")
85 _PATTERN_EXAMPLE = ("boost/*")
86 _REFERENCE_EXAMPLE = ("MyPackage/1.2@user/channel")
87 _PREF_EXAMPLE = ("MyPackage/1.2@user/channel:af7901d8bdfde621d086181aa1c495c25a17b137")
88
89 _BUILD_FOLDER_HELP = ("Directory for the build process. Defaulted to the current directory. A "
90 "relative path to current directory can also be specified")
91 _INSTALL_FOLDER_HELP = ("Directory containing the conaninfo.txt and conanbuildinfo.txt files "
92 "(from previous 'conan install'). Defaulted to --build-folder")
93 _KEEP_SOURCE_HELP = ("Do not remove the source folder in local cache, even if the recipe changed. "
94 "Use this for testing purposes only")
95 _PATTERN_OR_REFERENCE_HELP = ("Pattern or package recipe reference, e.g., '%s', "
96 "'%s'" % (_PATTERN_EXAMPLE, _REFERENCE_EXAMPLE))
97 _PATTERN_REF_OR_PREF_HELP = ("Pattern, recipe reference or package reference e.g., '%s', "
98 "'%s', '%s'" % (_PATTERN_EXAMPLE, _REFERENCE_EXAMPLE, _PREF_EXAMPLE))
99 _REF_OR_PREF_HELP = ("Recipe reference or package reference e.g., '%s', "
100 "'%s'" % (_REFERENCE_EXAMPLE, _PREF_EXAMPLE))
101 _PATH_HELP = ("Path to a folder containing a conanfile.py or to a recipe file "
102 "e.g., my_folder/conanfile.py")
103 _QUERY_HELP = ("Packages query: '%s'. The 'pattern_or_reference' parameter has "
104 "to be a reference: %s" % (_QUERY_EXAMPLE, _REFERENCE_EXAMPLE))
105 _SOURCE_FOLDER_HELP = ("Directory containing the sources. Defaulted to the conanfile's directory. A"
106 " relative path to current directory can also be specified")
107
108
109 class Command(object):
110 """A single command of the conan application, with all the first level commands. Manages the
111 parsing of parameters and delegates functionality in collaborators. It can also show help of the
112 tool.
113 """
114 def __init__(self, conan_api):
115 assert isinstance(conan_api, Conan)
116 self._conan = conan_api
117 self._out = conan_api._user_io.out
118
119 @property
120 def _outputer(self):
121 return CommandOutputer(self._out, self._conan._cache)
122
123 def help(self, *args):
124 """
125 Shows help for a specific command.
126 """
127 parser = argparse.ArgumentParser(description=self.help.__doc__,
128 prog="conan help",
129 formatter_class=SmartFormatter)
130 parser.add_argument("command", help='command', nargs="?")
131 args = parser.parse_args(*args)
132 if not args.command:
133 self._show_help()
134 return
135 try:
136 commands = self._commands()
137 method = commands[args.command]
138 self._warn_python2()
139 method(["--help"])
140 except KeyError:
141 raise ConanException("Unknown command '%s'" % args.command)
142
143 def new(self, *args):
144 """
145 Creates a new package recipe template with a 'conanfile.py' and optionally,
146 'test_package' testing files.
147 """
148 parser = argparse.ArgumentParser(description=self.new.__doc__,
149 prog="conan new",
150 formatter_class=SmartFormatter)
151 parser.add_argument("name", help='Package name, e.g.: "Poco/1.7.3" or complete reference'
152 ' for CI scripts: "Poco/1.7.3@conan/stable"')
153 parser.add_argument("-t", "--test", action='store_true', default=False,
154 help='Create test_package skeleton to test package')
155 parser.add_argument("-i", "--header", action='store_true', default=False,
156 help='Create a headers only package template')
157 parser.add_argument("-c", "--pure-c", action='store_true', default=False,
158 help='Create a C language package only package, '
159 'deleting "self.settings.compiler.libcxx" setting '
160 'in the configure method')
161 parser.add_argument("-s", "--sources", action='store_true', default=False,
162 help='Create a package with embedded sources in "src" folder, '
163 'using "exports_sources" instead of retrieving external code with '
164 'the "source()" method')
165 parser.add_argument("-b", "--bare", action='store_true', default=False,
166 help='Create the minimum package recipe, without build() method. '
167 'Useful in combination with "export-pkg" command')
168 parser.add_argument("-m", "--template",
169 help='Use the given template from the local cache for conanfile.py')
170 parser.add_argument("-cis", "--ci-shared", action='store_true',
171 default=False,
172 help='Package will have a "shared" option to be used in CI')
173 parser.add_argument("-cilg", "--ci-travis-gcc", action='store_true',
174 default=False,
175 help='Generate travis-ci files for linux gcc')
176 parser.add_argument("-cilc", "--ci-travis-clang", action='store_true',
177 default=False,
178 help='Generate travis-ci files for linux clang')
179 parser.add_argument("-cio", "--ci-travis-osx", action='store_true',
180 default=False,
181 help='Generate travis-ci files for OSX apple-clang')
182 parser.add_argument("-ciw", "--ci-appveyor-win", action='store_true',
183 default=False, help='Generate appveyor files for Appveyor '
184 'Visual Studio')
185 parser.add_argument("-ciglg", "--ci-gitlab-gcc", action='store_true',
186 default=False,
187 help='Generate GitLab files for linux gcc')
188 parser.add_argument("-ciglc", "--ci-gitlab-clang", action='store_true',
189 default=False,
190 help='Generate GitLab files for linux clang')
191 parser.add_argument("-ciccg", "--ci-circleci-gcc", action='store_true',
192 default=False,
193 help='Generate CircleCI files for linux gcc')
194 parser.add_argument("-ciccc", "--ci-circleci-clang", action='store_true',
195 default=False,
196 help='Generate CircleCI files for linux clang')
197 parser.add_argument("-cicco", "--ci-circleci-osx", action='store_true',
198 default=False,
199 help='Generate CircleCI files for OSX apple-clang')
200 parser.add_argument("-gi", "--gitignore", action='store_true', default=False,
201 help='Generate a .gitignore with the known patterns to excluded')
202 parser.add_argument("-ciu", "--ci-upload-url",
203 help='Define URL of the repository to upload')
204
205 args = parser.parse_args(*args)
206 self._warn_python2()
207 self._conan.new(args.name, header=args.header, pure_c=args.pure_c, test=args.test,
208 exports_sources=args.sources, bare=args.bare,
209 visual_versions=args.ci_appveyor_win,
210 linux_gcc_versions=args.ci_travis_gcc,
211 linux_clang_versions=args.ci_travis_clang,
212 gitignore=args.gitignore,
213 osx_clang_versions=args.ci_travis_osx, shared=args.ci_shared,
214 upload_url=args.ci_upload_url,
215 gitlab_gcc_versions=args.ci_gitlab_gcc,
216 gitlab_clang_versions=args.ci_gitlab_clang,
217 circleci_gcc_versions=args.ci_circleci_gcc,
218 circleci_clang_versions=args.ci_circleci_clang,
219 circleci_osx_versions=args.ci_circleci_osx,
220 template=args.template)
221
222 def inspect(self, *args):
223 """
224 Displays conanfile attributes, like name, version and options. Works locally,
225 in local cache and remote.
226 """
227 parser = argparse.ArgumentParser(description=self.inspect.__doc__,
228 prog="conan inspect",
229 formatter_class=SmartFormatter)
230 parser.add_argument("path_or_reference", help="Path to a folder containing a recipe"
231 " (conanfile.py) or to a recipe file. e.g., "
232 "./my_project/conanfile.py. It could also be a reference")
233 parser.add_argument("-a", "--attribute", help='The attribute to be displayed, e.g "name"',
234 nargs="?", action=Extender)
235 parser.add_argument("-r", "--remote", help='look in the specified remote server',
236 action=OnceArgument)
237 parser.add_argument("-j", "--json", default=None, action=OnceArgument,
238 help='json output file')
239 parser.add_argument('--raw', default=None, action=OnceArgument,
240 help='Print just the value of the requested attribute')
241
242 args = parser.parse_args(*args)
243
244 if args.raw and args.attribute:
245 raise ConanException("Argument '--raw' is incompatible with '-a'")
246
247 if args.raw and args.json:
248 raise ConanException("Argument '--raw' is incompatible with '--json'")
249
250 attributes = [args.raw, ] if args.raw else args.attribute
251
252 result = self._conan.inspect(args.path_or_reference, attributes, args.remote)
253 Printer(self._out).print_inspect(result, raw=args.raw)
254 if args.json:
255 json_output = json.dumps(result)
256 if not os.path.isabs(args.json):
257 json_output_file = os.path.join(get_cwd(), args.json)
258 else:
259 json_output_file = args.json
260 save(json_output_file, json_output)
261
262 def test(self, *args):
263 """
264 Tests a package consuming it from a conanfile.py with a test() method.
265
266 This command installs the conanfile dependencies (including the tested
267 package), calls a 'conan build' to build test apps and finally executes
268 the test() method. The testing recipe does not require name or version,
269 neither definition of package() or package_info() methods. The package
270 to be tested must exist in the local cache or in any configured remote.
271 """
272 parser = argparse.ArgumentParser(description=self.test.__doc__,
273 prog="conan test",
274 formatter_class=SmartFormatter)
275 parser.add_argument("path", help='Path to the "testing" folder containing a conanfile.py or'
276 ' to a recipe file with test() method'
277 'e.g. conan test_package/conanfile.py pkg/version@user/channel')
278 parser.add_argument("reference",
279 help='pkg/version@user/channel of the package to be tested')
280 parser.add_argument("-tbf", "--test-build-folder", action=OnceArgument,
281 help="Working directory of the build process.")
282
283 _add_common_install_arguments(parser, build_help=_help_build_policies)
284 args = parser.parse_args(*args)
285 self._warn_python2()
286 return self._conan.test(args.path, args.reference, args.profile, args.settings,
287 args.options, args.env, args.remote, args.update,
288 build_modes=args.build, test_build_folder=args.test_build_folder,
289 lockfile=args.lockfile)
290
291 def create(self, *args):
292 """
293 Builds a binary package for a recipe (conanfile.py).
294
295 Uses the specified configuration in a profile or in -s settings, -o
296 options etc. If a 'test_package' folder (the name can be configured
297 with -tf) is found, the command will run the consumer project to ensure
298 that the package has been created correctly. Check 'conan test' command
299 to know more about 'test_folder' project.
300 """
301 parser = argparse.ArgumentParser(description=self.create.__doc__,
302 prog="conan create",
303 formatter_class=SmartFormatter)
304 parser.add_argument("path", help=_PATH_HELP)
305 parser.add_argument("reference", nargs='?', default=None,
306 help='user/channel, version@user/channel or pkg/version@user/channel '
307 '(if name or version declared in conanfile.py, they should match)')
308 parser.add_argument("-j", "--json", default=None, action=OnceArgument,
309 help='json file path where the install information will be written to')
310 parser.add_argument('-k', '-ks', '--keep-source', default=False, action='store_true',
311 help=_KEEP_SOURCE_HELP)
312 parser.add_argument('-kb', '--keep-build', default=False, action='store_true',
313 help='Do not remove the build folder in local cache. '
314 'Implies --keep-source. '
315 'Use this for testing purposes only')
316 parser.add_argument("-ne", "--not-export", default=False, action='store_true',
317 help='Do not export the conanfile.py')
318 parser.add_argument("-tbf", "--test-build-folder", action=OnceArgument,
319 help='Working directory for the build of the test project.')
320 parser.add_argument("-tf", "--test-folder", action=OnceArgument,
321 help='Alternative test folder name. By default it is "test_package". '
322 'Use "None" to skip the test stage')
323
324 _add_manifests_arguments(parser)
325 _add_common_install_arguments(parser, build_help=_help_build_policies)
326
327 args = parser.parse_args(*args)
328 self._warn_python2()
329
330 name, version, user, channel, _ = get_reference_fields(args.reference,
331 user_channel_input=True)
332
333 if any([user, channel]) and not all([user, channel]):
334 # Or user/channel or nothing, but not partial
335 raise ConanException("Invalid parameter '%s', "
336 "specify the full reference or user/channel" % args.reference)
337
338 if args.test_folder == "None":
339 # Now if parameter --test-folder=None (string None) we have to skip tests
340 args.test_folder = False
341
342 cwd = get_cwd()
343
344 info = None
345 try:
346 info = self._conan.create(args.path, name, version, user, channel,
347 args.profile, args.settings, args.options,
348 args.env, args.test_folder, args.not_export,
349 args.build, args.keep_source, args.keep_build, args.verify,
350 args.manifests, args.manifests_interactive,
351 args.remote, args.update,
352 test_build_folder=args.test_build_folder,
353 lockfile=args.lockfile)
354 except ConanException as exc:
355 info = exc.info
356 raise
357 finally:
358 if args.json and info:
359 self._outputer.json_output(info, args.json, cwd)
360
361 def download(self, *args):
362 """
363 Downloads recipe and binaries to the local cache, without using settings.
364
365 It works specifying the recipe reference and package ID to be
366 installed. Not transitive, requirements of the specified reference will
367 NOT be retrieved. Useful together with 'conan copy' to automate the
368 promotion of packages to a different user/channel. Only if a reference
369 is specified, it will download all packages from the specified remote.
370 If no remote is specified, it will use the default remote.
371 """
372
373 parser = argparse.ArgumentParser(description=self.download.__doc__,
374 prog="conan download",
375 formatter_class=SmartFormatter)
376 parser.add_argument("reference",
377 help='pkg/version@user/channel')
378 parser.add_argument("-p", "--package", nargs=1, action=Extender,
379 help='Force install specified package ID (ignore settings/options)'
380 ' [DEPRECATED: use full reference instead]')
381 parser.add_argument("-r", "--remote", help='look in the specified remote server',
382 action=OnceArgument)
383 parser.add_argument("-re", "--recipe", help='Downloads only the recipe', default=False,
384 action="store_true")
385
386 args = parser.parse_args(*args)
387
388 try:
389 pref = PackageReference.loads(args.reference, validate=True)
390 except ConanException:
391 reference = args.reference
392 packages_list = args.package
393
394 if packages_list:
395 self._out.warn("Usage of `--package` argument is deprecated."
396 " Use a full reference instead: "
397 "`conan download [...] {}:{}`".format(reference, packages_list[0]))
398 else:
399 reference = repr(pref.ref)
400 packages_list = [pref.id]
401 if args.package:
402 raise ConanException("Use a full package reference (preferred) or the `--package`"
403 " command argument, but not both.")
404
405 self._warn_python2()
406 return self._conan.download(reference=reference, packages=packages_list,
407 remote_name=args.remote, recipe=args.recipe)
408
409 def install(self, *args):
410 """
411 Installs the requirements specified in a recipe (conanfile.py or conanfile.txt).
412
413 It can also be used to install a concrete package specifying a
414 reference. If any requirement is not found in the local cache, it will
415 retrieve the recipe from a remote, looking for it sequentially in the
416 configured remotes. When the recipes have been downloaded it will try
417 to download a binary package matching the specified settings, only from
418 the remote from which the recipe was retrieved. If no binary package is
419 found, it can be build from sources using the '--build' option. When
420 the package is installed, Conan will write the files for the specified
421 generators.
422 """
423 parser = argparse.ArgumentParser(description=self.install.__doc__,
424 prog="conan install",
425 formatter_class=SmartFormatter)
426 parser.add_argument("path_or_reference", help="Path to a folder containing a recipe"
427 " (conanfile.py or conanfile.txt) or to a recipe file. e.g., "
428 "./my_project/conanfile.txt. It could also be a reference")
429 parser.add_argument("reference", nargs="?",
430 help='Reference for the conanfile path of the first argument: '
431 'user/channel, version@user/channel or pkg/version@user/channel'
432 '(if name or version declared in conanfile.py, they should match)')
433 parser.add_argument("-g", "--generator", nargs=1, action=Extender,
434 help='Generators to use')
435 parser.add_argument("-if", "--install-folder", action=OnceArgument,
436 help='Use this directory as the directory where to put the generator'
437 'files. e.g., conaninfo/conanbuildinfo.txt')
438 _add_manifests_arguments(parser)
439
440 parser.add_argument("--no-imports", action='store_true', default=False,
441 help='Install specified packages but avoid running imports')
442 parser.add_argument("-j", "--json", default=None, action=OnceArgument,
443 help='Path to a json file where the install information will be '
444 'written')
445
446 _add_common_install_arguments(parser, build_help=_help_build_policies)
447
448 args = parser.parse_args(*args)
449 cwd = get_cwd()
450
451 # We need @ otherwise it could be a path, so check strict
452 path_is_reference = check_valid_ref(args.path_or_reference)
453
454 info = None
455 try:
456 if not path_is_reference:
457 name, version, user, channel, _ = get_reference_fields(args.reference,
458 user_channel_input=True)
459 info = self._conan.install(path=args.path_or_reference,
460 name=name, version=version, user=user, channel=channel,
461 settings=args.settings, options=args.options,
462 env=args.env,
463 remote_name=args.remote,
464 verify=args.verify, manifests=args.manifests,
465 manifests_interactive=args.manifests_interactive,
466 build=args.build, profile_names=args.profile,
467 update=args.update, generators=args.generator,
468 no_imports=args.no_imports,
469 install_folder=args.install_folder,
470 lockfile=args.lockfile)
471 else:
472 if args.reference:
473 raise ConanException("A full reference was provided as first argument, second "
474 "argument not allowed")
475
476 ref = ConanFileReference.loads(args.path_or_reference, validate=False)
477 manifest_interactive = args.manifests_interactive
478 info = self._conan.install_reference(ref, settings=args.settings,
479 options=args.options,
480 env=args.env,
481 remote_name=args.remote,
482 verify=args.verify, manifests=args.manifests,
483 manifests_interactive=manifest_interactive,
484 build=args.build, profile_names=args.profile,
485 update=args.update,
486 generators=args.generator,
487 install_folder=args.install_folder,
488 lockfile=args.lockfile)
489
490 except ConanException as exc:
491 info = exc.info
492 raise
493 finally:
494 if args.json and info:
495 self._outputer.json_output(info, args.json, cwd)
496
497 def config(self, *args):
498 """
499 Manages Conan configuration.
500
501 Used to edit conan.conf, or install config files.
502 """
503 parser = argparse.ArgumentParser(description=self.config.__doc__,
504 prog="conan config",
505 formatter_class=SmartFormatter)
506
507 subparsers = parser.add_subparsers(dest='subcommand', help='sub-command help')
508 subparsers.required = True
509
510 rm_subparser = subparsers.add_parser('rm', help='Remove an existing config element')
511 set_subparser = subparsers.add_parser('set', help='Set a value for a configuration item')
512 get_subparser = subparsers.add_parser('get', help='Get the value of configuration item')
513         install_subparser = subparsers.add_parser('install', help='Install a full configuration '
514                                                   'from a git repository, local folder or zip file')
515 rm_subparser.add_argument("item", help="Item to remove")
516 get_subparser.add_argument("item", nargs="?", help="Item to print")
517 set_subparser.add_argument("item", help="'item=value' to set")
518 install_subparser.add_argument("item", nargs="?",
519 help="git repository, local folder or zip file (local or "
520 "http) where the configuration is stored")
521
522 install_subparser.add_argument("--verify-ssl", nargs="?", default="True",
523 help='Verify SSL connection when downloading file')
524 install_subparser.add_argument("--type", "-t", choices=["git"],
525 help='Type of remote config')
526 install_subparser.add_argument("--args", "-a",
527 help='String with extra arguments for "git clone"')
528 install_subparser.add_argument("-sf", "--source-folder",
529 help='Install files only from a source subfolder from the '
530 'specified origin')
531 install_subparser.add_argument("-tf", "--target-folder",
532 help='Install to that path in the conan cache')
533
534 args = parser.parse_args(*args)
535
536 if args.subcommand == "set":
537 try:
538 key, value = args.item.split("=", 1)
539 except ValueError:
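                    # 'hooks.' entries are accepted without a value (value is left as None)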
540 if "hooks." in args.item:
541 key, value = args.item.split("=", 1)[0], None
542 else:
543 raise ConanException("Please specify 'key=value'")
544 return self._conan.config_set(key, value)
545 elif args.subcommand == "get":
546 return self._conan.config_get(args.item)
547 elif args.subcommand == "rm":
548 return self._conan.config_rm(args.item)
549 elif args.subcommand == "install":
550 verify_ssl = get_bool_from_text(args.verify_ssl)
551 return self._conan.config_install(args.item, verify_ssl, args.type, args.args,
552 source_folder=args.source_folder,
553 target_folder=args.target_folder)
554
555 def info(self, *args):
556 """
557 Gets information about the dependency graph of a recipe.
558
559 It can be used with a recipe or a reference for any existing package in
560 your local cache.
561 """
562
563 info_only_options = ["id", "build_id", "remote", "url", "license", "requires", "update",
564 "required", "date", "author", "None"]
565 path_only_options = ["export_folder", "build_folder", "package_folder", "source_folder"]
566 str_path_only_options = ", ".join(['"%s"' % field for field in path_only_options])
567 str_only_options = ", ".join(['"%s"' % field for field in info_only_options])
568
569 parser = argparse.ArgumentParser(description=self.info.__doc__,
570 prog="conan info",
571 formatter_class=SmartFormatter)
572 parser.add_argument("path_or_reference", help="Path to a folder containing a recipe"
573 " (conanfile.py or conanfile.txt) or to a recipe file. e.g., "
574 "./my_project/conanfile.txt. It could also be a reference")
575 parser.add_argument("--paths", action='store_true', default=False,
576 help='Show package paths in local cache')
577 parser.add_argument("-bo", "--build-order",
578 help='given a modified reference, return an ordered list to build (CI)',
579 nargs=1, action=Extender)
580 parser.add_argument("-g", "--graph", action=OnceArgument,
581 help='Creates file with project dependencies graph. It will generate '
582 'a DOT or HTML file depending on the filename extension')
583 parser.add_argument("-if", "--install-folder", action=OnceArgument,
584 help="local folder containing the conaninfo.txt and conanbuildinfo.txt "
585 "files (from a previous conan install execution). Defaulted to "
586 "current folder, unless --profile, -s or -o is specified. If you "
587 "specify both install-folder and any setting/option "
588 "it will raise an error.")
589 parser.add_argument("-j", "--json", nargs='?', const="1", type=str,
590 help='Path to a json file where the information will be written')
591 parser.add_argument("-n", "--only", nargs=1, action=Extender,
592 help="Show only the specified fields: %s. '--paths' information can "
593 "also be filtered with options %s. Use '--only None' to show only "
594 "references." % (str_only_options, str_path_only_options))
595 parser.add_argument("--package-filter", nargs='?',
596 help='Print information only for packages that match the filter pattern'
597 ' e.g., MyPackage/1.2@user/channel or MyPackage*')
598 dry_build_help = ("Apply the --build argument to output the information, "
599 "as it would be done by the install command")
600 parser.add_argument("-db", "--dry-build", action=Extender, nargs="?", help=dry_build_help)
601 build_help = ("Given a build policy, return an ordered list of packages that would be built"
602 " from sources during the install command")
603
604 _add_common_install_arguments(parser, build_help=build_help)
605 args = parser.parse_args(*args)
606
607 if args.install_folder and (args.profile or args.settings or args.options or args.env):
608 raise ArgumentError(None,
609 "--install-folder cannot be used together with -s, -o, -e or -pr")
610 if args.build_order and args.graph:
611 raise ArgumentError(None,
612 "--build-order cannot be used together with --graph")
613
614 # BUILD ORDER ONLY
615 if args.build_order:
616 ret = self._conan.info_build_order(args.path_or_reference,
617 settings=args.settings,
618 options=args.options,
619 env=args.env,
620 profile_names=args.profile,
621 remote_name=args.remote,
622 build_order=args.build_order,
623 check_updates=args.update,
624 install_folder=args.install_folder)
625 if args.json:
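                    # const "1" means --json was given without a path; pass True to the outputer instead of a filename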
626 json_arg = True if args.json == "1" else args.json
627 self._outputer.json_build_order(ret, json_arg, get_cwd())
628 else:
629 self._outputer.build_order(ret)
630
631 # INSTALL SIMULATION, NODES TO INSTALL
632 elif args.build is not None:
633 nodes, _ = self._conan.info_nodes_to_build(args.path_or_reference,
634 build_modes=args.build,
635 settings=args.settings,
636 options=args.options,
637 env=args.env,
638 profile_names=args.profile,
639 remote_name=args.remote,
640 check_updates=args.update,
641 install_folder=args.install_folder)
642 if args.json:
643 json_arg = True if args.json == "1" else args.json
644 self._outputer.json_nodes_to_build(nodes, json_arg, get_cwd())
645 else:
646 self._outputer.nodes_to_build(nodes)
647
648 # INFO ABOUT DEPS OF CURRENT PROJECT OR REFERENCE
649 else:
650 data = self._conan.info(args.path_or_reference,
651 remote_name=args.remote,
652 settings=args.settings,
653 options=args.options,
654 env=args.env,
655 profile_names=args.profile,
656 update=args.update,
657 install_folder=args.install_folder,
658 build=args.dry_build,
659 lockfile=args.lockfile)
660 deps_graph, _ = data
661 only = args.only
662 if args.only == ["None"]:
663 only = []
664 if only and args.paths and (set(only) - set(path_only_options)):
665 raise ConanException("Invalid --only value '%s' with --path specified, allowed "
666 "values: [%s]." % (only, str_path_only_options))
667 elif only and not args.paths and (set(only) - set(info_only_options)):
668 raise ConanException("Invalid --only value '%s', allowed values: [%s].\n"
669 "Use --only=None to show only the references."
670 % (only, str_only_options))
671
672 if args.graph:
673 self._outputer.info_graph(args.graph, deps_graph, get_cwd())
674 if args.json:
675 json_arg = True if args.json == "1" else args.json
676 self._outputer.json_info(deps_graph, json_arg, get_cwd(), show_paths=args.paths)
677
678 if not args.graph and not args.json:
679 self._outputer.info(deps_graph, only, args.package_filter, args.paths)
680
681 def source(self, *args):
682 """
683 Calls your local conanfile.py 'source()' method.
684
685 Usually downloads and uncompresses the package sources.
686 """
687 parser = argparse.ArgumentParser(description=self.source.__doc__,
688 prog="conan source",
689 formatter_class=SmartFormatter)
690 parser.add_argument("path", help=_PATH_HELP)
691 parser.add_argument("-sf", "--source-folder", action=OnceArgument,
692 help='Destination directory. Defaulted to current directory')
693 parser.add_argument("-if", "--install-folder", action=OnceArgument,
694 help=_INSTALL_FOLDER_HELP + " Optional, source method will run without "
695 "the information retrieved from the conaninfo.txt and "
696 "conanbuildinfo.txt, only required when using conditional source() "
697 "based on settings, options, env_info and user_info")
698 args = parser.parse_args(*args)
699
700 try:
701 if "@" in args.path and ConanFileReference.loads(args.path):
702 raise ArgumentError(None,
703 "'conan source' doesn't accept a reference anymore. "
704 "If you were using it as a concurrency workaround, "
705 "you can call 'conan install' simultaneously from several "
706 "different processes, the concurrency is now natively supported"
707 ". The path parameter should be a folder containing a "
708 "conanfile.py file.")
709 except ConanException:
710 pass
711
712 self._warn_python2()
713 return self._conan.source(args.path, args.source_folder, args.install_folder)
714
715 def build(self, *args):
716 """
717 Calls your local conanfile.py 'build()' method.
718
719 The recipe will be built in the local directory specified by
720 --build-folder, reading the sources from --source-folder. If you are
721 using a build helper, like CMake(), the --package-folder will be
722 configured as destination folder for the install step.
723 """
724
725 parser = argparse.ArgumentParser(description=self.build.__doc__,
726 prog="conan build",
727 formatter_class=SmartFormatter)
728 parser.add_argument("path", help=_PATH_HELP)
729 parser.add_argument("-b", "--build", default=None, action="store_true",
730 help="Execute the build step (variable should_build=True). When "
731 "specified, configure/install/test won't run unless "
732 "--configure/--install/--test specified")
733 parser.add_argument("-bf", "--build-folder", action=OnceArgument, help=_BUILD_FOLDER_HELP)
734 parser.add_argument("-c", "--configure", default=None, action="store_true",
735 help="Execute the configuration step (variable should_configure=True). "
736 "When specified, build/install/test won't run unless "
737 "--build/--install/--test specified")
738 parser.add_argument("-i", "--install", default=None, action="store_true",
739 help="Execute the install step (variable should_install=True). When "
740 "specified, configure/build/test won't run unless "
741 "--configure/--build/--test specified")
742 parser.add_argument("-t", "--test", default=None, action="store_true",
743 help="Execute the test step (variable should_test=True). When "
744 "specified, configure/build/install won't run unless "
745 "--configure/--build/--install specified")
746 parser.add_argument("-if", "--install-folder", action=OnceArgument,
747 help=_INSTALL_FOLDER_HELP)
748 parser.add_argument("-pf", "--package-folder", action=OnceArgument,
749 help="Directory to install the package (when the build system or "
750 "build() method does it). Defaulted to the '{build_folder}/package' "
751 "folder. A relative path can be specified, relative to the current "
752 "folder. Also an absolute path is allowed.")
753 parser.add_argument("-sf", "--source-folder", action=OnceArgument, help=_SOURCE_FOLDER_HELP)
754 args = parser.parse_args(*args)
755
756 self._warn_python2()
757
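            # If no individual step is selected, all steps (configure, build, install, test) are run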
758 if args.build or args.configure or args.install or args.test:
759 build, config, install, test = (bool(args.build), bool(args.configure),
760 bool(args.install), bool(args.test))
761 else:
762 build = config = install = test = True
763 return self._conan.build(conanfile_path=args.path,
764 source_folder=args.source_folder,
765 package_folder=args.package_folder,
766 build_folder=args.build_folder,
767 install_folder=args.install_folder,
768 should_configure=config,
769 should_build=build,
770 should_install=install,
771 should_test=test)
772
773 def package(self, *args):
774 """
775 Calls your local conanfile.py 'package()' method.
776
777 This command works in the user space and it will copy artifacts from
778         the --build-folder and --source-folder folders to the --package-folder
779 one. It won't create a new package in the local cache, if you want to
780 do it, use 'conan create' or 'conan export-pkg' after a 'conan build'
781 command.
782 """
783 parser = argparse.ArgumentParser(description=self.package.__doc__,
784 prog="conan package",
785 formatter_class=SmartFormatter)
786 parser.add_argument("path", help=_PATH_HELP)
787 parser.add_argument("-bf", "--build-folder", action=OnceArgument, help=_BUILD_FOLDER_HELP)
788 parser.add_argument("-if", "--install-folder", action=OnceArgument,
789 help=_INSTALL_FOLDER_HELP)
790 parser.add_argument("-pf", "--package-folder", action=OnceArgument,
791 help="folder to install the package. Defaulted to the "
792 "'{build_folder}/package' folder. A relative path can be specified"
793 " (relative to the current directory). Also an absolute path"
794 " is allowed.")
795 parser.add_argument("-sf", "--source-folder", action=OnceArgument, help=_SOURCE_FOLDER_HELP)
796 args = parser.parse_args(*args)
797 try:
798 if "@" in args.path and ConanFileReference.loads(args.path):
799 raise ArgumentError(None,
800 "'conan package' doesn't accept a reference anymore. "
801 "The path parameter should be a conanfile.py or a folder "
802 "containing one. If you were using the 'conan package' "
803 "command for development purposes we recommend to use "
804 "the local development commands: 'conan build' + "
805 "'conan package' and finally 'conan create' to regenerate the "
806                                     "package, or 'conan export-pkg' to store the already built "
807 "binaries in the local cache without rebuilding them.")
808 except ConanException:
809 pass
810
811 self._warn_python2()
812 return self._conan.package(path=args.path,
813 build_folder=args.build_folder,
814 package_folder=args.package_folder,
815 source_folder=args.source_folder,
816 install_folder=args.install_folder)
817
818 def imports(self, *args):
819 """
820 Calls your local conanfile.py or conanfile.txt 'imports' method.
821
822         It requires a previous 'conan install' run that generated a
823         conanbuildinfo.txt file in the --install-folder (defaulted to the
824         current directory).
825 """
826 parser = argparse.ArgumentParser(description=self.imports.__doc__,
827 prog="conan imports",
828 formatter_class=SmartFormatter)
829 parser.add_argument("path",
830 help=_PATH_HELP + " With --undo option, this parameter is the folder "
831 "containing the conan_imports_manifest.txt file generated in a previous"
832 " execution. e.g.: conan imports ./imported_files --undo ")
833 parser.add_argument("-if", "--install-folder", action=OnceArgument,
834 help=_INSTALL_FOLDER_HELP)
835 parser.add_argument("-imf", "--import-folder", action=OnceArgument,
836 help="Directory to copy the artifacts to. By default it will be the"
837 " current directory")
838 parser.add_argument("-u", "--undo", default=False, action="store_true",
839 help="Undo imports. Remove imported files")
840 args = parser.parse_args(*args)
841
842 if args.undo:
843 return self._conan.imports_undo(args.path)
844
845 try:
846 if "@" in args.path and ConanFileReference.loads(args.path):
847 raise ArgumentError(None, "Parameter 'path' cannot be a reference. Use a folder "
848 "containing a conanfile.py or conanfile.txt file.")
849 except ConanException:
850 pass
851 self._warn_python2()
852 return self._conan.imports(args.path, args.import_folder, args.install_folder)
853
854 def export_pkg(self, *args):
855 """
856 Exports a recipe, then creates a package from local source and build folders.
857
858 If '--package-folder' is provided it will copy the files from there, otherwise it
859 will execute package() method over '--source-folder' and '--build-folder' to create
860 the binary package.
861 """
862
863 parser = argparse.ArgumentParser(description=self.export_pkg.__doc__,
864 prog="conan export-pkg",
865 formatter_class=SmartFormatter)
866 parser.add_argument("path", help=_PATH_HELP)
867 parser.add_argument("reference", nargs='?', default=None,
868 help="user/channel or pkg/version@user/channel "
869 "(if name and version are not declared in the "
870 "conanfile.py)")
871
872 parser.add_argument("-bf", "--build-folder", action=OnceArgument, help=_BUILD_FOLDER_HELP)
873 parser.add_argument("-e", "--env", nargs=1, action=Extender,
874 help='Environment variables that will be set during the package build, '
875 '-e CXX=/usr/bin/clang++')
876 parser.add_argument('-f', '--force', default=False, action='store_true',
877                             help='Overwrite the package if it already exists')
878 parser.add_argument("-if", "--install-folder", action=OnceArgument,
879 help=_INSTALL_FOLDER_HELP + " If these files are found in the specified"
880 " folder and any of '-e', '-o', '-pr' or '-s' arguments are used, it "
881 "will raise an error.")
882 parser.add_argument("-o", "--options", nargs=1, action=Extender,
883 help='Define options values, e.g., -o pkg:with_qt=true')
884 parser.add_argument("-pr", "--profile", action=Extender,
885 help='Profile for this package')
886 parser.add_argument("-pf", "--package-folder", action=OnceArgument,
887 help="folder containing a locally created package. If a value is given,"
888                                  " the recipe 'package()' method will not be called; the contents of"
889                                  " the provided folder will be copied instead.")
890 parser.add_argument("-s", "--settings", nargs=1, action=Extender,
891 help='Define settings values, e.g., -s compiler=gcc')
892 parser.add_argument("-sf", "--source-folder", action=OnceArgument, help=_SOURCE_FOLDER_HELP)
893 parser.add_argument("-j", "--json", default=None, action=OnceArgument,
894 help='Path to a json file where the install information will be '
895 'written')
896 parser.add_argument("-l", "--lockfile", action=OnceArgument, nargs='?', const=".",
897 help="Path to a lockfile or folder containing 'conan.lock' file. "
898 "Lockfile will be updated with the exported package")
899
900 args = parser.parse_args(*args)
901
902 self._warn_python2()
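            # The reference may be only 'user/channel'; name and version can come from the conanfile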
903 name, version, user, channel, _ = get_reference_fields(args.reference,
904 user_channel_input=True)
905 cwd = os.getcwd()
906 info = None
907
908 try:
909 info = self._conan.export_pkg(conanfile_path=args.path,
910 name=name,
911 version=version,
912 source_folder=args.source_folder,
913 build_folder=args.build_folder,
914 package_folder=args.package_folder,
915 install_folder=args.install_folder,
916 profile_names=args.profile,
917 env=args.env,
918 settings=args.settings,
919 options=args.options,
920 force=args.force,
921 user=user,
922 channel=channel,
923 lockfile=args.lockfile)
924 except ConanException as exc:
925 info = exc.info
926 raise
927 finally:
928 if args.json and info:
929 self._outputer.json_output(info, args.json, cwd)
930
931 def export(self, *args):
932 """
933 Copies the recipe (conanfile.py & associated files) to your local cache.
934
935 Use the 'reference' param to specify a user and channel where to export
936 it. Once the recipe is in the local cache it can be shared, reused and
937         uploaded to any remote with the 'conan upload' command.
938 """
939 parser = argparse.ArgumentParser(description=self.export.__doc__,
940 prog="conan export",
941 formatter_class=SmartFormatter)
942 parser.add_argument("path", help=_PATH_HELP)
943 parser.add_argument("reference", nargs='?', default=None,
944 help="user/channel, or Pkg/version@user/channel (if name "
945                                  "and version are not declared in the conanfile.py)")
946 parser.add_argument('-k', '-ks', '--keep-source', default=False, action='store_true',
947 help=_KEEP_SOURCE_HELP)
948 parser.add_argument("-l", "--lockfile", action=OnceArgument, nargs='?', const=".",
949 help="Path to a lockfile or folder containing 'conan.lock' file. "
950 "Lockfile will be updated with the exported package")
951
952 args = parser.parse_args(*args)
953 self._warn_python2()
954 name, version, user, channel, _ = get_reference_fields(args.reference,
955 user_channel_input=True)
956
957 if any([user, channel]) and not all([user, channel]):
958             # Either both user and channel, or neither, but not just one of them
959 raise ConanException("Invalid parameter '%s', "
960 "specify the full reference or user/channel" % args.reference)
961
962 return self._conan.export(path=args.path,
963 name=name, version=version, user=user, channel=channel,
964 keep_source=args.keep_source, lockfile=args.lockfile)
965
966 def remove(self, *args):
967 """
968 Removes packages or binaries matching pattern from local cache or remote.
969
970 It can also be used to remove temporary source or build folders in the
971 local conan cache. If no remote is specified, the removal will be done
972 by default in the local conan cache.
973 """
974 parser = argparse.ArgumentParser(description=self.remove.__doc__,
975 prog="conan remove",
976 formatter_class=SmartFormatter)
977 parser.add_argument('pattern_or_reference', nargs="?", help=_PATTERN_OR_REFERENCE_HELP)
978 parser.add_argument('-b', '--builds', nargs="*", action=Extender,
979                             help=("Remove the build folders: all of them by default, or only "
980                                   "those of the specified package IDs"))
981 parser.add_argument('-f', '--force', default=False, action='store_true',
982 help='Remove without requesting a confirmation')
983 parser.add_argument("-l", "--locks", default=False, action="store_true",
984 help="Remove locks")
985 parser.add_argument("-o", "--outdated", default=False, action="store_true",
986                             help="Remove only the packages that are outdated from the recipe. "
987 "This flag can only be used with a reference")
988 parser.add_argument('-p', '--packages', nargs="*", action=Extender,
989 help="Select package to remove specifying the package ID")
990 parser.add_argument('-q', '--query', default=None, action=OnceArgument, help=_QUERY_HELP)
991 parser.add_argument('-r', '--remote', action=OnceArgument,
992 help='Will remove from the specified remote')
993 parser.add_argument('-s', '--src', default=False, action="store_true",
994 help='Remove source folders')
995 parser.add_argument('-t', '--system-reqs', default=False, action="store_true",
996 help='Remove system_reqs folders')
997 args = parser.parse_args(*args)
998
999 self._warn_python2()
1000
1001 if args.packages is not None and args.query:
1002 raise ConanException("'-q' and '-p' parameters can't be used at the same time")
1003
1004 if args.builds is not None and args.query:
1005 raise ConanException("'-q' and '-b' parameters can't be used at the same time")
1006
1007 if args.outdated and not args.pattern_or_reference:
1008 raise ConanException("'--outdated' argument can only be used with a reference")
1009
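             # Three removal modes: cache locks, system_reqs folders, or recipes/packages matching the pattern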
1010 if args.locks:
1011 if args.pattern_or_reference:
1012 raise ConanException("Specifying a pattern is not supported when removing locks")
1013 self._conan.remove_locks()
1014 self._out.info("Cache locks removed")
1015 return
1016 elif args.system_reqs:
1017 if args.packages:
1018 raise ConanException("'-t' and '-p' parameters can't be used at the same time")
1019 if not args.pattern_or_reference:
1020 raise ConanException("Please specify a valid pattern or reference to be cleaned")
1021
1022 if check_valid_ref(args.pattern_or_reference):
1023 return self._conan.remove_system_reqs(args.pattern_or_reference)
1024
1025 return self._conan.remove_system_reqs_by_pattern(args.pattern_or_reference)
1026 else:
1027 if not args.pattern_or_reference:
1028 raise ConanException('Please specify a pattern to be removed ("*" for all)')
1029
1030 return self._conan.remove(pattern=args.pattern_or_reference, query=args.query,
1031 packages=args.packages, builds=args.builds, src=args.src,
1032 force=args.force, remote_name=args.remote, outdated=args.outdated)
1033
1034 def copy(self, *args):
1035 """
1036 Copies conan recipes and packages to another user/channel.
1037
1038 Useful to promote packages (e.g. from "beta" to "stable") or transfer
1039 them from one user to another.
1040 """
1041 parser = argparse.ArgumentParser(description=self.copy.__doc__,
1042 prog="conan copy",
1043 formatter_class=SmartFormatter)
1044 parser.add_argument("reference", default="",
1045 help='package reference. e.g., MyPackage/1.2@user/channel')
1046 parser.add_argument("user_channel", default="",
1047 help='Destination user/channel. e.g., lasote/testing')
1048 parser.add_argument("-p", "--package", nargs=1, action=Extender,
1049 help='copy specified package ID '
1050 '[DEPRECATED: use full reference instead]')
1051 parser.add_argument("--all", action='store_true', default=False,
1052 help='Copy all packages from the specified package recipe')
1053 parser.add_argument("--force", action='store_true', default=False,
1054 help='Override destination packages and the package recipe')
1055 args = parser.parse_args(*args)
1056
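             # Accept a full package reference (recipe_ref:package_id) or a recipe reference plus the deprecated --package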
1057 try:
1058 pref = PackageReference.loads(args.reference, validate=True)
1059 except ConanException:
1060 reference = args.reference
1061 packages_list = args.package
1062
1063 if packages_list:
1064 self._out.warn("Usage of `--package` argument is deprecated."
1065 " Use a full reference instead: "
1066 "`conan copy [...] {}:{}`".format(reference, packages_list[0]))
1067
1068 if args.all and packages_list:
1069 raise ConanException("Cannot specify both --all and --package")
1070 else:
1071 reference = repr(pref.ref)
1072 packages_list = [pref.id]
1073 if args.package:
1074 raise ConanException("Use a full package reference (preferred) or the `--package`"
1075 " command argument, but not both.")
1076
1077 if args.all:
1078 raise ConanException("'--all' argument cannot be used together with full reference")
1079
1080 self._warn_python2()
1081
1082 return self._conan.copy(reference=reference, user_channel=args.user_channel,
1083 force=args.force, packages=packages_list or args.all)
1084
1085 def user(self, *args):
1086 """
1087 Authenticates against a remote with user/pass, caching the auth token.
1088
1089 Useful to avoid the user and password being requested later. e.g. while
1090 you're uploading a package. You can have one user for each remote.
1091         Changing the user, or entering the password, is only necessary to
1092 perform changes in remote packages.
1093 """
1094 # FIXME: Difficult and confusing CLI. Better with:
1095 # - conan user clean -> clean users
1096 # - conan user list ('remote') -> list users (of a remote)
1097 # - conan user auth 'remote' ('user') ('password') -> login a remote (w/o user or pass)
1098 # - conan user set 'user' 'remote' -> set user for a remote (not login) necessary??
1099 parser = argparse.ArgumentParser(description=self.user.__doc__,
1100 prog="conan user",
1101 formatter_class=SmartFormatter)
1102 parser.add_argument("name", nargs='?', default=None,
1103 help='Username you want to use. If no name is provided it will show the'
1104 ' current user')
1105 parser.add_argument('-c', '--clean', default=False, action='store_true',
1106 help='Remove user and tokens for all remotes')
1107 parser.add_argument("-p", "--password", nargs='?', const="", type=str, action=OnceArgument,
1108 help='User password. Use double quotes if password with spacing, '
1109 'and escape quotes if existing. If empty, the password is '
1110 'requested interactively (not exposed)')
1111 parser.add_argument("-r", "--remote", help='Use the specified remote server',
1112 action=OnceArgument)
1113 parser.add_argument("-j", "--json", default=None, action=OnceArgument,
1114 help='json file path where the user list will be written to')
1115 parser.add_argument("-s", "--skip-auth", default=False, action='store_true',
1116                             help='Skips the authentication with the server if there are locally '
1117 'stored credentials. It doesn\'t check if the '
1118 'current credentials are valid or not')
1119 args = parser.parse_args(*args)
1120
1121 if args.clean and any((args.name, args.remote, args.password, args.json, args.skip_auth)):
1122 raise ConanException("'--clean' argument cannot be used together with 'name', "
1123                                  "'--password', '--remote', '--json' or '--skip-auth'")
1124 elif args.json and any((args.name, args.password)):
1125 raise ConanException("'--json' cannot be used together with 'name' or '--password'")
1126
1127 cwd = os.getcwd()
1128 info = None
1129
1130 try:
1131 if args.clean: # clean users
1132 self._conan.users_clean()
1133 elif not args.name and args.password is None: # list users
1134 info = self._conan.users_list(args.remote)
1135 self._outputer.print_user_list(info)
1136 elif args.password is None: # set user for remote (no password indicated)
1137 remote_name, prev_user, user = self._conan.user_set(args.name, args.remote)
1138 self._outputer.print_user_set(remote_name, prev_user, user)
1139 else: # login a remote
1140 remote_name = args.remote or self._conan.get_default_remote().name
1141 name = args.name
1142 password = args.password
1143 remote_name, prev_user, user = self._conan.authenticate(name,
1144 remote_name=remote_name,
1145 password=password,
1146 skip_auth=args.skip_auth)
1147
1148 self._outputer.print_user_set(remote_name, prev_user, user)
1149 except ConanException as exc:
1150 info = exc.info
1151 raise
1152 finally:
1153 if args.json and info:
1154 self._outputer.json_output(info, args.json, cwd)
1155
1156 def search(self, *args):
1157 """
1158 Searches package recipes and binaries in the local cache or in a remote.
1159
1160 If you provide a pattern, then it will search for existing package
1161 recipes matching it. If a full reference is provided
1162 (pkg/0.1@user/channel) then the existing binary packages for that
1163 reference will be displayed. If no remote is specified, the search
1164 will be done in the local cache. Search is case sensitive, exact case
1165 has to be used. For case insensitive file systems, like Windows, case
1166 sensitive search can be forced with '--case-sensitive'.
1167 """
1168 parser = argparse.ArgumentParser(description=self.search.__doc__,
1169 prog="conan search",
1170 formatter_class=SmartFormatter)
1171 parser.add_argument('pattern_or_reference', nargs='?', help=_PATTERN_OR_REFERENCE_HELP)
1172 parser.add_argument('-o', '--outdated', default=False, action='store_true',
1173                             help="Show only the packages that are outdated from the recipe. "
1174 "This flag can only be used with a reference")
1175 parser.add_argument('-q', '--query', default=None, action=OnceArgument, help=_QUERY_HELP)
1176 parser.add_argument('-r', '--remote', action=OnceArgument,
1177 help="Remote to search in. '-r all' searches all remotes")
1178 parser.add_argument('--case-sensitive', default=False, action='store_true',
1179 help='Make a case-sensitive search. Use it to guarantee '
1180 'case-sensitive '
1181 'search in Windows or other case-insensitive file systems')
1182 parser.add_argument('--raw', default=False, action='store_true',
1183 help='Print just the list of recipes')
1184 parser.add_argument('--table', action=OnceArgument,
1185 help="Outputs html file with a table of binaries. Only valid for a "
1186 "reference search")
1187 parser.add_argument("-j", "--json", default=None, action=OnceArgument,
1188 help='json file path where the search information will be written to')
1189 parser.add_argument("-rev", "--revisions", default=False, action='store_true',
1190 help='Get a list of revisions for a reference or a '
1191 'package reference.')
1192
1193 args = parser.parse_args(*args)
1194
1195 if args.table and args.json:
1196 raise ConanException("'--table' argument cannot be used together with '--json'")
1197
1198 # Searching foo/bar is considered a pattern (FIXME: 2.0) so use strict mode to disambiguate
1199 is_reference = check_valid_ref(args.pattern_or_reference, strict_mode=True)
1200
1201 if is_reference:
1202 ref = ConanFileReference.loads(args.pattern_or_reference)
1203 else:
1204 ref = None
1205 if args.query:
1206 raise ConanException("-q parameter only allowed with a valid recipe reference, "
1207 "not with a pattern")
1208 cwd = os.getcwd()
1209 info = None
1210
1211 try:
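                 # --revisions: list the revisions of a package reference if one is given, otherwise of a recipe reference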
1212 if args.revisions:
1213 try:
1214 pref = PackageReference.loads(args.pattern_or_reference)
1215 except (TypeError, ConanException, AttributeError):
1216 pass
1217 else:
1218 info = self._conan.get_package_revisions(repr(pref), remote_name=args.remote)
1219
1220 if not info:
1221 if not ref:
1222                     msg = "With --revisions, specify a reference (e.g {ref}) or a package " \
1223 "reference with " \
1224 "recipe revision (e.g {ref}#3453453453:d50a0d523d98c15bb147b18f" \
1225 "a7d203887c38be8b)".format(ref=_REFERENCE_EXAMPLE)
1226 raise ConanException(msg)
1227 info = self._conan.get_recipe_revisions(repr(ref),
1228 remote_name=args.remote)
1229 self._outputer.print_revisions(ref, info, remote_name=args.remote)
1230 return
1231
1232 if ref:
1233 info = self._conan.search_packages(repr(ref), query=args.query,
1234 remote_name=args.remote,
1235 outdated=args.outdated)
1236 # search is done for one reference
1237 self._outputer.print_search_packages(info["results"], ref, args.query,
1238 args.table, outdated=args.outdated)
1239 else:
1240 if args.table:
1241 raise ConanException("'--table' argument can only be used with a reference")
1242 elif args.outdated:
1243 raise ConanException("'--outdated' argument can only be used with a reference")
1244
1245 info = self._conan.search_recipes(args.pattern_or_reference,
1246 remote_name=args.remote,
1247 case_sensitive=args.case_sensitive)
1248 # Deprecate 2.0: Dirty check if search is done for all remotes or for remote "all"
1249 try:
1250 remote_all = self._conan.get_remote_by_name("all")
1251 except NoRemoteAvailable:
1252 remote_all = None
1253 all_remotes_search = (remote_all is None and args.remote == "all")
1254 self._outputer.print_search_references(info["results"], args.pattern_or_reference,
1255 args.raw, all_remotes_search)
1256 except ConanException as exc:
1257 info = exc.info
1258 raise
1259 finally:
1260 if args.json and info:
1261 self._outputer.json_output(info, args.json, cwd)
1262
1263 def upload(self, *args):
1264 """
1265 Uploads a recipe and binary packages to a remote.
1266
1267 If no remote is specified, the first configured remote (by default conan-center, use
1268 'conan remote list' to list the remotes) will be used.
1269 """
1270 parser = argparse.ArgumentParser(description=self.upload.__doc__,
1271 prog="conan upload",
1272 formatter_class=SmartFormatter)
1273 parser.add_argument('pattern_or_reference', help=_PATTERN_REF_OR_PREF_HELP)
1274 parser.add_argument("-p", "--package", default=None,
1275 help="Package ID [DEPRECATED: use full reference instead]",
1276 action=OnceArgument)
1277 parser.add_argument('-q', '--query', default=None, action=OnceArgument,
1278 help="Only upload packages matching a specific query. " + _QUERY_HELP)
1279 parser.add_argument("-r", "--remote", action=OnceArgument,
1280 help='upload to this specific remote')
1281 parser.add_argument("--all", action='store_true', default=False,
1282 help='Upload both package recipe and packages')
1283 parser.add_argument("--skip-upload", action='store_true', default=False,
1284 help='Do not upload anything, just run the checks and the compression')
1285 parser.add_argument("--force", action='store_true', default=False,
1286 help='Do not check conan recipe date, override remote with local')
1287 parser.add_argument("--check", action='store_true', default=False,
1288 help='Perform an integrity check, using the manifests, before upload')
1289 parser.add_argument('-c', '--confirm', default=False, action='store_true',
1290 help='Upload all matching recipes without confirmation')
1291 parser.add_argument('--retry', default=None, type=int, action=OnceArgument,
1292                             help="In case of failure, retries the upload the specified number of times.")
1293 parser.add_argument('--retry-wait', default=None, type=int, action=OnceArgument,
1294                             help='Waits the specified number of seconds before retrying')
1295 parser.add_argument("-no", "--no-overwrite", nargs="?", type=str, choices=["all", "recipe"],
1296 action=OnceArgument, const="all",
1297 help="Uploads package only if recipe is the same as the remote one")
1298 parser.add_argument("-j", "--json", default=None, action=OnceArgument,
1299 help='json file path where the upload information will be written to')
1300
1301 args = parser.parse_args(*args)
1302
1303 try:
1304 pref = PackageReference.loads(args.pattern_or_reference, validate=True)
1305 except ConanException:
1306 reference = args.pattern_or_reference
1307 package_id = args.package
1308
1309 if package_id:
1310 self._out.warn("Usage of `--package` argument is deprecated."
1311 " Use a full reference instead: "
1312 "`conan upload [...] {}:{}`".format(reference, package_id))
1313
1314 if args.query and package_id:
1315 raise ConanException("'--query' argument cannot be used together with '--package'")
1316 else:
1317 reference = repr(pref.ref)
1318 package_id = pref.id
1319
1320 if args.package:
1321 raise ConanException("Use a full package reference (preferred) or the `--package`"
1322 " command argument, but not both.")
1323 if args.query:
1324 raise ConanException("'--query' argument cannot be used together with "
1325 "full reference")
1326
1327 if args.force and args.no_overwrite:
1328 raise ConanException("'--no-overwrite' argument cannot be used together with '--force'")
1329 if args.force and args.skip_upload:
1330 raise ConanException("'--skip-upload' argument cannot be used together with '--force'")
1331 if args.no_overwrite and args.skip_upload:
1332 raise ConanException("'--skip-upload' argument cannot be used together "
1333 "with '--no-overwrite'")
1334
1335 self._warn_python2()
1336
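             # Translate the mutually exclusive CLI flags into a single upload policy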
1337 if args.force:
1338 policy = UPLOAD_POLICY_FORCE
1339 elif args.no_overwrite == "all":
1340 policy = UPLOAD_POLICY_NO_OVERWRITE
1341 elif args.no_overwrite == "recipe":
1342 policy = UPLOAD_POLICY_NO_OVERWRITE_RECIPE
1343 elif args.skip_upload:
1344 policy = UPLOAD_POLICY_SKIP
1345 else:
1346 policy = None
1347
1348 info = None
1349 try:
1350 info = self._conan.upload(pattern=reference, package=package_id,
1351 query=args.query, remote_name=args.remote,
1352 all_packages=args.all, policy=policy,
1353 confirm=args.confirm, retry=args.retry,
1354 retry_wait=args.retry_wait, integrity_check=args.check)
1355
1356 except ConanException as exc:
1357 info = exc.info
1358 raise
1359 finally:
1360 if args.json and info:
1361 self._outputer.json_output(info, args.json, os.getcwd())
1362
1363 def remote(self, *args):
1364 """
1365 Manages the remote list and the package recipes associated to a remote.
1366 """
1367 parser = argparse.ArgumentParser(description=self.remote.__doc__,
1368 prog="conan remote",
1369 formatter_class=SmartFormatter)
1370 subparsers = parser.add_subparsers(dest='subcommand', help='sub-command help')
1371 subparsers.required = True
1372
1373         # create the parsers for the subcommands
1374 parser_list = subparsers.add_parser('list', help='List current remotes')
1375 parser_list.add_argument("-raw", "--raw", action='store_true', default=False,
1376 help='Raw format. Valid for "remotes.txt" file for '
1377 '"conan config install"')
1378 parser_add = subparsers.add_parser('add', help='Add a remote')
1379 parser_add.add_argument('remote', help='Name of the remote')
1380 parser_add.add_argument('url', help='URL of the remote')
1381 parser_add.add_argument('verify_ssl', nargs="?", default="True",
1382                                 help='Verify SSL certificate. Default True')
1383 parser_add.add_argument("-i", "--insert", nargs="?", const=0, type=int, action=OnceArgument,
1384 help="insert remote at specific index")
1385 parser_add.add_argument("-f", "--force", default=False, action='store_true',
1386 help="Force addition, will update if existing")
1387 parser_rm = subparsers.add_parser('remove', help='Remove a remote')
1388 parser_rm.add_argument('remote', help='Name of the remote')
1389 parser_upd = subparsers.add_parser('update', help='Update the remote url')
1390 parser_upd.add_argument('remote', help='Name of the remote')
1391
1392 parser_upd.add_argument('url', help='URL')
1393 parser_upd.add_argument('verify_ssl', nargs="?", default="True",
1394                                 help='Verify SSL certificate. Default True')
1395 parser_upd.add_argument("-i", "--insert", nargs="?", const=0, type=int, action=OnceArgument,
1396 help="Insert remote at specific index")
1397 parser_rename = subparsers.add_parser('rename', help='Update the remote name')
1398 parser_rename.add_argument('remote', help='The old remote name')
1399 parser_rename.add_argument('new_remote', help='The new remote name')
1400
1401 subparsers.add_parser('list_ref',
1402                               help='List the package recipes and their associated remotes')
1403 parser_padd = subparsers.add_parser('add_ref',
1404 help="Associate a recipe's reference to a remote")
1405 parser_padd.add_argument('reference', help='Package recipe reference')
1406 parser_padd.add_argument('remote', help='Name of the remote')
1407 parser_prm = subparsers.add_parser('remove_ref',
1408 help="Dissociate a recipe's reference and its remote")
1409 parser_prm.add_argument('reference', help='Package recipe reference')
1410 parser_pupd = subparsers.add_parser('update_ref', help="Update the remote associated with "
1411 "a package recipe")
1412 parser_pupd.add_argument('reference', help='Package recipe reference')
1413 parser_pupd.add_argument('remote', help='Name of the remote')
1414
1415 list_pref = subparsers.add_parser('list_pref', help='List the package binaries and '
1416                                                  'their associated remotes')
1417 list_pref.add_argument('reference', help='Package recipe reference')
1418
1419 add_pref = subparsers.add_parser('add_pref',
1420 help="Associate a package reference to a remote")
1421 add_pref.add_argument('package_reference', help='Binary package reference')
1422 add_pref.add_argument('remote', help='Name of the remote')
1423
1424 remove_pref = subparsers.add_parser('remove_pref', help="Dissociate a package's reference "
1425 "and its remote")
1426 remove_pref.add_argument('package_reference', help='Binary package reference')
1427
1428 update_pref = subparsers.add_parser('update_pref', help="Update the remote associated with "
1429 "a binary package")
1430         update_pref.add_argument('package_reference', help='Binary package reference')
1431 update_pref.add_argument('remote', help='Name of the remote')
1432
1433 subparsers.add_parser('clean', help="Clean the list of remotes and all "
1434 "recipe-remote associations")
1435
1436 args = parser.parse_args(*args)
1437
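             # Not every subparser defines these arguments, so read them defensively with hasattr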
1438 reference = args.reference if hasattr(args, 'reference') else None
1439 package_reference = args.package_reference if hasattr(args, 'package_reference') else None
1440
1441 verify_ssl = get_bool_from_text(args.verify_ssl) if hasattr(args, 'verify_ssl') else False
1442
1443 remote_name = args.remote if hasattr(args, 'remote') else None
1444 new_remote = args.new_remote if hasattr(args, 'new_remote') else None
1445 url = args.url if hasattr(args, 'url') else None
1446
1447 if args.subcommand == "list":
1448 remotes = self._conan.remote_list()
1449 self._outputer.remote_list(remotes, args.raw)
1450 elif args.subcommand == "add":
1451 return self._conan.remote_add(remote_name, url, verify_ssl, args.insert, args.force)
1452 elif args.subcommand == "remove":
1453 return self._conan.remote_remove(remote_name)
1454 elif args.subcommand == "rename":
1455 return self._conan.remote_rename(remote_name, new_remote)
1456 elif args.subcommand == "update":
1457 return self._conan.remote_update(remote_name, url, verify_ssl, args.insert)
1458 elif args.subcommand == "list_ref":
1459 refs = self._conan.remote_list_ref()
1460 self._outputer.remote_ref_list(refs)
1461 elif args.subcommand == "add_ref":
1462 return self._conan.remote_add_ref(reference, remote_name)
1463 elif args.subcommand == "remove_ref":
1464 return self._conan.remote_remove_ref(reference)
1465 elif args.subcommand == "update_ref":
1466 return self._conan.remote_update_ref(reference, remote_name)
1467 elif args.subcommand == "list_pref":
1468 refs = self._conan.remote_list_pref(reference)
1469 self._outputer.remote_pref_list(refs)
1470 elif args.subcommand == "add_pref":
1471 return self._conan.remote_add_pref(package_reference, remote_name)
1472 elif args.subcommand == "remove_pref":
1473 return self._conan.remote_remove_pref(package_reference)
1474 elif args.subcommand == "update_pref":
1475 return self._conan.remote_update_pref(package_reference, remote_name)
1476 elif args.subcommand == "clean":
1477 return self._conan.remote_clean()
1478
1479 def profile(self, *args):
1480 """
1481 Lists profiles in the '.conan/profiles' folder, or shows profile details.
1482
1483         The 'list' subcommand will always use the default user '.conan/profiles' folder. But the
1484 'show' subcommand is able to resolve absolute and relative paths, as well as to map names to
1485 '.conan/profiles' folder, in the same way as the '--profile' install argument.
1486 """
1487 parser = argparse.ArgumentParser(description=self.profile.__doc__,
1488 prog="conan profile",
1489 formatter_class=SmartFormatter)
1490 subparsers = parser.add_subparsers(dest='subcommand')
1491 subparsers.required = True
1492
1493 # create the parser for the "profile" command
1494 subparsers.add_parser('list', help='List current profiles')
1495 parser_show = subparsers.add_parser('show', help='Show the values defined for a profile')
1496 parser_show.add_argument('profile', help="name of the profile in the '.conan/profiles' "
1497 "folder or path to a profile file")
1498
1499 parser_new = subparsers.add_parser('new', help='Creates a new empty profile')
1500 parser_new.add_argument('profile', help="Name for the profile in the '.conan/profiles' "
1501 "folder or path and name for a profile file")
1502 parser_new.add_argument("--detect", action='store_true', default=False,
1503 help='Autodetect settings and fill [settings] section')
1504 parser_new.add_argument("--force", action='store_true', default=False,
1505                                 help='Overwrite the profile if it already exists')
1506
1507 parser_update = subparsers.add_parser('update', help='Update a profile with desired value')
1508 parser_update.add_argument('item',
1509 help="'item=value' to update. e.g., settings.compiler=gcc")
1510 parser_update.add_argument('profile', help="Name of the profile in the '.conan/profiles' "
1511 "folder or path to a profile file")
1512
1513 parser_get = subparsers.add_parser('get', help='Get a profile key')
1514 parser_get.add_argument('item', help='Key of the value to get, e.g.: settings.compiler')
1515 parser_get.add_argument('profile', help="Name of the profile in the '.conan/profiles' "
1516 "folder or path to a profile file")
1517
1518 parser_remove = subparsers.add_parser('remove', help='Remove a profile key')
1519 parser_remove.add_argument('item', help='key, e.g.: settings.compiler')
1520 parser_remove.add_argument('profile', help="Name of the profile in the '.conan/profiles' "
1521 "folder or path to a profile file")
1522
1523 args = parser.parse_args(*args)
1524
1525 profile = args.profile if hasattr(args, 'profile') else None
1526
1527 if args.subcommand == "list":
1528 profiles = self._conan.profile_list()
1529 self._outputer.profile_list(profiles)
1530 elif args.subcommand == "show":
1531 profile_text = self._conan.read_profile(profile)
1532 self._outputer.print_profile(profile, profile_text)
1533 elif args.subcommand == "new":
1534 self._conan.create_profile(profile, args.detect, args.force)
1535 elif args.subcommand == "update":
1536 try:
1537 key, value = args.item.split("=", 1)
1538 except ValueError:
1539 raise ConanException("Please specify key=value")
1540 self._conan.update_profile(profile, key, value)
1541 elif args.subcommand == "get":
1542 key = args.item
1543 self._out.writeln(self._conan.get_profile_key(profile, key))
1544 elif args.subcommand == "remove":
1545 self._conan.delete_profile_key(profile, args.item)
1546
1547 def get(self, *args):
1548 """
1549         Gets a file or lists a directory of a given reference or package.
1550 """
1551 parser = argparse.ArgumentParser(description=self.get.__doc__,
1552 prog="conan get",
1553 formatter_class=SmartFormatter)
1554 parser.add_argument('reference', help=_REF_OR_PREF_HELP)
1555 parser.add_argument('path',
1556 help='Path to the file or directory. If not specified will get the '
1557 'conanfile if only a reference is specified and a conaninfo.txt '
1558 'file contents if the package is also specified',
1559 default=None, nargs="?")
1560 parser.add_argument("-p", "--package", default=None,
1561 help="Package ID [DEPRECATED: use full reference instead]",
1562 action=OnceArgument)
1563 parser.add_argument("-r", "--remote", action=OnceArgument,
1564 help='Get from this specific remote')
1565 parser.add_argument("-raw", "--raw", action='store_true', default=False,
1566 help='Do not decorate the text')
1567 args = parser.parse_args(*args)
1568
1569 try:
1570 pref = PackageReference.loads(args.reference, validate=True)
1571 except ConanException:
1572 reference = args.reference
1573 package_id = args.package
1574
1575 if package_id:
1576 self._out.warn("Usage of `--package` argument is deprecated."
1577 " Use a full reference instead: "
1578 "`conan get [...] {}:{}`".format(reference, package_id))
1579 else:
1580 reference = repr(pref.ref)
1581 package_id = pref.id
1582 if args.package:
1583 raise ConanException("Use a full package reference (preferred) or the `--package`"
1584 " command argument, but not both.")
1585
1586 ret, path = self._conan.get_path(reference, package_id, args.path, args.remote)
1587 if isinstance(ret, list):
1588 self._outputer.print_dir_list(ret, path, args.raw)
1589 else:
1590 self._outputer.print_file_contents(ret, path, args.raw)
1591
1592 def alias(self, *args):
1593 """
1594 Creates and exports an 'alias package recipe'.
1595
1596 An "alias" package is a symbolic name (reference) for another package
1597 (target). When some package depends on an alias, the target one will be
1598 retrieved and used instead, so the alias reference, the symbolic name,
1599 does not appear in the final dependency graph.
1600 """
1601 parser = argparse.ArgumentParser(description=self.alias.__doc__,
1602 prog="conan alias",
1603 formatter_class=SmartFormatter)
1604 parser.add_argument('reference', help='Alias reference. e.g.: mylib/1.X@user/channel')
1605 parser.add_argument('target', help='Target reference. e.g.: mylib/1.12@user/channel')
1606 args = parser.parse_args(*args)
1607
1608 self._warn_python2()
1609
1610 self._conan.export_alias(args.reference, args.target)
1611
1612 def workspace(self, *args):
1613 """
1614 Manages a workspace (a set of packages consumed from the user workspace that
1615         belong to the same project).
1616
1617         Use this command to manage a Conan workspace; use the subcommand 'install' to
1618 create the workspace from a file.
1619 """
1620 parser = argparse.ArgumentParser(description=self.workspace.__doc__,
1621 prog="conan workspace",
1622 formatter_class=SmartFormatter)
1623 subparsers = parser.add_subparsers(dest='subcommand', help='sub-command help')
1624 subparsers.required = True
1625
1626 install_parser = subparsers.add_parser('install',
1627 help='same as a "conan install" command'
1628 ' but using the workspace data from the file. '
1629 'If no file is provided, it will look for a '
1630 'file named "conanws.yml"')
1631 install_parser.add_argument('path', help='path to workspace definition file (it will look'
1632 ' for a "conanws.yml" inside if a directory is'
1633 ' given)')
1634 _add_common_install_arguments(install_parser, build_help=_help_build_policies)
1635 install_parser.add_argument("-if", "--install-folder", action=OnceArgument,
1636 help="Folder where the workspace files will be created"
1637 " (default to current working directory)")
1638
1639 args = parser.parse_args(*args)
1640
1641 if args.subcommand == "install":
1642 self._conan.workspace_install(args.path, args.settings, args.options, args.env,
1643 args.remote, args.build,
1644 args.profile, args.update,
1645 install_folder=args.install_folder)
1646
1647 def editable(self, *args):
1648 """
1649         Manages editable packages (packages that reside in the user workspace, but
1650 are consumed as if they were in the cache).
1651
1652         Use the subcommands 'add', 'remove' and 'list' to add, remove and list
1653 packages currently installed in this mode.
1654 """
1655 parser = argparse.ArgumentParser(description=self.editable.__doc__,
1656 prog="conan editable",
1657 formatter_class=SmartFormatter)
1658 subparsers = parser.add_subparsers(dest='subcommand', help='sub-command help')
1659 subparsers.required = True
1660
1661 add_parser = subparsers.add_parser('add', help='Put a package in editable mode')
1662 add_parser.add_argument('path', help='Path to the package folder in the user workspace')
1663 add_parser.add_argument('reference', help='Package reference e.g.: mylib/1.X@user/channel')
1664 add_parser.add_argument("-l", "--layout",
1665 help='Relative or absolute path to a file containing the layout.'
1666 ' Relative paths will be resolved first relative to current dir, '
1667 'then to local cache "layouts" folder')
1668
1669 remove_parser = subparsers.add_parser('remove', help='Disable editable mode for a package')
1670 remove_parser.add_argument('reference',
1671 help='Package reference e.g.: mylib/1.X@user/channel')
1672
1673 subparsers.add_parser('list', help='List packages in editable mode')
1674
1675 args = parser.parse_args(*args)
1676 self._warn_python2()
1677
1678 if args.subcommand == "add":
1679 self._conan.editable_add(args.path, args.reference, args.layout, cwd=os.getcwd())
1680 self._out.success("Reference '{}' in editable mode".format(args.reference))
1681 elif args.subcommand == "remove":
1682 ret = self._conan.editable_remove(args.reference)
1683 if ret:
1684 self._out.success("Removed editable mode for reference '{}'".format(args.reference))
1685 else:
1686 self._out.warn("Reference '{}' was not installed "
1687 "as editable".format(args.reference))
1688 elif args.subcommand == "list":
1689 for k, v in self._conan.editable_list().items():
1690 self._out.info("%s" % k)
1691 self._out.writeln(" Path: %s" % v["path"])
1692 self._out.writeln(" Layout: %s" % v["layout"])
1693
1694 def graph(self, *args):
1695 """
1696 Generates and manipulates lock files.
1697 """
1698 parser = argparse.ArgumentParser(description=self.graph.__doc__,
1699 prog="conan graph",
1700 formatter_class=SmartFormatter)
1701 subparsers = parser.add_subparsers(dest='subcommand', help='sub-command help')
1702 subparsers.required = True
1703
1704         # create the parsers for the subcommands
1705 merge_cmd = subparsers.add_parser('update-lock', help='merge two lockfiles')
1706 merge_cmd.add_argument('old_lockfile', help='path to previous lockfile')
1707 merge_cmd.add_argument('new_lockfile', help='path to modified lockfile')
1708
1709 build_order_cmd = subparsers.add_parser('build-order', help='Returns build-order')
1710 build_order_cmd.add_argument('lockfile', help='lockfile folder')
1711 build_order_cmd.add_argument("-b", "--build", action=Extender, nargs="?",
1712 help="nodes to build")
1713 build_order_cmd.add_argument("--json", action=OnceArgument,
1714 help="generate output file in json format")
1715
1716 lock_cmd = subparsers.add_parser('lock', help='create a lockfile')
1717 lock_cmd.add_argument("path_or_reference", help="Path to a folder containing a recipe"
1718 " (conanfile.py or conanfile.txt) or to a recipe file. e.g., "
1719 "./my_project/conanfile.txt. It could also be a reference")
1720 lock_cmd.add_argument("-l", "--lockfile", action=OnceArgument,
1721 help="Path to lockfile to be created. If not specified 'conan.lock'"
1722 " will be created in current folder")
1723 _add_common_install_arguments(lock_cmd, build_help="Packages to build from source",
1724 lockfile=False)
1725
1726 args = parser.parse_args(*args)
1727 self._warn_python2()
1728
1729 if args.subcommand == "update-lock":
1730 self._conan.update_lock(args.old_lockfile, args.new_lockfile)
1731 elif args.subcommand == "build-order":
1732 build_order = self._conan.build_order(args.lockfile, args.build)
1733 self._out.writeln(build_order)
1734 if args.json:
1735 json_file = _make_abs_path(args.json)
1736 save(json_file, json.dumps(build_order, indent=True))
1737 elif args.subcommand == "lock":
1738 self._conan.create_lock(args.path_or_reference,
1739 remote_name=args.remote,
1740 settings=args.settings,
1741 options=args.options,
1742 env=args.env,
1743 profile_names=args.profile,
1744 update=args.update,
1745 lockfile=args.lockfile,
1746 build=args.build)
1747
1748 def _show_help(self):
1749 """
1750 Prints a summary of all commands.
1751 """
1752 grps = [("Consumer commands", ("install", "config", "get", "info", "search")),
1753 ("Creator commands", ("new", "create", "upload", "export", "export-pkg", "test")),
1754 ("Package development commands", ("source", "build", "package", "editable",
1755 "workspace")),
1756 ("Misc commands", ("profile", "remote", "user", "imports", "copy", "remove",
1757 "alias", "download", "inspect", "help", "graph"))]
1758
1759 def check_all_commands_listed():
1760 """Keep updated the main directory, raise if don't"""
1761 all_commands = self._commands()
1762 all_in_grps = [command for _, command_list in grps for command in command_list]
1763 if set(all_in_grps) != set(all_commands):
1764 diff = set(all_commands) - set(all_in_grps)
1765 raise Exception("Some command is missing in the main help: %s" % ",".join(diff))
1766 return all_commands
1767
1768 commands = check_all_commands_listed()
1769 max_len = max((len(c) for c in commands)) + 1
1770 fmt = ' %-{}s'.format(max_len)
1771
1772 for group_name, comm_names in grps:
1773 self._out.writeln(group_name, Color.BRIGHT_MAGENTA)
1774 for name in comm_names:
1775 # future-proof way to ensure tabular formatting
1776 self._out.write(fmt % name, Color.GREEN)
1777
1778 # Help will be all the lines up to the first empty one
1779 docstring_lines = commands[name].__doc__.split('\n')
1780 start = False
1781 data = []
1782 for line in docstring_lines:
1783 line = line.strip()
1784 if not line:
1785 if start:
1786 break
1787 start = True
1788 continue
1789 data.append(line)
1790
1791 import textwrap
1792 txt = textwrap.fill(' '.join(data), 80, subsequent_indent=" "*(max_len+2))
1793 self._out.writeln(txt)
1794
1795 self._out.writeln("")
1796 self._out.writeln('Conan commands. Type "conan <command> -h" for help', Color.BRIGHT_YELLOW)
1797
1798 def _commands(self):
1799 """ returns a list of available commands
1800 """
1801 result = {}
1802 for m in inspect.getmembers(self, predicate=inspect.ismethod):
1803 method_name = m[0]
1804 if not method_name.startswith('_'):
1805 if "export_pkg" == method_name:
1806 method_name = "export-pkg"
1807 method = m[1]
1808 if method.__doc__ and not method.__doc__.startswith('HIDDEN'):
1809 result[method_name] = method
1810 return result
1811
1812 def _warn_python2(self):
1813 if six.PY2:
1814 self._out.writeln("")
1815 self._out.writeln("Python 2 will soon be deprecated. It is strongly "
1816 "recommended to use Python 3 with Conan:", front=Color.BRIGHT_YELLOW)
1817 self._out.writeln("https://docs.conan.io/en/latest/installation.html"
1818 "#python-2-deprecation-notice", front=Color.BRIGHT_YELLOW)
1819 self._out.writeln("")
1820
1821 def run(self, *args):
1822 """HIDDEN: entry point for executing commands, dispatcher to class
1823 methods
1824 """
1825 ret_code = SUCCESS
1826 try:
1827 try:
1828 command = args[0][0]
1829 commands = self._commands()
1830 method = commands[command]
1831 except KeyError as exc:
1832 if command in ["-v", "--version"]:
1833 self._out.success("Conan version %s" % client_version)
1834 return False
1835 self._warn_python2()
1836 self._show_help()
1837 if command in ["-h", "--help"]:
1838 return False
1839 raise ConanException("Unknown command %s" % str(exc))
1840 except IndexError: # No parameters
1841 self._show_help()
1842 return False
1843 method(args[0][1:])
1844 except KeyboardInterrupt as exc:
1845 logger.error(exc)
1846 ret_code = SUCCESS
1847 except SystemExit as exc:
1848 if exc.code != 0:
1849 logger.error(exc)
1850 self._out.error("Exiting with code: %d" % exc.code)
1851 ret_code = exc.code
1852 except ConanInvalidConfiguration as exc:
1853 ret_code = ERROR_INVALID_CONFIGURATION
1854 self._out.error(exc)
1855 except ConanException as exc:
1856 ret_code = ERROR_GENERAL
1857 self._out.error(exc)
1858 except Exception as exc:
1859 import traceback
1860 print(traceback.format_exc())
1861 ret_code = ERROR_GENERAL
1862 msg = exception_message_safe(exc)
1863 self._out.error(msg)
1864
1865 return ret_code
1866
1867
1868 def _add_manifests_arguments(parser):
1869 parser.add_argument("-m", "--manifests", const=default_manifest_folder, nargs="?",
1870 help='Install dependencies manifests in folder for later verify.'
1871 ' Default folder is .conan_manifests, but can be changed',
1872 action=OnceArgument)
1873 parser.add_argument("-mi", "--manifests-interactive", const=default_manifest_folder,
1874 nargs="?",
1875 help='Install dependencies manifests in folder for later verify, '
1876 'asking user for confirmation. '
1877 'Default folder is .conan_manifests, but can be changed',
1878 action=OnceArgument)
1879 parser.add_argument("-v", "--verify", const=default_manifest_folder, nargs="?",
1880 help='Verify dependencies manifests against stored ones',
1881 action=OnceArgument)
1882
1883
1884 def _add_common_install_arguments(parser, build_help, lockfile=True):
1885 if build_help:
1886 parser.add_argument("-b", "--build", action=Extender, nargs="?", help=build_help)
1887
1888 parser.add_argument("-e", "--env", nargs=1, action=Extender,
1889 help='Environment variables that will be set during the package build, '
1890 '-e CXX=/usr/bin/clang++')
1891 parser.add_argument("-o", "--options", nargs=1, action=Extender,
1892 help='Define options values, e.g., -o Pkg:with_qt=true')
1893 parser.add_argument("-pr", "--profile", default=None, action=Extender,
1894 help='Apply the specified profile to the install command')
1895 parser.add_argument("-r", "--remote", action=OnceArgument,
1896 help='Look in the specified remote server')
1897 parser.add_argument("-s", "--settings", nargs=1, action=Extender,
1898 help='Settings to build the package, overwriting the defaults. e.g., '
1899 '-s compiler=gcc')
1900 parser.add_argument("-u", "--update", action='store_true', default=False,
1901 help="Check updates exist from upstream remotes")
1902 if lockfile:
1903 parser.add_argument("-l", "--lockfile", action=OnceArgument, nargs='?', const=".",
1904 help="Path to a lockfile or folder containing 'conan.lock' file. "
1905 "Lockfile can be updated if packages change")
1906
1907
1908 _help_build_policies = '''Optional, use it to choose if you want to build from sources:
1909
1910 --build Build all from sources, do not use binary packages.
1911 --build=never Never build, use binary packages or fail if a binary package is not found.
1912 --build=missing Build from code if a binary package is not found.
1913 --build=cascade Will build from code all the nodes with some dependency being built
1914 (for any reason). Can be used together with any other build policy.
1915 Useful to make sure that any new change introduced in a dependency is
1916 incorporated by building again the package.
1917 --build=outdated Build from code if the binary is not built with the current recipe or
1918 when missing binary package.
1919 --build=[pattern] Build always these packages from source, but never build the others.
1920 Allows multiple --build parameters. 'pattern' is a fnmatch file pattern
1921 of a package reference.
1922
1923 Default behavior: If you don't specify anything, it will be similar to '--build=never', but
1924 package recipes can override it with their 'build_policy' attribute in the conanfile.py.
1925 '''
1926
1927
1928 def main(args):
1929 """ main entry point of the conan application, using a Command to
1930 parse parameters
1931
1932 Exit codes for conan command:
1933
1934 0: Success (done)
1935 1: General ConanException error (done)
1936 2: Migration error
1937 3: Ctrl+C
1938 4: Ctrl+Break
1939 5: SIGTERM
1940 6: Invalid configuration (done)
1941 """
1942 try:
1943 conan_api, _, _ = Conan.factory()
1944 except ConanMigrationError: # Error migrating
1945 sys.exit(ERROR_MIGRATION)
1946 except ConanException as e:
1947 sys.stderr.write("Error in Conan initialization: {}".format(e))
1948 sys.exit(ERROR_GENERAL)
1949
1950 command = Command(conan_api)
1951 current_dir = get_cwd()
1952 try:
1953 import signal
1954
1955 def ctrl_c_handler(_, __):
1956 print('You pressed Ctrl+C!')
1957 sys.exit(USER_CTRL_C)
1958
1959 def sigterm_handler(_, __):
1960 print('Received SIGTERM!')
1961 sys.exit(ERROR_SIGTERM)
1962
1963 def ctrl_break_handler(_, __):
1964 print('You pressed Ctrl+Break!')
1965 sys.exit(USER_CTRL_BREAK)
1966
1967 signal.signal(signal.SIGINT, ctrl_c_handler)
1968 signal.signal(signal.SIGTERM, sigterm_handler)
1969
1970 if sys.platform == 'win32':
1971 signal.signal(signal.SIGBREAK, ctrl_break_handler)
1972 error = command.run(args)
1973 finally:
1974 os.chdir(current_dir)
1975 sys.exit(error)
1976
[end of conans/client/command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| conan-io/conan | 56a5b42691907598535ff9e61ac8eac0fb251305 | build_requirements is ignored
I have a package A which build_requires a package B, and a package C that requires A and build_requires B. When I execute "conan install" for C, conan skips B. If I remove the requires on A, conan does not skip B. What I want is for conan to install both A and B. Any help you can provide would be great.
Thanks
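
For reference, a minimal sketch of the layout being described here (package names, versions and the `user/testing` channel are illustrative placeholders, and each class would live in its own conanfile.py):

```python
from conans import ConanFile


# B/conanfile.py -- the tool package that ends up being skipped
class BConan(ConanFile):
    name = "B"
    version = "0.1"


# A/conanfile.py -- build-requires the tool
class AConan(ConanFile):
    name = "A"
    version = "0.1"
    build_requires = "B/0.1@user/testing"


# C/conanfile.py -- the consumer being installed
class CConan(ConanFile):
    name = "C"
    version = "0.1"
    requires = "A/0.1@user/testing"
    build_requires = "B/0.1@user/testing"
```

With B and A created in the local cache, running `conan install` for C is the step where, according to this report, B gets skipped.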
To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
| Hi @xyz1001
I am trying to reproduce your case, but so far without success. Please check the following test, which passes:
```python
class BuildRequiresTest(unittest.TestCase):
def test_consumer(self):
# https://github.com/conan-io/conan/issues/5425
t = TestClient()
t.save({"conanfile.py": str(TestConanFile("ToolB", "0.1"))})
t.run("create . ToolB/0.1@user/testing")
t.save({"conanfile.py": str(TestConanFile("LibA", "0.1",
build_requires=["ToolB/0.1@user/testing"]))})
t.run("create . LibA/0.1@user/testing")
t.save({"conanfile.py": str(TestConanFile("LibC", "0.1",
requires=["LibA/0.1@user/testing"],
build_requires=["ToolB/0.1@user/testing"]))})
t.run("install .")
self.assertIn("ToolB/0.1@user/testing from local cache", t.out)
```
As you can see, the build requirement on ToolB is not being skipped. Could you please double-check it? Maybe a more complete and reproducible case would help. Thanks!
I am sorry, LibA actually private_requires ToolB. I modified the test case:
```python
class BuildRequiresTest(unittest.TestCase):
def test_consumer(self):
# https://github.com/conan-io/conan/issues/5425
t = TestClient()
t.save({"conanfile.py": str(TestConanFile("ToolB", "0.1"))})
t.run("create . ToolB/0.1@user/testing")
t.save({"conanfile.py": str(TestConanFile("LibA", "0.1",
private_requires=[("ToolB/0.1@user/testing")]))})
t.run("create . LibA/0.1@user/testing")
t.save({"conanfile.py": str(TestConanFile("LibC", "0.1",
requires=[
"LibA/0.1@user/testing"],
build_requires=["ToolB/0.1@user/testing"]))})
t.run("install .")
self.assertIn("ToolB/0.1@user/testing from local cache", t.out)
```
I tried the test case and it passed. However, in my project `XXX`, it does print `ToolB/0.1@user/testing from local cache`, but the conanbuildinfo.txt does not contain any info about `ToolB`. Here is the `conan install` output:
```
conanfile.py (XXX/None@None/None): Installing package
Requirements
catch2/2.4.2@bincrafters/stable from 'conan-local' - Cache
fmt/5.2.1@bincrafters/stable from 'conan-local' - Cache
xxx_logger/1.2.13@screenshare/stable from 'conan-local' - Cache
spdlog/1.2.1@bincrafters/stable from 'conan-local' - Cache
Packages
catch2/2.4.2@bincrafters/stable:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Skip
fmt/5.2.1@bincrafters/stable:038f8796e196b3dba76fcc5fd4ef5d3d9c6866ec - Cache
xxx_logger/1.2.13@screenshare/stable:aa971e8736e335273eb99282f27319bdaa20df9d - Cache
spdlog/1.2.1@bincrafters/stable:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache
Build requirements
catch2/2.4.2@bincrafters/stable from 'conan-local' - Cache
Build requirements packages
catch2/2.4.2@bincrafters/stable:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Skip
fmt/5.2.1@bincrafters/stable: Already installed!
spdlog/1.2.1@bincrafters/stable: Already installed!
xxx_logger/1.2.13@screenshare/stable: Already installed!
```
catch2 -> ToolB
xxx_logger -> LibA
XXX -> LibC
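
Spelled out as recipes, the structure behind this output is roughly the following (a sketch based on the thread; the real conanfiles also pull in spdlog/fmt and other details omitted here):

```python
from conans import ConanFile


# xxx_logger/conanfile.py  (the "LibA" role)
class XxxLoggerConan(ConanFile):
    name = "xxx_logger"
    version = "1.2.13"

    def requirements(self):
        # catch2 is declared as a *private* requirement -- the combination of
        # this private edge with the consumer's build_requires on the same
        # package appears to be what triggers the "Skip" above (confirmed
        # further down in the thread)
        self.requires("catch2/2.4.2@bincrafters/stable", private=True)


# XXX/conanfile.py  (the "LibC" consumer)
class XXXConan(ConanFile):
    requires = "xxx_logger/1.2.13@screenshare/stable"
    build_requires = "catch2/2.4.2@bincrafters/stable"
```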
Here is the conanbuildinfo.txt:
```
[includedirs]
/home/xyz1001/.conan/data/xxx_logger/1.2.13/screenshare/stable/package/aa971e8736e335273eb99282f27319bdaa20df9d/include
/home/xyz1001/.conan/data/spdlog/1.2.1/bincrafters/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/include
/home/xyz1001/.conan/data/fmt/5.2.1/bincrafters/stable/package/038f8796e196b3dba76fcc5fd4ef5d3d9c6866ec/include
[libdirs]
/home/xyz1001/.conan/data/xxx_logger/1.2.13/screenshare/stable/package/aa971e8736e335273eb99282f27319bdaa20df9d/lib
/home/xyz1001/.conan/data/spdlog/1.2.1/bincrafters/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/lib
/home/xyz1001/.conan/data/fmt/5.2.1/bincrafters/stable/package/038f8796e196b3dba76fcc5fd4ef5d3d9c6866ec/lib
[bindirs]
[resdirs]
[builddirs]
/home/xyz1001/.conan/data/xxx_logger/1.2.13/screenshare/stable/package/aa971e8736e335273eb99282f27319bdaa20df9d/
/home/xyz1001/.conan/data/spdlog/1.2.1/bincrafters/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/
/home/xyz1001/.conan/data/fmt/5.2.1/bincrafters/stable/package/038f8796e196b3dba76fcc5fd4ef5d3d9c6866ec/
[libs]
xxx_logger
pthread
fmtd
[defines]
SPDLOG_FMT_EXTERNAL
[cppflags]
[cxxflags]
[cflags]
[sharedlinkflags]
[exelinkflags]
[sysroot]
[includedirs_xxx_logger]
/home/xyz1001/.conan/data/xxx_logger/1.2.13/screenshare/stable/package/aa971e8736e335273eb99282f27319bdaa20df9d/include
[libdirs_xxx_logger]
/home/xyz1001/.conan/data/xxx_logger/1.2.13/screenshare/stable/package/aa971e8736e335273eb99282f27319bdaa20df9d/lib
[bindirs_xxx_logger]
[resdirs_xxx_logger]
[builddirs_xxx_logger]
/home/xyz1001/.conan/data/xxx_logger/1.2.13/screenshare/stable/package/aa971e8736e335273eb99282f27319bdaa20df9d/
[libs_xxx_logger]
xxx_logger
pthread
[defines_xxx_logger]
[cppflags_xxx_logger]
[cxxflags_xxx_logger]
[cflags_xxx_logger]
[sharedlinkflags_xxx_logger]
[exelinkflags_xxx_logger]
[sysroot_xxx_logger]
[rootpath_xxx_logger]
/home/xyz1001/.conan/data/xxx_logger/1.2.13/screenshare/stable/package/aa971e8736e335273eb99282f27319bdaa20df9d
[includedirs_spdlog]
/home/xyz1001/.conan/data/spdlog/1.2.1/bincrafters/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/include
[libdirs_spdlog]
/home/xyz1001/.conan/data/spdlog/1.2.1/bincrafters/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/lib
[bindirs_spdlog]
[resdirs_spdlog]
[builddirs_spdlog]
/home/xyz1001/.conan/data/spdlog/1.2.1/bincrafters/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/
[libs_spdlog]
pthread
[defines_spdlog]
SPDLOG_FMT_EXTERNAL
[cppflags_spdlog]
[cxxflags_spdlog]
[cflags_spdlog]
[sharedlinkflags_spdlog]
[exelinkflags_spdlog]
[sysroot_spdlog]
[rootpath_spdlog]
/home/xyz1001/.conan/data/spdlog/1.2.1/bincrafters/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9
[includedirs_fmt]
/home/xyz1001/.conan/data/fmt/5.2.1/bincrafters/stable/package/038f8796e196b3dba76fcc5fd4ef5d3d9c6866ec/include
[libdirs_fmt]
/home/xyz1001/.conan/data/fmt/5.2.1/bincrafters/stable/package/038f8796e196b3dba76fcc5fd4ef5d3d9c6866ec/lib
[bindirs_fmt]
[resdirs_fmt]
[builddirs_fmt]
/home/xyz1001/.conan/data/fmt/5.2.1/bincrafters/stable/package/038f8796e196b3dba76fcc5fd4ef5d3d9c6866ec/
[libs_fmt]
fmtd
[defines_fmt]
[cppflags_fmt]
[cxxflags_fmt]
[cflags_fmt]
[sharedlinkflags_fmt]
[exelinkflags_fmt]
[sysroot_fmt]
[rootpath_fmt]
/home/xyz1001/.conan/data/fmt/5.2.1/bincrafters/stable/package/038f8796e196b3dba76fcc5fd4ef5d3d9c6866ec
[USER_xxx_logger]
[USER_spdlog]
[USER_fmt]
[ENV_xxx_logger]
[ENV_spdlog]
[ENV_fmt]
```
Confirmed, this is an unfortunate bug coming from the mixture of build requirements and private requirements. It does not look trivial; it will take some time to fix.
In the meantime, I would strongly suggest reconsidering the usage of ``private`` requirements. We are discouraging their use (as you can see, they are barely documented); they should only be used for some extreme cases, like needing to wrap two different versions of the same library. What would be the use case for a ``private`` requirement on the ``catch`` library?
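
One way to act on that suggestion, assuming catch2 is only needed to build and run xxx_logger's own tests (an assumption, not something stated explicitly above), is to turn the private requirement into a build requirement:

```python
from conans import ConanFile


# xxx_logger/conanfile.py -- sketch of the suggested direction: catch2 becomes
# a build requirement, so it never enters consumers' dependency graphs at all
class XxxLoggerConan(ConanFile):
    name = "xxx_logger"
    version = "1.2.13"
    build_requires = "catch2/2.4.2@bincrafters/stable"
```

This sidesteps the private/build-require interaction entirely, independent of the fix in the patch below.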
| 2019-07-29T07:06:58Z | <patch>
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -39,7 +39,6 @@ def _evaluate_node(self, node, build_mode, update, evaluated_nodes, remotes):
return
ref, conanfile = node.ref, node.conanfile
- pref = node.pref
# If it has lock
locked = node.graph_lock_node
if locked and locked.pref.id == node.package_id:
@@ -53,7 +52,13 @@ def _evaluate_node(self, node, build_mode, update, evaluated_nodes, remotes):
if previous_nodes:
previous_nodes.append(node)
previous_node = previous_nodes[0]
- node.binary = previous_node.binary
+ # The previous node might have been skipped, but current one not necessarily
+ # keep the original node.binary value (before being skipped), and if it will be
+ # defined as SKIP again by self._handle_private(node) if it is really private
+ if previous_node.binary == BINARY_SKIP:
+ node.binary = previous_node.binary_non_skip
+ else:
+ node.binary = previous_node.binary
node.binary_remote = previous_node.binary_remote
node.prev = previous_node.prev
return
@@ -229,6 +234,8 @@ def _handle_private(self, node):
# Current closure contains own node to be skipped
for n in neigh.public_closure.values():
if n.private:
+ # store the binary origin before being overwritten by SKIP
+ n.binary_non_skip = n.binary
n.binary = BINARY_SKIP
self._handle_private(n)
</patch> | [] | [] | |||
PrefectHQ__prefect-2646 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement Depth-First Execution with Mapping
Currently each "level" of a mapped pipeline is executed before proceeding to the next level. This is undesirable, especially for pipelines where it's important that each "branch" of the pipeline finish as quickly as possible.
To implement DFE, we'll need to rearrange two things:
- how mapped work gets submitted (it should start being submitted from the Flow Runner, not the Task Runner)
- in order to submit work to Dask and let Dask handle the DFE scheduling, we'll want to refactor how we walk the DAG and wait to determine the width of a pipeline before we submit it (because mapping is fully dynamic we can only ascertain this width at runtime)
We'll need to be vigilant about:
- performance
- retries
- result handling
</issue>
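
For context, here is a tiny flow that exhibits the behaviour described above (task names are made up for illustration). With the current breadth-first behaviour, every mapped `plus_one` child finishes before any `times_two` child starts; with depth-first execution, each branch `plus_one(i) -> times_two(...)` could run to completion independently:

```python
from prefect import Flow, task


@task
def plus_one(x):
    return x + 1


@task
def times_two(x):
    return x * 2


with Flow("mapped-pipeline") as flow:
    ys = plus_one.map(x=[1, 2, 3])
    zs = times_two.map(x=ys)

flow.run()
```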
<code>
[start of README.md]
1 <p align="center" style="margin-bottom:40px;">
2 <img src="https://uploads-ssl.webflow.com/5ba446b0e783e26d5a2f2382/5c942c9ca934ec5c88588297_primary-color-vertical.svg" height=350 style="max-height: 350px;">
3 </p>
4
5 <p align="center">
6 <a href=https://circleci.com/gh/PrefectHQ/prefect/tree/master>
7 <img src="https://circleci.com/gh/PrefectHQ/prefect/tree/master.svg?style=shield&circle-token=28689a55edc3c373486aaa5f11a1af3e5fc53344">
8 </a>
9
10 <a href="https://codecov.io/gh/PrefectHQ/prefect">
11 <img src="https://codecov.io/gh/PrefectHQ/prefect/branch/master/graph/badge.svg" />
12 </a>
13
14 <a href=https://github.com/ambv/black>
15 <img src="https://img.shields.io/badge/code%20style-black-000000.svg">
16 </a>
17
18 <a href="https://pypi.org/project/prefect/">
19 <img src="https://img.shields.io/pypi/dm/prefect.svg?color=%2327B1FF&label=installs&logoColor=%234D606E">
20 </a>
21
22 <a href="https://hub.docker.com/r/prefecthq/prefect">
23 <img src="https://img.shields.io/docker/pulls/prefecthq/prefect.svg?color=%2327B1FF&logoColor=%234D606E">
24 </a>
25
26 <a href="https://join.slack.com/t/prefect-community/shared_invite/enQtODQ3MTA2MjI4OTgyLTliYjEyYzljNTc2OThlMDE4YmViYzk3NDU4Y2EzMWZiODM0NmU3NjM0NjIyNWY0MGIxOGQzODMxNDMxYWYyOTE">
27 <img src="https://prefect-slackin.herokuapp.com/badge.svg">
28 </a>
29
30 </p>
31
32 ## Hello, world! 👋
33
34 We've rebuilt data engineering for the data science era.
35
36 Prefect is a new workflow management system, designed for modern infrastructure and powered by the open-source Prefect Core workflow engine. Users organize `Tasks` into `Flows`, and Prefect takes care of the rest.
37
38 Read the [docs](https://docs.prefect.io); get the [code](#installation); ask us [anything](https://join.slack.com/t/prefect-community/shared_invite/enQtODQ3MTA2MjI4OTgyLTliYjEyYzljNTc2OThlMDE4YmViYzk3NDU4Y2EzMWZiODM0NmU3NjM0NjIyNWY0MGIxOGQzODMxNDMxYWYyOTE)!
39
40 ### Welcome to Workflows
41
42 Prefect's Pythonic API should feel familiar for newcomers. Mark functions as tasks and call them on each other to build up a flow.
43
44 ```python
45 from prefect import task, Flow, Parameter
46
47
48 @task(log_stdout=True)
49 def say_hello(name):
50 print("Hello, {}!".format(name))
51
52
53 with Flow("My First Flow") as flow:
54 name = Parameter('name')
55 say_hello(name)
56
57
58 flow.run(name='world') # "Hello, world!"
59 flow.run(name='Marvin') # "Hello, Marvin!"
60 ```
61
62 For more detail, please see the [Core docs](https://docs.prefect.io/core/)
63
64 ### UI and Server
65
66 <p align="center" style="margin-bottom:40px;">
67 <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/orchestration/ui/dashboard-overview.png" height=440 style="max-height: 440px;">
68 </p>
69
70 In addition to the [Prefect Cloud](https://www.prefect.io/cloud) platform, Prefect includes an open-source server and UI for orchestrating and managing flows. The local server stores flow metadata in a Postgres database and exposes a GraphQL API.
71
72 Before running the server for the first time, run `prefect backend server` to configure Prefect for local orchestration. Please note the server requires [Docker](https://www.docker.com/) and [Docker Compose](https://docs.docker.com/compose/install/) to be running.
73
74 To start the server, UI, and all required infrastructure, run:
75
76 ```
77 prefect server start
78 ```
79
80 Once all components are running, you can view the UI by visiting [http://localhost:8080](http://localhost:8080).
81
82 Please note that executing flows from the server requires at least one Prefect Agent to be running: `prefect agent start`.
83
84 Finally, to register any flow with the server, call `flow.register()`. For more detail, please see the [orchestration docs](https://docs.prefect.io/orchestration/).
85
86 ## "...Prefect?"
87
88 From the Latin _praefectus_, meaning "one who is in charge", a prefect is an official who oversees a domain and makes sure that the rules are followed. Similarly, Prefect is responsible for making sure that workflows execute properly.
89
90 It also happens to be the name of a roving researcher for that wholly remarkable book, _The Hitchhiker's Guide to the Galaxy_.
91
92 ## Integrations
93
94 Thanks to Prefect's growing task library and deep ecosystem integrations, building data applications is easier than ever.
95
96 Something missing? Open a [feature request](https://github.com/PrefectHQ/prefect/issues/new/choose) or [contribute a PR](https://docs.prefect.io/core/development/overview.html)! Prefect was designed to make adding new functionality extremely easy, whether you build on top of the open-source package or maintain an internal task library for your team.
97
98 ### Task Library
99
100 | | | | | |
101 | :---: | :---: | :---: | :---: | :---: |
102 | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/airtable.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Airtable</p>](https://docs.prefect.io/core/task_library/airtable.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/aws.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>AWS</p>](https://docs.prefect.io/core/task_library/aws.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/azure.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Azure</p>](https://docs.prefect.io/core/task_library/azure.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/azure_ml.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Azure ML</p>](https://docs.prefect.io/core/task_library/azureml.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/dbt.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>DBT</p>](https://docs.prefect.io/core/task_library/dbt.html) |
103 | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/docker.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Docker</p>](https://docs.prefect.io/core/task_library/docker.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/dropbox.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Dropbox</p>](https://docs.prefect.io/core/task_library/dropbox.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/email.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Email</p>](https://docs.prefect.io/core/task_library/email.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/google_cloud.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Google Cloud</p>](https://docs.prefect.io/core/task_library/gcp.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/github.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>GitHub</p>](https://docs.prefect.io/core/task_library/github.html) |
104 | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/jira.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Jira</p>](https://docs.prefect.io/core/task_library/jira.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/kubernetes.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Kubernetes</p>](https://docs.prefect.io/core/task_library/kubernetes.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/postgres.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>PostgreSQL</p>](https://docs.prefect.io/core/task_library/postgres.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/python.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Python</p>](https://docs.prefect.io/core/task_library/function.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/pushbullet.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Pushbullet</p>](https://docs.prefect.io/core/task_library/pushbullet.html) |
105 | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/redis.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Redis</p>](https://docs.prefect.io/core/task_library/redis.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/rss.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>RSS</p>](https://docs.prefect.io/core/task_library/rss.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/shell.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Shell</p>](https://docs.prefect.io/core/task_library/shell.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/slack.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Slack</p>](https://docs.prefect.io/core/task_library/slack.html)| <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/snowflake.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Snowflake</p>](https://docs.prefect.io/core/task_library/snowflake.html) |
106 | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/spacy.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>SpaCy</p>](https://docs.prefect.io/core/task_library/spacy.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/sqlite.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>SQLite</p>](https://docs.prefect.io/core/task_library/sqlite.html) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/twitter.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Twitter</p>](https://docs.prefect.io/core/task_library/twitter.html) |
107
108 ### Deployment & Execution
109
110 | | | | | |
111 | :---: | :---: | :---: | :---: | :---: |
112 | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/azure.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Azure</p>](https://azure.microsoft.com/en-us/) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/aws.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>AWS</p>](https://aws.amazon.com/) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/dask.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Dask</p>](https://dask.org/) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/docker.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Docker</p>](https://www.docker.com/) | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/google_cloud.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Google Cloud</p>](https://cloud.google.com/)
113 <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/kubernetes.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Kubernetes</p>](https://kubernetes.io/) | | | | <img src="https://raw.githubusercontent.com/PrefectHQ/prefect/master/docs/.vuepress/public/logos/shell.png" height=128 width=128 style="max-height: 128px; max-width: 128px;"> [<p>Universal Deploy</p>](https://medium.com/the-prefect-blog/introducing-prefect-universal-deploy-7992283e5911)
114
115 ## Resources
116
117 Prefect provides a variety of resources to help guide you to a successful outcome.
118
119 We are committed to ensuring a positive environment, and all interactions are governed by our [Code of Conduct](https://docs.prefect.io/core/code_of_conduct.html).
120
121 ### Documentation
122
123 Prefect's documentation -- including concepts, tutorials, and a full API reference -- is always available at [docs.prefect.io](https://docs.prefect.io).
124
125 Instructions for contributing to documentation can be found in the [development guide](https://docs.prefect.io/core/development/documentation.html).
126
127 ### Slack Community
128
129 Join our [Slack](https://join.slack.com/t/prefect-community/shared_invite/enQtODQ3MTA2MjI4OTgyLTliYjEyYzljNTc2OThlMDE4YmViYzk3NDU4Y2EzMWZiODM0NmU3NjM0NjIyNWY0MGIxOGQzODMxNDMxYWYyOTE) to chat about Prefect, ask questions, and share tips.
130
131 ### Blog
132
133 Visit the [Prefect Blog](https://medium.com/the-prefect-blog) for updates and insights from the Prefect team.
134
135 ### Support
136
137 Prefect offers a variety of community and premium [support options](https://www.prefect.io/support) for users of both Prefect Core and Prefect Cloud.
138
139 ### Contributing
140
141 Read about Prefect's [community](https://docs.prefect.io/core/community.html) or dive in to the [development guides](https://docs.prefect.io/core/development/overview.html) for information about contributions, documentation, code style, and testing.
142
143 ## Installation
144
145 ### Requirements
146
147 Prefect requires Python 3.6+. If you're new to Python, we recommend installing the [Anaconda distribution](https://www.anaconda.com/distribution/).
148
149 ### Latest Release
150
151 To install Prefect, run:
152
153 ```bash
154 pip install prefect
155 ```
156
157 or, if you prefer to use `conda`:
158
159 ```bash
160 conda install -c conda-forge prefect
161 ```
162
163 or `pipenv`:
164
165 ```bash
166 pipenv install --pre prefect
167 ```
168
169 ### Bleeding Edge
170
171 For development or just to try out the latest features, you may want to install Prefect directly from source.
172
173 Please note that the master branch of Prefect is not guaranteed to be compatible with Prefect Cloud or the local server.
174
175 ```bash
176 git clone https://github.com/PrefectHQ/prefect.git
177 pip install ./prefect
178 ```
179
180 ## License
181
182 Prefect is variously licensed under the [Apache Software License Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) or the [Prefect Community License](https://www.prefect.io/legal/prefect-community-license).
183
184 All code except the `/server` directory is Apache 2.0-licensed unless otherwise noted. The `/server` directory is licensed under the Prefect Community License.
185
[end of README.md]
[start of src/prefect/engine/executors/dask.py]
1 import logging
2 import uuid
3 import warnings
4 from contextlib import contextmanager
5 from typing import TYPE_CHECKING, Any, Callable, Iterator, List, Union
6
7 from prefect import context
8 from prefect.engine.executors.base import Executor
9 from prefect.utilities.importtools import import_object
10
11 if TYPE_CHECKING:
12 import dask
13 from distributed import Future
14
15
16 # XXX: remove when deprecation of DaskExecutor kwargs is done
17 _valid_client_kwargs = {
18 "timeout",
19 "set_as_default",
20 "scheduler_file",
21 "security",
22 "name",
23 "direct_to_workers",
24 "heartbeat_interval",
25 }
26
27
28 class DaskExecutor(Executor):
29 """
30 An executor that runs all functions using the `dask.distributed` scheduler.
31
32 By default a temporary `distributed.LocalCluster` is created (and
33 subsequently torn down) within the `start()` contextmanager. To use a
34 different cluster class (e.g.
35 [`dask_kubernetes.KubeCluster`](https://kubernetes.dask.org/)), you can
36 specify `cluster_class`/`cluster_kwargs`.
37
38 Alternatively, if you already have a dask cluster running, you can provide
39 the address of the scheduler via the `address` kwarg.
40
41 Note that if you have tasks with tags of the form `"dask-resource:KEY=NUM"`
42 they will be parsed and passed as
43 [Worker Resources](https://distributed.dask.org/en/latest/resources.html)
44 of the form `{"KEY": float(NUM)}` to the Dask Scheduler.
45
46 Args:
47 - address (string, optional): address of a currently running dask
48 scheduler; if one is not provided, a temporary cluster will be
49 created in `executor.start()`. Defaults to `None`.
50 - cluster_class (string or callable, optional): the cluster class to use
51 when creating a temporary dask cluster. Can be either the full
52 class name (e.g. `"distributed.LocalCluster"`), or the class itself.
53 - cluster_kwargs (dict, optional): addtional kwargs to pass to the
54 `cluster_class` when creating a temporary dask cluster.
55 - adapt_kwargs (dict, optional): additional kwargs to pass to ``cluster.adapt`
56 when creating a temporary dask cluster. Note that adaptive scaling
57 is only enabled if `adapt_kwargs` are provided.
58 - client_kwargs (dict, optional): additional kwargs to use when creating a
59 [`dask.distributed.Client`](https://distributed.dask.org/en/latest/api.html#client).
60 - debug (bool, optional): When running with a local cluster, setting
61 `debug=True` will increase dask's logging level, providing
62 potentially useful debug info. Defaults to the `debug` value in
63 your Prefect configuration.
64 - **kwargs: DEPRECATED
65
66 Example:
67
68 Using a temporary local dask cluster:
69
70 ```python
71 executor = DaskExecutor()
72 ```
73
74 Using a temporary cluster running elsewhere. Any Dask cluster class should
75 work, here we use [dask-cloudprovider](https://cloudprovider.dask.org):
76
77 ```python
78 executor = DaskExecutor(
79 cluster_class="dask_cloudprovider.FargateCluster",
80 cluster_kwargs={
81 "image": "prefecthq/prefect:latest",
82 "n_workers": 5,
83 ...
84 },
85 )
86 ```
87
88 Connecting to an existing dask cluster
89
90 ```python
91 executor = DaskExecutor(address="192.0.2.255:8786")
92 ```
93 """
94
95 def __init__(
96 self,
97 address: str = None,
98 cluster_class: Union[str, Callable] = None,
99 cluster_kwargs: dict = None,
100 adapt_kwargs: dict = None,
101 client_kwargs: dict = None,
102 debug: bool = None,
103 **kwargs: Any
104 ):
105 if address is None:
106 address = context.config.engine.executor.dask.address or None
107 # XXX: deprecated
108 if address == "local":
109 warnings.warn(
110 "`address='local'` is deprecated. To use a local cluster, leave the "
111 "`address` field empty."
112 )
113 address = None
114
115 # XXX: deprecated
116 local_processes = kwargs.pop("local_processes", None)
117 if local_processes is None:
118 local_processes = context.config.engine.executor.dask.get(
119 "local_processes", None
120 )
121 if local_processes is not None:
122 warnings.warn(
123 "`local_processes` is deprecated, please use "
124 "`cluster_kwargs={'processes': local_processes}`. The default is "
125 "now `local_processes=True`."
126 )
127
128 if address is not None:
129 if cluster_class is not None or cluster_kwargs is not None:
130 raise ValueError(
131 "Cannot specify `address` and `cluster_class`/`cluster_kwargs`"
132 )
133 else:
134 if cluster_class is None:
135 cluster_class = context.config.engine.executor.dask.cluster_class
136 if isinstance(cluster_class, str):
137 cluster_class = import_object(cluster_class)
138 if cluster_kwargs is None:
139 cluster_kwargs = {}
140 else:
141 cluster_kwargs = cluster_kwargs.copy()
142
143 from distributed.deploy.local import LocalCluster
144
145 if cluster_class == LocalCluster:
146 if debug is None:
147 debug = context.config.debug
148 cluster_kwargs.setdefault(
149 "silence_logs", logging.CRITICAL if not debug else logging.WARNING
150 )
151 if local_processes is not None:
152 cluster_kwargs.setdefault("processes", local_processes)
153 for_cluster = set(kwargs).difference(_valid_client_kwargs)
154 if for_cluster:
155 warnings.warn(
156 "Forwarding executor kwargs to `LocalCluster` is now handled by the "
157 "`cluster_kwargs` parameter, please update accordingly"
158 )
159 for k in for_cluster:
160 cluster_kwargs[k] = kwargs.pop(k)
161
162 if adapt_kwargs is None:
163 adapt_kwargs = {}
164
165 if client_kwargs is None:
166 client_kwargs = {}
167 if kwargs:
168 warnings.warn(
169 "Forwarding executor kwargs to `Client` is now handled by the "
170 "`client_kwargs` parameter, please update accordingly"
171 )
172 client_kwargs.update(kwargs)
173
174 self.address = address
175 self.is_started = False
176 self.cluster_class = cluster_class
177 self.cluster_kwargs = cluster_kwargs
178 self.adapt_kwargs = adapt_kwargs
179 self.client_kwargs = client_kwargs
180
181 super().__init__()
182
183 @contextmanager
184 def start(self) -> Iterator[None]:
185 """
186 Context manager for initializing execution.
187
188 Creates a `dask.distributed.Client` and yields it.
189 """
190 # import dask client here to decrease our import times
191 from distributed import Client
192
193 try:
194 if self.address is not None:
195 with Client(self.address, **self.client_kwargs) as client:
196 self.client = client
197 self.is_started = True
198 yield self.client
199 else:
200 with self.cluster_class(**self.cluster_kwargs) as cluster: # type: ignore
201 if self.adapt_kwargs:
202 cluster.adapt(**self.adapt_kwargs)
203 with Client(cluster, **self.client_kwargs) as client:
204 self.client = client
205 self.is_started = True
206 yield self.client
207 finally:
208 self.client = None
209 self.is_started = False
210
211 def _prep_dask_kwargs(self) -> dict:
212 dask_kwargs = {"pure": False} # type: dict
213
214 # set a key for the dask scheduler UI
215 if context.get("task_full_name"):
216 key = "{}-{}".format(context.get("task_full_name", ""), str(uuid.uuid4()))
217 dask_kwargs.update(key=key)
218
219 # infer from context if dask resources are being utilized
220 dask_resource_tags = [
221 tag
222 for tag in context.get("task_tags", [])
223 if tag.lower().startswith("dask-resource")
224 ]
225 if dask_resource_tags:
226 resources = {}
227 for tag in dask_resource_tags:
228 prefix, val = tag.split("=")
229 resources.update({prefix.split(":")[1]: float(val)})
230 dask_kwargs.update(resources=resources)
231
232 return dask_kwargs
233
234 def __getstate__(self) -> dict:
235 state = self.__dict__.copy()
236 if "client" in state:
237 del state["client"]
238 return state
239
240 def __setstate__(self, state: dict) -> None:
241 self.__dict__.update(state)
242
243 def submit(self, fn: Callable, *args: Any, **kwargs: Any) -> "Future":
244 """
245 Submit a function to the executor for execution. Returns a Future object.
246
247 Args:
248 - fn (Callable): function that is being submitted for execution
249 - *args (Any): arguments to be passed to `fn`
250 - **kwargs (Any): keyword arguments to be passed to `fn`
251
252 Returns:
253 - Future: a Future-like object that represents the computation of `fn(*args, **kwargs)`
254 """
255 # import dask functions here to decrease our import times
256 from distributed import fire_and_forget, worker_client
257
258 dask_kwargs = self._prep_dask_kwargs()
259 kwargs.update(dask_kwargs)
260
261 if self.is_started and hasattr(self, "client"):
262 future = self.client.submit(fn, *args, **kwargs)
263 elif self.is_started:
264 with worker_client(separate_thread=True) as client:
265 future = client.submit(fn, *args, **kwargs)
266 else:
267 raise ValueError("This executor has not been started.")
268
269 fire_and_forget(future)
270 return future
271
272 def map(self, fn: Callable, *args: Any, **kwargs: Any) -> List["Future"]:
273 """
274 Submit a function to be mapped over its iterable arguments.
275
276 Args:
277 - fn (Callable): function that is being submitted for execution
278 - *args (Any): arguments that the function will be mapped over
279 - **kwargs (Any): additional keyword arguments that will be passed to the Dask Client
280
281 Returns:
282 - List[Future]: a list of Future-like objects that represent each computation of
283 fn(*a), where a = zip(*args)[i]
284
285 """
286 if not args:
287 return []
288
289 # import dask functions here to decrease our import times
290 from distributed import fire_and_forget, worker_client
291
292 dask_kwargs = self._prep_dask_kwargs()
293 kwargs.update(dask_kwargs)
294
295 if self.is_started and hasattr(self, "client"):
296 futures = self.client.map(fn, *args, **kwargs)
297 elif self.is_started:
298 with worker_client(separate_thread=True) as client:
299 futures = client.map(fn, *args, **kwargs)
300 return client.gather(futures)
301 else:
302 raise ValueError("This executor has not been started.")
303
304 fire_and_forget(futures)
305 return futures
306
307 def wait(self, futures: Any) -> Any:
308 """
309 Resolves the Future objects to their values. Blocks until the computation is complete.
310
311 Args:
312 - futures (Any): single or iterable of future-like objects to compute
313
314 Returns:
315 - Any: an iterable of resolved futures with similar shape to the input
316 """
317 # import dask functions here to decrease our import times
318 from distributed import worker_client
319
320 if self.is_started and hasattr(self, "client"):
321 return self.client.gather(futures)
322 elif self.is_started:
323 with worker_client(separate_thread=True) as client:
324 return client.gather(futures)
325 else:
326 raise ValueError("This executor has not been started.")
327
328
329 class LocalDaskExecutor(Executor):
330 """
331 An executor that runs all functions locally using `dask` and a configurable dask scheduler. Note that
332 this executor is known to occasionally run tasks twice when using multi-level mapping.
333
334 Prefect's mapping feature will not work in conjunction with setting `scheduler="processes"`.
335
336 Args:
337 - scheduler (str): The local dask scheduler to use; common options are "synchronous", "threads" and "processes". Defaults to "threads".
338 - **kwargs (Any): Additional keyword arguments to pass to dask config
339 """
340
341 def __init__(self, scheduler: str = "threads", **kwargs: Any):
342 self.scheduler = scheduler
343 self.kwargs = kwargs
344 super().__init__()
345
346 @contextmanager
347 def start(self) -> Iterator:
348 """
349 Context manager for initializing execution.
350
351 Configures `dask` and yields the `dask.config` contextmanager.
352 """
353 # import dask here to reduce prefect import times
354 import dask
355
356 with dask.config.set(scheduler=self.scheduler, **self.kwargs) as cfg:
357 yield cfg
358
359 def submit(self, fn: Callable, *args: Any, **kwargs: Any) -> "dask.delayed":
360 """
361 Submit a function to the executor for execution. Returns a `dask.delayed` object.
362
363 Args:
364 - fn (Callable): function that is being submitted for execution
365 - *args (Any): arguments to be passed to `fn`
366 - **kwargs (Any): keyword arguments to be passed to `fn`
367
368 Returns:
369 - dask.delayed: a `dask.delayed` object that represents the computation of `fn(*args, **kwargs)`
370 """
371 # import dask here to reduce prefect import times
372 import dask
373
374 return dask.delayed(fn)(*args, **kwargs)
375
376 def map(self, fn: Callable, *args: Any) -> List["dask.delayed"]:
377 """
378 Submit a function to be mapped over its iterable arguments.
379
380 Args:
381 - fn (Callable): function that is being submitted for execution
382 - *args (Any): arguments that the function will be mapped over
383
384 Returns:
385 - List[dask.delayed]: the result of computating the function over the arguments
386
387 """
388 if self.scheduler == "processes":
389 raise RuntimeError(
390 "LocalDaskExecutor cannot map if scheduler='processes'. Please set to either 'synchronous' or 'threads'."
391 )
392
393 results = []
394 for args_i in zip(*args):
395 results.append(self.submit(fn, *args_i))
396 return results
397
398 def wait(self, futures: Any) -> Any:
399 """
400 Resolves a `dask.delayed` object to its values. Blocks until the computation is complete.
401
402 Args:
403 - futures (Any): iterable of `dask.delayed` objects to compute
404
405 Returns:
406 - Any: an iterable of resolved futures
407 """
408 # import dask here to reduce prefect import times
409 import dask
410
411 with dask.config.set(scheduler=self.scheduler, **self.kwargs):
412 return dask.compute(futures)[0]
413
[end of src/prefect/engine/executors/dask.py]
[start of src/prefect/engine/flow_runner.py]
1 from typing import (
2 Any,
3 Callable,
4 Dict,
5 Iterable,
6 NamedTuple,
7 Optional,
8 Set,
9 Union,
10 )
11
12 import pendulum
13
14 import prefect
15 from prefect.core import Edge, Flow, Task
16 from prefect.engine.result import Result
17 from prefect.engine.results import ConstantResult
18 from prefect.engine.runner import ENDRUN, Runner, call_state_handlers
19 from prefect.engine.state import (
20 Cancelled,
21 Failed,
22 Mapped,
23 Pending,
24 Retrying,
25 Running,
26 Scheduled,
27 State,
28 Success,
29 )
30 from prefect.utilities.collections import flatten_seq
31 from prefect.utilities.executors import run_with_heartbeat
32
33 FlowRunnerInitializeResult = NamedTuple(
34 "FlowRunnerInitializeResult",
35 [
36 ("state", State),
37 ("task_states", Dict[Task, State]),
38 ("context", Dict[str, Any]),
39 ("task_contexts", Dict[Task, Dict[str, Any]]),
40 ],
41 )
42
43
44 class FlowRunner(Runner):
45 """
46 FlowRunners handle the execution of Flows and determine the State of a Flow
47 before, during and after the Flow is run.
48
49 In particular, through the FlowRunner you can specify which tasks should be
50 the first tasks to run, which tasks should be returned after the Flow is finished,
51 and what states each task should be initialized with.
52
53 Args:
54 - flow (Flow): the `Flow` to be run
55 - task_runner_cls (TaskRunner, optional): The class used for running
56 individual Tasks. Defaults to [TaskRunner](task_runner.html)
57 - state_handlers (Iterable[Callable], optional): A list of state change handlers
58 that will be called whenever the flow changes state, providing an
59 opportunity to inspect or modify the new state. The handler
60 will be passed the flow runner instance, the old (prior) state, and the new
61 (current) state, with the following signature:
62 `state_handler(fr: FlowRunner, old_state: State, new_state: State) -> Optional[State]`
63 If multiple functions are passed, then the `new_state` argument will be the
64 result of the previous handler.
65
66 Note: new FlowRunners are initialized within the call to `Flow.run()` and in general,
67 this is the endpoint through which FlowRunners will be interacted with most frequently.
68
69 Example:
70 ```python
71 @task
72 def say_hello():
73 print('hello')
74
75 with Flow("My Flow") as f:
76 say_hello()
77
78 fr = FlowRunner(flow=f)
79 flow_state = fr.run()
80 ```
81 """
82
83 def __init__(
84 self,
85 flow: Flow,
86 task_runner_cls: type = None,
87 state_handlers: Iterable[Callable] = None,
88 ):
89 self.context = prefect.context.to_dict()
90 self.flow = flow
91 if task_runner_cls is None:
92 task_runner_cls = prefect.engine.get_default_task_runner_class()
93 self.task_runner_cls = task_runner_cls
94 super().__init__(state_handlers=state_handlers)
95
96 def __repr__(self) -> str:
97 return "<{}: {}>".format(type(self).__name__, self.flow.name)
98
99 def call_runner_target_handlers(self, old_state: State, new_state: State) -> State:
100 """
101 A special state handler that the FlowRunner uses to call its flow's state handlers.
102 This method is called as part of the base Runner's `handle_state_change()` method.
103
104 Args:
105 - old_state (State): the old (previous) state
106 - new_state (State): the new (current) state
107
108 Returns:
109 - State: the new state
110 """
111 self.logger.debug(
112 "Flow '{name}': Handling state change from {old} to {new}".format(
113 name=self.flow.name,
114 old=type(old_state).__name__,
115 new=type(new_state).__name__,
116 )
117 )
118 for handler in self.flow.state_handlers:
119 new_state = handler(self.flow, old_state, new_state) or new_state
120
121 return new_state
122
123 def initialize_run( # type: ignore
124 self,
125 state: Optional[State],
126 task_states: Dict[Task, State],
127 context: Dict[str, Any],
128 task_contexts: Dict[Task, Dict[str, Any]],
129 parameters: Dict[str, Any],
130 ) -> FlowRunnerInitializeResult:
131 """
132 Initializes the Task run by initializing state and context appropriately.
133
134 If the provided state is a Submitted state, the state it wraps is extracted.
135
136 Args:
137 - state (Optional[State]): the initial state of the run
138 - task_states (Dict[Task, State]): a dictionary of any initial task states
139 - context (Dict[str, Any], optional): prefect.Context to use for execution
140 to use for each Task run
141 - task_contexts (Dict[Task, Dict[str, Any]], optional): contexts that will be provided to each task
142 - parameters(dict): the parameter values for the run
143
144 Returns:
145 - NamedTuple: a tuple of initialized objects:
146 `(state, task_states, context, task_contexts)`
147 """
148
149 # overwrite context parameters one-by-one
150 if parameters:
151 context_params = context.setdefault("parameters", {})
152 for param, value in parameters.items():
153 context_params[param] = value
154
155 context.update(flow_name=self.flow.name)
156 context.setdefault("scheduled_start_time", pendulum.now("utc"))
157
158 # add various formatted dates to context
159 now = pendulum.now("utc")
160 dates = {
161 "date": now,
162 "today": now.strftime("%Y-%m-%d"),
163 "yesterday": now.add(days=-1).strftime("%Y-%m-%d"),
164 "tomorrow": now.add(days=1).strftime("%Y-%m-%d"),
165 "today_nodash": now.strftime("%Y%m%d"),
166 "yesterday_nodash": now.add(days=-1).strftime("%Y%m%d"),
167 "tomorrow_nodash": now.add(days=1).strftime("%Y%m%d"),
168 }
169 for key, val in dates.items():
170 context.setdefault(key, val)
171
172 for task in self.flow.tasks:
173 task_contexts.setdefault(task, {}).update(
174 task_name=task.name, task_slug=task.slug
175 )
176 state, context = super().initialize_run(state=state, context=context)
177 return FlowRunnerInitializeResult(
178 state=state,
179 task_states=task_states,
180 context=context,
181 task_contexts=task_contexts,
182 )
183
184 def run(
185 self,
186 state: State = None,
187 task_states: Dict[Task, State] = None,
188 return_tasks: Iterable[Task] = None,
189 parameters: Dict[str, Any] = None,
190 task_runner_state_handlers: Iterable[Callable] = None,
191 executor: "prefect.engine.executors.Executor" = None,
192 context: Dict[str, Any] = None,
193 task_contexts: Dict[Task, Dict[str, Any]] = None,
194 ) -> State:
195 """
196 The main endpoint for FlowRunners. Calling this method will perform all
197 computations contained within the Flow and return the final state of the Flow.
198
199 Args:
200 - state (State, optional): starting state for the Flow. Defaults to
201 `Pending`
202 - task_states (dict, optional): dictionary of task states to begin
203 computation with, with keys being Tasks and values their corresponding state
204 - return_tasks ([Task], optional): list of Tasks to include in the
205 final returned Flow state. Defaults to `None`
206 - parameters (dict, optional): dictionary of any needed Parameter
207 values, with keys being strings representing Parameter names and values being
208 their corresponding values
209 - task_runner_state_handlers (Iterable[Callable], optional): A list of state change
210 handlers that will be provided to the task_runner, and called whenever a task changes
211 state.
212 - executor (Executor, optional): executor to use when performing
213 computation; defaults to the executor specified in your prefect configuration
214 - context (Dict[str, Any], optional): prefect.Context to use for execution
215 to use for each Task run
216 - task_contexts (Dict[Task, Dict[str, Any]], optional): contexts that will be provided to each task
217
218 Returns:
219 - State: `State` representing the final post-run state of the `Flow`.
220
221 """
222
223 self.logger.info("Beginning Flow run for '{}'".format(self.flow.name))
224
225 # make copies to avoid modifying user inputs
226 task_states = dict(task_states or {})
227 context = dict(context or {})
228 task_contexts = dict(task_contexts or {})
229 parameters = dict(parameters or {})
230 if executor is None:
231 executor = prefect.engine.get_default_executor_class()()
232
233 try:
234 state, task_states, context, task_contexts = self.initialize_run(
235 state=state,
236 task_states=task_states,
237 context=context,
238 task_contexts=task_contexts,
239 parameters=parameters,
240 )
241
242 with prefect.context(context):
243 state = self.check_flow_is_pending_or_running(state)
244 state = self.check_flow_reached_start_time(state)
245 state = self.set_flow_to_running(state)
246 state = self.get_flow_run_state(
247 state,
248 task_states=task_states,
249 task_contexts=task_contexts,
250 return_tasks=return_tasks,
251 task_runner_state_handlers=task_runner_state_handlers,
252 executor=executor,
253 )
254
255 except ENDRUN as exc:
256 state = exc.state
257
258 except KeyboardInterrupt:
259 self.logger.debug("Interrupt signal raised, cancelling Flow run.")
260 state = Cancelled(message="Interrupt signal raised, cancelling flow run.")
261
262 # All other exceptions are trapped and turned into Failed states
263 except Exception as exc:
264 self.logger.exception(
265 "Unexpected error while running flow: {}".format(repr(exc))
266 )
267 if prefect.context.get("raise_on_exception"):
268 raise exc
269 new_state = Failed(
270 message="Unexpected error while running flow: {}".format(repr(exc)),
271 result=exc,
272 )
273 state = self.handle_state_change(state or Pending(), new_state)
274
275 return state
276
277 @call_state_handlers
278 def check_flow_reached_start_time(self, state: State) -> State:
279 """
280 Checks if the Flow is in a Scheduled state and, if it is, ensures that the scheduled
281 time has been reached.
282
283 Args:
284 - state (State): the current state of this Flow
285
286 Returns:
287 - State: the state of the flow after performing the check
288
289 Raises:
290 - ENDRUN: if the flow is Scheduled with a future scheduled time
291 """
292 if isinstance(state, Scheduled):
293 if state.start_time and state.start_time > pendulum.now("utc"):
294 self.logger.debug(
295 "Flow '{name}': start_time has not been reached; ending run.".format(
296 name=self.flow.name
297 )
298 )
299 raise ENDRUN(state)
300 return state
301
302 @call_state_handlers
303 def check_flow_is_pending_or_running(self, state: State) -> State:
304 """
305 Checks if the flow is in either a Pending state or Running state. Either are valid
306 starting points (because we allow simultaneous runs of the same flow run).
307
308 Args:
309 - state (State): the current state of this flow
310
311 Returns:
312 - State: the state of the flow after running the check
313
314 Raises:
315 - ENDRUN: if the flow is not pending or running
316 """
317
318 # the flow run is already finished
319 if state.is_finished() is True:
320 self.logger.info("Flow run has already finished.")
321 raise ENDRUN(state)
322
323 # the flow run must be either pending or running (possibly redundant with above)
324 elif not (state.is_pending() or state.is_running()):
325 self.logger.info("Flow is not ready to run.")
326 raise ENDRUN(state)
327
328 return state
329
330 @call_state_handlers
331 def set_flow_to_running(self, state: State) -> State:
332 """
333 Puts Pending flows in a Running state; leaves Running flows Running.
334
335 Args:
336 - state (State): the current state of this flow
337
338 Returns:
339 - State: the state of the flow after running the check
340
341 Raises:
342 - ENDRUN: if the flow is not pending or running
343 """
344 if state.is_pending():
345 self.logger.info("Starting flow run.")
346 return Running(message="Running flow.")
347 elif state.is_running():
348 return state
349 else:
350 raise ENDRUN(state)
351
352 @run_with_heartbeat
353 @call_state_handlers
354 def get_flow_run_state(
355 self,
356 state: State,
357 task_states: Dict[Task, State],
358 task_contexts: Dict[Task, Dict[str, Any]],
359 return_tasks: Set[Task],
360 task_runner_state_handlers: Iterable[Callable],
361 executor: "prefect.engine.executors.base.Executor",
362 ) -> State:
363 """
364 Runs the flow.
365
366 Args:
367 - state (State): starting state for the Flow. Defaults to
368 `Pending`
369 - task_states (dict): dictionary of task states to begin
370 computation with, with keys being Tasks and values their corresponding state
371 - task_contexts (Dict[Task, Dict[str, Any]]): contexts that will be provided to each task
372 - return_tasks ([Task], optional): list of Tasks to include in the
373 final returned Flow state. Defaults to `None`
374 - task_runner_state_handlers (Iterable[Callable]): A list of state change
375 handlers that will be provided to the task_runner, and called whenever a task changes
376 state.
377 - executor (Executor): executor to use when performing
378 computation; defaults to the executor provided in your prefect configuration
379
380 Returns:
381 - State: `State` representing the final post-run state of the `Flow`.
382
383 """
384
385 if not state.is_running():
386 self.logger.info("Flow is not in a Running state.")
387 raise ENDRUN(state)
388
389 if return_tasks is None:
390 return_tasks = set()
391 if set(return_tasks).difference(self.flow.tasks):
392 raise ValueError("Some tasks in return_tasks were not found in the flow.")
393
394 # -- process each task in order
395
396 with executor.start():
397
398 for task in self.flow.sorted_tasks():
399
400 task_state = task_states.get(task)
401 if task_state is None and isinstance(
402 task, prefect.tasks.core.constants.Constant
403 ):
404 task_states[task] = task_state = Success(result=task.value)
405
406 # if the state is finished, don't run the task, just use the provided state
407 if (
408 isinstance(task_state, State)
409 and task_state.is_finished()
410 and not task_state.is_cached()
411 and not task_state.is_mapped()
412 ):
413 continue
414
415 upstream_states = {} # type: Dict[Edge, Union[State, Iterable]]
416
417 # -- process each edge to the task
418 for edge in self.flow.edges_to(task):
419 upstream_states[edge] = task_states.get(
420 edge.upstream_task, Pending(message="Task state not available.")
421 )
422
423 # augment edges with upstream constants
424 for key, val in self.flow.constants[task].items():
425 edge = Edge(
426 upstream_task=prefect.tasks.core.constants.Constant(val),
427 downstream_task=task,
428 key=key,
429 )
430 upstream_states[edge] = Success(
431 "Auto-generated constant value",
432 result=ConstantResult(value=val),
433 )
434
435 # -- run the task
436
437 with prefect.context(task_full_name=task.name, task_tags=task.tags):
438 task_states[task] = executor.submit(
439 self.run_task,
440 task=task,
441 state=task_state,
442 upstream_states=upstream_states,
443 context=dict(prefect.context, **task_contexts.get(task, {})),
444 task_runner_state_handlers=task_runner_state_handlers,
445 executor=executor,
446 )
447
448 # ---------------------------------------------
449 # Collect results
450 # ---------------------------------------------
451
452 # terminal tasks determine if the flow is finished
453 terminal_tasks = self.flow.terminal_tasks()
454
455 # reference tasks determine flow state
456 reference_tasks = self.flow.reference_tasks()
457
458 # wait until all terminal tasks are finished
459 final_tasks = terminal_tasks.union(reference_tasks).union(return_tasks)
460 final_states = executor.wait(
461 {
462 t: task_states.get(t, Pending("Task not evaluated by FlowRunner."))
463 for t in final_tasks
464 }
465 )
466
467 # also wait for any children of Mapped tasks to finish, and add them
468 # to the dictionary to determine flow state
469 all_final_states = final_states.copy()
470 for t, s in list(final_states.items()):
471 if s.is_mapped():
472 s.map_states = executor.wait(s.map_states)
473 s.result = [ms.result for ms in s.map_states]
474 all_final_states[t] = s.map_states
475
476 assert isinstance(final_states, dict)
477
478 key_states = set(flatten_seq([all_final_states[t] for t in reference_tasks]))
479 terminal_states = set(
480 flatten_seq([all_final_states[t] for t in terminal_tasks])
481 )
482 return_states = {t: final_states[t] for t in return_tasks}
483
484 state = self.determine_final_state(
485 state=state,
486 key_states=key_states,
487 return_states=return_states,
488 terminal_states=terminal_states,
489 )
490
491 return state
492
493 def determine_final_state(
494 self,
495 state: State,
496 key_states: Set[State],
497 return_states: Dict[Task, State],
498 terminal_states: Set[State],
499 ) -> State:
500 """
501 Implements the logic for determining the final state of the flow run.
502
503 Args:
504 - state (State): the current state of the Flow
505 - key_states (Set[State]): the states which will determine the success / failure of the flow run
506 - return_states (Dict[Task, State]): states to return as results
507 - terminal_states (Set[State]): the states of the terminal tasks for this flow
508
509 Returns:
510 - State: the final state of the flow run
511 """
512 # check that the flow is finished
513 if not all(s.is_finished() for s in terminal_states):
514 self.logger.info("Flow run RUNNING: terminal tasks are incomplete.")
515 state.result = return_states
516
517 # check if any key task failed
518 elif any(s.is_failed() for s in key_states):
519 self.logger.info("Flow run FAILED: some reference tasks failed.")
520 state = Failed(message="Some reference tasks failed.", result=return_states)
521
522 # check if all reference tasks succeeded
523 elif all(s.is_successful() for s in key_states):
524 self.logger.info("Flow run SUCCESS: all reference tasks succeeded")
525 state = Success(
526 message="All reference tasks succeeded.", result=return_states
527 )
528
529 # check for any unanticipated state that is finished but neither success nor failed
530 else:
531 self.logger.info("Flow run SUCCESS: no reference tasks failed")
532 state = Success(message="No reference tasks failed.", result=return_states)
533
534 return state
535
536 def run_task(
537 self,
538 task: Task,
539 state: State,
540 upstream_states: Dict[Edge, State],
541 context: Dict[str, Any],
542 task_runner_state_handlers: Iterable[Callable],
543 executor: "prefect.engine.executors.Executor",
544 ) -> State:
545 """
546
547 Runs a specific task. This method is intended to be called by submitting it to
548 an executor.
549
550 Args:
551 - task (Task): the task to run
552 - state (State): starting state for the Task. Defaults to
553 `Pending`
554 - upstream_states (Dict[Edge, State]): dictionary of upstream states
555 - context (Dict[str, Any]): a context dictionary for the task run
556 - task_runner_state_handlers (Iterable[Callable]): A list of state change
557 handlers that will be provided to the task_runner, and called whenever a task changes
558 state.
559 - executor (Executor): executor to use when performing
560 computation; defaults to the executor provided in your prefect configuration
561
562 Returns:
563 - State: `State` representing the final post-run state of the `Task`.
564
565 """
566 with prefect.context(self.context):
567 default_result = task.result or self.flow.result
568 task_runner = self.task_runner_cls(
569 task=task,
570 state_handlers=task_runner_state_handlers,
571 result=default_result or Result(),
572 default_result=self.flow.result,
573 )
574
575 # if this task reduces over a mapped state, make sure its children have finished
576 for edge, upstream_state in upstream_states.items():
577
578 # if the upstream state is Mapped, wait until its results are all available
579 if not edge.mapped and upstream_state.is_mapped():
580 assert isinstance(upstream_state, Mapped) # mypy assert
581 upstream_state.map_states = executor.wait(upstream_state.map_states)
582 upstream_state.result = [
583 s.result for s in upstream_state.map_states
584 ]
585
586 return task_runner.run(
587 state=state,
588 upstream_states=upstream_states,
589 context=context,
590 executor=executor,
591 )
592
[end of src/prefect/engine/flow_runner.py]
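For illustration, here is a minimal sketch of driving the `FlowRunner` above directly. The code is hypothetical (not taken from the repository) and assumes Prefect's 0.x functional API; the flow and task names are invented.

from prefect import Flow, task
from prefect.engine.flow_runner import FlowRunner

@task
def add(x, y):
    return x + y

with Flow("adder") as flow:
    total = add(1, 2)

# return_tasks asks the runner to include those tasks' states in the flow state
flow_state = FlowRunner(flow=flow).run(return_tasks=[total])
assert flow_state.is_successful()
print(flow_state.result[total].result)  # expected: 3
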
[start of src/prefect/engine/task_runner.py]
1 import copy
2 from contextlib import redirect_stdout
3 import itertools
4 import json
5 from typing import (
6 Any,
7 Callable,
8 Dict,
9 Iterable,
10 List,
11 NamedTuple,
12 Optional,
13 Set,
14 Tuple,
15 )
16
17 import pendulum
18
19 import prefect
20 from prefect import config
21 from prefect.core import Edge, Task
22 from prefect.engine import signals
23 from prefect.engine.result import NoResult, Result
24 from prefect.engine.results import PrefectResult
25 from prefect.engine.runner import ENDRUN, Runner, call_state_handlers
26 from prefect.engine.state import (
27 Cached,
28 Cancelled,
29 Failed,
30 Looped,
31 Mapped,
32 Paused,
33 Pending,
34 Resume,
35 Retrying,
36 Running,
37 Scheduled,
38 Skipped,
39 State,
40 Submitted,
41 Success,
42 TimedOut,
43 TriggerFailed,
44 )
45 from prefect.utilities.executors import (
46 RecursiveCall,
47 run_with_heartbeat,
48 tail_recursive,
49 )
50
51
52 TaskRunnerInitializeResult = NamedTuple(
53 "TaskRunnerInitializeResult", [("state", State), ("context", Dict[str, Any])]
54 )
55
56
57 class TaskRunner(Runner):
58 """
59 TaskRunners handle the execution of Tasks and determine the State of a Task
60 before, during and after the Task is run.
61
62 In particular, through the TaskRunner you can specify the states of any upstream dependencies
63 and what state the Task should be initialized with.
64
65 Args:
66 - task (Task): the Task to be run / executed
67 - state_handlers (Iterable[Callable], optional): A list of state change handlers
68 that will be called whenever the task changes state, providing an
69 opportunity to inspect or modify the new state. The handler
70 will be passed the task runner instance, the old (prior) state, and the new
71 (current) state, with the following signature: `state_handler(TaskRunner, old_state, new_state) -> Optional[State]`;
72 If multiple functions are passed, then the `new_state` argument will be the
73 result of the previous handler.
74 - result (Result, optional): the result type to use for retrieving and storing state results
75 during execution (if the Task doesn't already have one)
76 - default_result (Result, optional): the fallback result type to use for retrieving and storing state results
77 during execution (to be used on upstream inputs if they don't provide their own results)
78 """
79
80 def __init__(
81 self,
82 task: Task,
83 state_handlers: Iterable[Callable] = None,
84 result: Result = None,
85 default_result: Result = None,
86 ):
87 self.context = prefect.context.to_dict()
88 self.task = task
89
90 # if the result was provided off the parent Flow object
91 # we want to use the task's target as the target location
92 if task.result:
93 self.result = task.result
94 else:
95 self.result = Result().copy() if result is None else result.copy()
96 if self.task.target:
97 self.result.location = self.task.target
98 self.default_result = default_result or Result()
99 super().__init__(state_handlers=state_handlers)
100
101 def __repr__(self) -> str:
102 return "<{}: {}>".format(type(self).__name__, self.task.name)
103
104 def call_runner_target_handlers(self, old_state: State, new_state: State) -> State:
105 """
106 A special state handler that the TaskRunner uses to call its task's state handlers.
107 This method is called as part of the base Runner's `handle_state_change()` method.
108
109 Args:
110 - old_state (State): the old (previous) state
111 - new_state (State): the new (current) state
112
113 Returns:
114 - State: the new state
115 """
116 self.logger.debug(
117 "Task '{name}': Handling state change from {old} to {new}".format(
118 name=prefect.context.get("task_full_name", self.task.name),
119 old=type(old_state).__name__,
120 new=type(new_state).__name__,
121 )
122 )
123 for handler in self.task.state_handlers:
124 new_state = handler(self.task, old_state, new_state) or new_state
125
126 return new_state
127
128 def initialize_run( # type: ignore
129 self, state: Optional[State], context: Dict[str, Any]
130 ) -> TaskRunnerInitializeResult:
131 """
132 Initializes the Task run by initializing state and context appropriately.
133
134 If the task is being retried, then we retrieve the run count from the initial Retry
135 state. Otherwise, we assume the run count is 1. The run count is stored in context as
136 task_run_count.
137
138 Also, if the task is being resumed through a `Resume` state, updates context to have `resume=True`.
139
140 Args:
141 - state (Optional[State]): the initial state of the run
142 - context (Dict[str, Any]): the context to be updated with relevant information
143
144 Returns:
145 - tuple: a tuple of the updated state, context, upstream_states, and inputs objects
146 """
147 state, context = super().initialize_run(state=state, context=context)
148
149 if isinstance(state, Retrying):
150 run_count = state.run_count + 1
151 else:
152 run_count = state.context.get("task_run_count", 1)
153
154 if isinstance(state, Resume):
155 context.update(resume=True)
156
157 if "_loop_count" in state.cached_inputs: # type: ignore
158 loop_result = state.cached_inputs.pop("_loop_result")
159 if loop_result.value is None and loop_result.location is not None:
160 loop_result_value = self.result.read(loop_result.location).value
161 else:
162 loop_result_value = loop_result.value
163 loop_context = {
164 "task_loop_count": json.loads(
165 state.cached_inputs.pop( # type: ignore
166 "_loop_count"
167 ).location
168 ), # type: ignore
169 "task_loop_result": loop_result_value,
170 }
171 context.update(loop_context)
172
173 context.update(
174 task_run_count=run_count,
175 task_name=self.task.name,
176 task_tags=self.task.tags,
177 task_slug=self.task.slug,
178 )
179 context.setdefault("checkpointing", config.flows.checkpointing)
180
181 map_index = context.get("map_index", None)
182 if isinstance(map_index, int) and context.get("task_full_name"):
183 context.update(
184 logger=prefect.utilities.logging.get_logger(
185 context.get("task_full_name")
186 )
187 )
188 else:
189 context.update(logger=self.task.logger)
190
191 return TaskRunnerInitializeResult(state=state, context=context)
192
193 @tail_recursive
194 def run(
195 self,
196 state: State = None,
197 upstream_states: Dict[Edge, State] = None,
198 context: Dict[str, Any] = None,
199 executor: "prefect.engine.executors.Executor" = None,
200 ) -> State:
201 """
202 The main endpoint for TaskRunners. Calling this method will conditionally execute
203 `self.task.run` with any provided inputs, assuming the upstream dependencies are in a
204 state which allow this Task to run.
205
206 Args:
207 - state (State, optional): initial `State` to begin task run from;
208 defaults to `Pending()`
209 - upstream_states (Dict[Edge, State]): a dictionary
210 representing the states of any tasks upstream of this one. The keys of the
211 dictionary should correspond to the edges leading to the task.
212 - context (dict, optional): prefect Context to use for execution
213 - executor (Executor, optional): executor to use when performing
214 computation; defaults to the executor specified in your prefect configuration
215
216 Returns:
217 - `State` object representing the final post-run state of the Task
218 """
219 upstream_states = upstream_states or {}
220 context = context or {}
221 map_index = context.setdefault("map_index", None)
222 context["task_full_name"] = "{name}{index}".format(
223 name=self.task.name,
224 index=("" if map_index is None else "[{}]".format(map_index)),
225 )
226
227 if executor is None:
228 executor = prefect.engine.get_default_executor_class()()
229
230 # if mapped is true, this task run is going to generate a Mapped state. It won't
231 # actually run, but rather spawn children tasks to map over its inputs. We
232 # detect this case by checking for:
233 # - upstream edges that are `mapped`
234 # - no `map_index` (which indicates that this is the child task, not the parent)
235 mapped = any([e.mapped for e in upstream_states]) and map_index is None
236 task_inputs = {} # type: Dict[str, Any]
237
238 try:
239 # initialize the run
240 state, context = self.initialize_run(state, context)
241
242 # run state transformation pipeline
243 with prefect.context(context):
244
245 if prefect.context.get("task_loop_count") is None:
246 self.logger.info(
247 "Task '{name}': Starting task run...".format(
248 name=context["task_full_name"]
249 )
250 )
251
252 # check to make sure the task is in a pending state
253 state = self.check_task_is_ready(state)
254
255 # check if the task has reached its scheduled time
256 state = self.check_task_reached_start_time(state)
257
258 # Tasks never run if the upstream tasks haven't finished
259 state = self.check_upstream_finished(
260 state, upstream_states=upstream_states
261 )
262
263 # check if any upstream tasks skipped (and if we need to skip)
264 state = self.check_upstream_skipped(
265 state, upstream_states=upstream_states
266 )
267
268 # populate / hydrate all result objects
269 state, upstream_states = self.load_results(
270 state=state, upstream_states=upstream_states
271 )
272
273 # if the task is mapped, process the mapped children and exit
274 if mapped:
275 state = self.run_mapped_task(
276 state=state,
277 upstream_states=upstream_states,
278 context=context,
279 executor=executor,
280 )
281
282 state = self.wait_for_mapped_task(state=state, executor=executor)
283
284 self.logger.debug(
285 "Task '{name}': task has been mapped; ending run.".format(
286 name=context["task_full_name"]
287 )
288 )
289 raise ENDRUN(state)
290
291 # retrieve task inputs from upstream and also explicitly passed inputs
292 task_inputs = self.get_task_inputs(
293 state=state, upstream_states=upstream_states
294 )
295
296 if self.task.target:
297 # check to see if there is a Result at the task's target
298 state = self.check_target(state, inputs=task_inputs)
299 else:
300 # check to see if the task has a cached result
301 state = self.check_task_is_cached(state, inputs=task_inputs)
302
303 # check if the task's trigger passes
304 # triggers can raise Pauses, which require task_inputs to be available for caching
305 # so we run this after the previous step
306 state = self.check_task_trigger(state, upstream_states=upstream_states)
307
308 # set the task state to running
309 state = self.set_task_to_running(state, inputs=task_inputs)
310
311 # run the task
312 state = self.get_task_run_state(
313 state, inputs=task_inputs, timeout_handler=executor.timeout_handler
314 )
315
316 # cache the output, if appropriate
317 state = self.cache_result(state, inputs=task_inputs)
318
319 # check if the task needs to be retried
320 state = self.check_for_retry(state, inputs=task_inputs)
321
322 state = self.check_task_is_looping(
323 state,
324 inputs=task_inputs,
325 upstream_states=upstream_states,
326 context=context,
327 executor=executor,
328 )
329
330 # for pending signals, including retries and pauses we need to make sure the
331 # task_inputs are set
332 except (ENDRUN, signals.PrefectStateSignal) as exc:
333 exc.state.cached_inputs = task_inputs or {}
334 state = exc.state
335 except RecursiveCall as exc:
336 raise exc
337
338 except Exception as exc:
339 msg = "Task '{name}': unexpected error while running task: {exc}".format(
340 name=context["task_full_name"], exc=repr(exc)
341 )
342 self.logger.exception(msg)
343 state = Failed(message=msg, result=exc, cached_inputs=task_inputs)
344 if prefect.context.get("raise_on_exception"):
345 raise exc
346
347 # to prevent excessive repetition of this log
348 # since looping relies on recursively calling self.run
349 # TODO: figure out a way to only log this one single time instead of twice
350 if prefect.context.get("task_loop_count") is None:
351 # wrapping this final log in prefect.context(context) ensures
352 # that any run-context, including task-run-ids, are respected
353 with prefect.context(context):
354 self.logger.info(
355 "Task '{name}': finished task run for task with final state: '{state}'".format(
356 name=context["task_full_name"], state=type(state).__name__
357 )
358 )
359
360 return state
361
362 @call_state_handlers
363 def check_upstream_finished(
364 self, state: State, upstream_states: Dict[Edge, State]
365 ) -> State:
366 """
367 Checks if the upstream tasks have all finished.
368
369 Args:
370 - state (State): the current state of this task
371 - upstream_states (Dict[Edge, Union[State, List[State]]]): the upstream states
372
373 Returns:
374 - State: the state of the task after running the check
375
376 Raises:
377 - ENDRUN: if upstream tasks are not finished.
378 """
379 all_states = set() # type: Set[State]
380 for edge, upstream_state in upstream_states.items():
381 # if the upstream state is Mapped, and this task is also mapped,
382 # we want each individual child to determine if it should
383 # proceed or not based on its upstream parent in the mapping
384 if isinstance(upstream_state, Mapped) and not edge.mapped:
385 all_states.update(upstream_state.map_states)
386 else:
387 all_states.add(upstream_state)
388
389 if not all(s.is_finished() for s in all_states):
390 self.logger.debug(
391 "Task '{name}': not all upstream states are finished; ending run.".format(
392 name=prefect.context.get("task_full_name", self.task.name)
393 )
394 )
395 raise ENDRUN(state)
396 return state
397
398 @call_state_handlers
399 def check_upstream_skipped(
400 self, state: State, upstream_states: Dict[Edge, State]
401 ) -> State:
402 """
403 Checks if any of the upstream tasks have skipped.
404
405 Args:
406 - state (State): the current state of this task
407 - upstream_states (Dict[Edge, State]): the upstream states
408
409 Returns:
410 - State: the state of the task after running the check
411 """
412
413 all_states = set() # type: Set[State]
414 for edge, upstream_state in upstream_states.items():
415
416 # if the upstream state is Mapped, and this task is also mapped,
417 # we want each individual child to determine if it should
418 # skip or not based on its upstream parent in the mapping
419 if isinstance(upstream_state, Mapped) and not edge.mapped:
420 all_states.update(upstream_state.map_states)
421 else:
422 all_states.add(upstream_state)
423
424 if self.task.skip_on_upstream_skip and any(s.is_skipped() for s in all_states):
425 self.logger.debug(
426 "Task '{name}': Upstream states were skipped; ending run.".format(
427 name=prefect.context.get("task_full_name", self.task.name)
428 )
429 )
430 raise ENDRUN(
431 state=Skipped(
432 message=(
433 "Upstream task was skipped; if this was not the intended "
434 "behavior, consider changing `skip_on_upstream_skip=False` "
435 "for this task."
436 )
437 )
438 )
439 return state
440
441 @call_state_handlers
442 def check_task_trigger(
443 self, state: State, upstream_states: Dict[Edge, State]
444 ) -> State:
445 """
446 Checks if the task's trigger function passes.
447
448 Args:
449 - state (State): the current state of this task
450 - upstream_states (Dict[Edge, Union[State, List[State]]]): the upstream states
451
452 Returns:
453 - State: the state of the task after running the check
454
455 Raises:
456 - ENDRUN: if the trigger raises an error
457 """
458 try:
459 if not self.task.trigger(upstream_states):
460 raise signals.TRIGGERFAIL(message="Trigger failed")
461
462 except signals.PrefectStateSignal as exc:
463
464 self.logger.debug(
465 "Task '{name}': {signal} signal raised during execution.".format(
466 name=prefect.context.get("task_full_name", self.task.name),
467 signal=type(exc).__name__,
468 )
469 )
470 if prefect.context.get("raise_on_exception"):
471 raise exc
472 raise ENDRUN(exc.state)
473
474 # Exceptions are trapped and turned into TriggerFailed states
475 except Exception as exc:
476 self.logger.exception(
477 "Task '{name}': unexpected error while evaluating task trigger: {exc}".format(
478 exc=repr(exc),
479 name=prefect.context.get("task_full_name", self.task.name),
480 )
481 )
482 if prefect.context.get("raise_on_exception"):
483 raise exc
484 raise ENDRUN(
485 TriggerFailed(
486 "Unexpected error while checking task trigger: {}".format(
487 repr(exc)
488 ),
489 result=exc,
490 )
491 )
492
493 return state
494
495 @call_state_handlers
496 def check_task_is_ready(self, state: State) -> State:
497 """
498 Checks to make sure the task is ready to run (Pending or Mapped).
499
500 Args:
501 - state (State): the current state of this task
502
503 Returns:
504 - State: the state of the task after running the check
505
506 Raises:
507 - ENDRUN: if the task is not ready to run
508 """
509
510 # the task is ready
511 if state.is_pending():
512 return state
513
514 # the task is mapped, in which case we still proceed so that the child
515 # task runs are generated
516 elif state.is_mapped():
517 self.logger.debug(
518 "Task '{name}': task is mapped, but run will proceed so children are generated.".format(
519 name=prefect.context.get("task_full_name", self.task.name)
520 )
521 )
522 return state
523
524 # this task is already running
525 elif state.is_running():
526 self.logger.debug(
527 "Task '{name}': task is already running.".format(
528 name=prefect.context.get("task_full_name", self.task.name)
529 )
530 )
531 raise ENDRUN(state)
532
533 elif state.is_cached():
534 return state
535
536 # this task is already finished
537 elif state.is_finished():
538 self.logger.debug(
539 "Task '{name}': task is already finished.".format(
540 name=prefect.context.get("task_full_name", self.task.name)
541 )
542 )
543 raise ENDRUN(state)
544
545 # this task is not pending
546 else:
547 self.logger.debug(
548 "Task '{name}' is not ready to run or state was unrecognized ({state}).".format(
549 name=prefect.context.get("task_full_name", self.task.name),
550 state=state,
551 )
552 )
553 raise ENDRUN(state)
554
555 @call_state_handlers
556 def check_task_reached_start_time(self, state: State) -> State:
557 """
558 Checks if a task is in a Scheduled state and, if it is, ensures that the scheduled
559 time has been reached. Note: Scheduled states include Retry states. Scheduled
560 states with no start time (`start_time = None`) are never considered ready;
561 they must be manually placed in another state.
562
563 Args:
564 - state (State): the current state of this task
565
566 Returns:
567 - State: the state of the task after performing the check
568
569 Raises:
570 - ENDRUN: if the task is Scheduled with a future scheduled time
571 """
572 if isinstance(state, Scheduled):
573 # handle case where no start_time is set
574 if state.start_time is None:
575 self.logger.debug(
576 "Task '{name}' is scheduled without a known start_time; ending run.".format(
577 name=prefect.context.get("task_full_name", self.task.name)
578 )
579 )
580 raise ENDRUN(state)
581
582 # handle case where start time is in the future
583 elif state.start_time and state.start_time > pendulum.now("utc"):
584 self.logger.debug(
585 "Task '{name}': start_time has not been reached; ending run.".format(
586 name=prefect.context.get("task_full_name", self.task.name)
587 )
588 )
589 raise ENDRUN(state)
590
591 return state
592
593 def get_task_inputs(
594 self, state: State, upstream_states: Dict[Edge, State]
595 ) -> Dict[str, Result]:
596 """
597 Given the task's current state and upstream states, generates the inputs for this task.
598 Upstream state result values are used. If the current state has `cached_inputs`, they
599 will override any upstream values.
600
601 Args:
602 - state (State): the task's current state.
603 - upstream_states (Dict[Edge, State]): the upstream states
604
605 Returns:
606 - Dict[str, Result]: the task inputs
607
608 """
609 task_inputs = {} # type: Dict[str, Result]
610
611 for edge, upstream_state in upstream_states.items():
612 # construct task inputs
613 if edge.key is not None:
614 task_inputs[edge.key] = upstream_state._result # type: ignore
615
616 if state.is_pending() and state.cached_inputs:
617 task_inputs.update(
618 {
619 k: r
620 for k, r in state.cached_inputs.items()
621 if task_inputs.get(k, NoResult) == NoResult
622 }
623 )
624
625 return task_inputs
626
627 def load_results(
628 self, state: State, upstream_states: Dict[Edge, State]
629 ) -> Tuple[State, Dict[Edge, State]]:
630 """
631 Given the task's current state and upstream states, populates all relevant result objects for this task run.
632
633 Args:
634 - state (State): the task's current state.
635 - upstream_states (Dict[Edge, State]): the upstream states
636
637 Returns:
638 - Tuple[State, dict]: a tuple of (state, upstream_states)
639
640 """
641 return state, upstream_states
642
643 @call_state_handlers
644 def check_target(self, state: State, inputs: Dict[str, Result]) -> State:
645 """
646 Checks if a Result exists at the task's target.
647
648 Args:
649 - state (State): the current state of this task
650 - inputs (Dict[str, Result]): a dictionary of inputs whose keys correspond
651 to the task's `run()` arguments.
652
653 Returns:
654 - State: the state of the task after running the check
655 """
656 result = self.result
657 target = self.task.target
658
659 if result and target:
660 if result.exists(target, **prefect.context):
661 new_res = result.read(target.format(**prefect.context))
662 cached_state = Cached(
663 result=new_res,
664 cached_inputs=inputs,
665 cached_result_expiration=None,
666 cached_parameters=prefect.context.get("parameters"),
667 message=f"Result found at task target {target}",
668 )
669 return cached_state
670
671 return state
672
673 @call_state_handlers
674 def check_task_is_cached(self, state: State, inputs: Dict[str, Result]) -> State:
675 """
676 Checks if task is cached and whether the cache is still valid.
677
678 Args:
679 - state (State): the current state of this task
680 - inputs (Dict[str, Result]): a dictionary of inputs whose keys correspond
681 to the task's `run()` arguments.
682
683 Returns:
684 - State: the state of the task after running the check
685
686 Raises:
687 - ENDRUN: if the task is not ready to run
688 """
689 if state.is_cached():
690 assert isinstance(state, Cached) # mypy assert
691 sanitized_inputs = {key: res.value for key, res in inputs.items()}
692 if self.task.cache_validator(
693 state, sanitized_inputs, prefect.context.get("parameters")
694 ):
695 return state
696 else:
697 state = Pending("Cache was invalid; ready to run.")
698
699 if self.task.cache_for is not None:
700 candidate_states = []
701 if prefect.context.get("caches"):
702 candidate_states = prefect.context.caches.get(
703 self.task.cache_key or self.task.name, []
704 )
705 sanitized_inputs = {key: res.value for key, res in inputs.items()}
706 for candidate in candidate_states:
707 if self.task.cache_validator(
708 candidate, sanitized_inputs, prefect.context.get("parameters")
709 ):
710 return candidate
711
712 if self.task.cache_for is not None:
713 self.logger.warning(
714 "Task '{name}': can't use cache because it "
715 "is now invalid".format(
716 name=prefect.context.get("task_full_name", self.task.name)
717 )
718 )
719 return state or Pending("Cache was invalid; ready to run.")
720
721 def run_mapped_task(
722 self,
723 state: State,
724 upstream_states: Dict[Edge, State],
725 context: Dict[str, Any],
726 executor: "prefect.engine.executors.Executor",
727 ) -> State:
728 """
729 If the task is being mapped, submits child tasks for execution. Returns a `Mapped` state.
730
731 Args:
732 - state (State): the current task state
733 - upstream_states (Dict[Edge, State]): the upstream states
734 - context (dict, optional): prefect Context to use for execution
735 - executor (Executor): executor to use when performing computation
736
737 Returns:
738 - State: the state of the task after running the check
739
740 Raises:
741 - ENDRUN: if the current state is not `Running`
742 """
743
744 map_upstream_states = []
745
746 # we don't know how long the iterables are, but we want to iterate until we reach
747 # the end of the shortest one
748 counter = itertools.count()
749
750 # loop until an IndexError below signals that the shortest iterable is exhausted
751 while upstream_states:
752 i = next(counter)
753 states = {}
754
755 try:
756
757 for edge, upstream_state in upstream_states.items():
758
759 # if the edge is not mapped over, then we take its state
760 if not edge.mapped:
761 states[edge] = upstream_state
762
763 # if the edge is mapped and the upstream state is Mapped, then we are mapping
764 # over a mapped task. In this case, we take the appropriately-indexed upstream
765 # state from the upstream tasks's `Mapped.map_states` array.
766 # Note that these "states" might actually be futures at this time; we aren't
767 # blocking until they finish.
768 elif edge.mapped and upstream_state.is_mapped():
769 states[edge] = upstream_state.map_states[i] # type: ignore
770
771 # Otherwise, we are mapping over the result of a "vanilla" task. In this
772 # case, we create a copy of the upstream state but set the result to the
773 # appropriately-indexed item from the upstream task's `State.result`
774 # array.
775 else:
776 states[edge] = copy.copy(upstream_state)
777
778 # if the current state is already Mapped, then we might be executing
779 # a re-run of the mapping pipeline. In that case, the upstream states
780 # might not have `result` attributes (as any required results could be
781 # in the `cached_inputs` attribute of one of the child states).
782 # Therefore, we only try to get a result if EITHER this task's
783 # state is not already mapped OR the upstream result is not None.
784 if not state.is_mapped() or upstream_state._result != NoResult:
785 if not hasattr(upstream_state.result, "__getitem__"):
786 raise TypeError(
787 "Cannot map over unsubscriptable object of type {t}: {preview}...".format(
788 t=type(upstream_state.result),
789 preview=repr(upstream_state.result)[:10],
790 )
791 )
792 upstream_result = upstream_state._result.from_value( # type: ignore
793 upstream_state.result[i]
794 )
795 states[edge].result = upstream_result
796 elif state.is_mapped():
797 if i >= len(state.map_states): # type: ignore
798 raise IndexError()
799
800 # only add this iteration if we made it through all iterables
801 map_upstream_states.append(states)
802
803 # index error means we reached the end of the shortest iterable
804 except IndexError:
805 break
806
807 def run_fn(
808 state: State, map_index: int, upstream_states: Dict[Edge, State]
809 ) -> State:
810 map_context = context.copy()
811 map_context.update(map_index=map_index)
812 with prefect.context(self.context):
813 return self.run(
814 upstream_states=upstream_states,
815 # if we set the state here, then it will not be processed by `initialize_run()`
816 state=state,
817 context=map_context,
818 executor=executor,
819 )
820
821 # generate initial states, if available
822 if isinstance(state, Mapped):
823 initial_states = list(state.map_states) # type: List[Optional[State]]
824 else:
825 initial_states = []
826 initial_states.extend([None] * (len(map_upstream_states) - len(initial_states)))
827
828 current_state = Mapped(
829 message="Preparing to submit {} mapped tasks.".format(len(initial_states)),
830 map_states=initial_states, # type: ignore
831 )
832 state = self.handle_state_change(old_state=state, new_state=current_state)
833 if state is not current_state:
834 return state
835
836 # map over the initial states, a counter representing the map_index, and also the mapped upstream states
837 map_states = executor.map(
838 run_fn, initial_states, range(len(map_upstream_states)), map_upstream_states
839 )
840
841 self.logger.debug(
842 "{} mapped tasks submitted for execution.".format(len(map_states))
843 )
844 new_state = Mapped(
845 message="Mapped tasks submitted for execution.", map_states=map_states
846 )
847 return self.handle_state_change(old_state=state, new_state=new_state)
848
849 @call_state_handlers
850 def wait_for_mapped_task(
851 self, state: State, executor: "prefect.engine.executors.Executor"
852 ) -> State:
853 """
854 Blocks until a mapped state's children have finished running.
855
856 Args:
857 - state (State): the current `Mapped` state
858 - executor (Executor): the run's executor
859
860 Returns:
861 - State: the new state
862 """
863 if state.is_mapped():
864 assert isinstance(state, Mapped) # mypy assert
865 state.map_states = executor.wait(state.map_states)
866 return state
867
868 @call_state_handlers
869 def set_task_to_running(self, state: State, inputs: Dict[str, Result]) -> State:
870 """
871 Sets the task to running
872
873 Args:
874 - state (State): the current state of this task
875 - inputs (Dict[str, Result]): a dictionary of inputs whose keys correspond
876 to the task's `run()` arguments.
877
878 Returns:
879 - State: the state of the task after running the check
880
881 Raises:
882 - ENDRUN: if the task is not ready to run
883 """
884 if not state.is_pending():
885 self.logger.debug(
886 "Task '{name}': can't set state to Running because it "
887 "isn't Pending; ending run.".format(
888 name=prefect.context.get("task_full_name", self.task.name)
889 )
890 )
891 raise ENDRUN(state)
892
893 new_state = Running(message="Starting task run.", cached_inputs=inputs)
894 return new_state
895
896 @run_with_heartbeat
897 @call_state_handlers
898 def get_task_run_state(
899 self,
900 state: State,
901 inputs: Dict[str, Result],
902 timeout_handler: Optional[Callable] = None,
903 ) -> State:
904 """
905 Runs the task and traps any signals or errors it raises.
906 Also checkpoints the result of a successful task, if `task.checkpoint` is `True`.
907
908 Args:
909 - state (State): the current state of this task
910 - inputs (Dict[str, Result], optional): a dictionary of inputs whose keys correspond
911 to the task's `run()` arguments.
912 - timeout_handler (Callable, optional): function for timing out
913 task execution, with call signature `handler(fn, *args, **kwargs)`. Defaults to
914 `prefect.utilities.executors.timeout_handler`
915
916 Returns:
917 - State: the state of the task after running the check
918
919 Raises:
920 - signals.PAUSE: if the task raises PAUSE
921 - ENDRUN: if the task is not ready to run
922 """
923 if not state.is_running():
924 self.logger.debug(
925 "Task '{name}': can't run task because it's not in a "
926 "Running state; ending run.".format(
927 name=prefect.context.get("task_full_name", self.task.name)
928 )
929 )
930
931 raise ENDRUN(state)
932
933 value = None
934 try:
935 self.logger.debug(
936 "Task '{name}': Calling task.run() method...".format(
937 name=prefect.context.get("task_full_name", self.task.name)
938 )
939 )
940 timeout_handler = (
941 timeout_handler or prefect.utilities.executors.timeout_handler
942 )
943 raw_inputs = {k: r.value for k, r in inputs.items()}
944
945 if getattr(self.task, "log_stdout", False):
946 with redirect_stdout(prefect.utilities.logging.RedirectToLog(self.logger)): # type: ignore
947 value = timeout_handler(
948 self.task.run, timeout=self.task.timeout, **raw_inputs
949 )
950 else:
951 value = timeout_handler(
952 self.task.run, timeout=self.task.timeout, **raw_inputs
953 )
954
955 except KeyboardInterrupt:
956 self.logger.debug("Interrupt signal raised, cancelling task run.")
957 state = Cancelled(message="Interrupt signal raised, cancelling task run.")
958 return state
959
960 # inform user of timeout
961 except TimeoutError as exc:
962 if prefect.context.get("raise_on_exception"):
963 raise exc
964 state = TimedOut(
965 "Task timed out during execution.", result=exc, cached_inputs=inputs
966 )
967 return state
968
969 except signals.LOOP as exc:
970 new_state = exc.state
971 assert isinstance(new_state, Looped)
972 new_state.result = self.result.from_value(value=new_state.result)
973 new_state.cached_inputs = inputs
974 new_state.message = exc.state.message or "Task is looping ({})".format(
975 new_state.loop_count
976 )
977 return new_state
978
979 ## checkpoint tasks if a result is present, except for when the user has opted out by disabling checkpointing
980 if (
981 prefect.context.get("checkpointing") is True
982 and self.task.checkpoint is not False
983 and value is not None
984 ):
985 try:
986 result = self.result.write(value, filename="output", **prefect.context)
987 except NotImplementedError:
988 result = self.result.from_value(value=value)
989 else:
990 result = self.result.from_value(value=value)
991
992 state = Success(
993 result=result, message="Task run succeeded.", cached_inputs=inputs
994 )
995 return state
996
997 @call_state_handlers
998 def cache_result(self, state: State, inputs: Dict[str, Result]) -> State:
999 """
1000 Caches the result of a successful task, if appropriate. Alternatively,
1001 if the task is failed, caches the inputs.
1002
1003 Tasks are cached if:
1004 - task.cache_for is not None
1005 - the task state is Successful
1006 - the task state is not Skipped (which is a subclass of Successful)
1007
1008 Args:
1009 - state (State): the current state of this task
1010 - inputs (Dict[str, Result], optional): a dictionary of inputs whose keys correspond
1011 to the task's `run()` arguments.
1012
1013 Returns:
1014 - State: the state of the task after running the check
1015
1016 """
1017 state.cached_inputs = inputs
1018
1019 if (
1020 state.is_successful()
1021 and not state.is_skipped()
1022 and self.task.cache_for is not None
1023 ):
1024 expiration = pendulum.now("utc") + self.task.cache_for
1025 cached_state = Cached(
1026 result=state._result,
1027 cached_inputs=inputs,
1028 cached_result_expiration=expiration,
1029 cached_parameters=prefect.context.get("parameters"),
1030 message=state.message,
1031 )
1032 return cached_state
1033
1034 return state
1035
1036 @call_state_handlers
1037 def check_for_retry(self, state: State, inputs: Dict[str, Result]) -> State:
1038 """
1039 Checks to see if a FAILED task should be retried.
1040
1041 Args:
1042 - state (State): the current state of this task
1043 - inputs (Dict[str, Result], optional): a dictionary of inputs whose keys correspond
1044 to the task's `run()` arguments.
1045
1046 Returns:
1047 - State: the state of the task after running the check
1048 """
1049 if state.is_failed():
1050 run_count = prefect.context.get("task_run_count", 1)
1051 if prefect.context.get("task_loop_count") is not None:
1052
1053 loop_result = self.result.from_value(
1054 value=prefect.context.get("task_loop_result")
1055 )
1056
1057 ## checkpoint tasks if a result is present, except for when the user has opted out by disabling checkpointing
1058 if (
1059 prefect.context.get("checkpointing") is True
1060 and self.task.checkpoint is not False
1061 and loop_result.value is not None
1062 ):
1063 try:
1064 value = prefect.context.get("task_loop_result")
1065 loop_result = self.result.write(
1066 value, filename="output", **prefect.context
1067 )
1068 except NotImplementedError:
1069 pass
1070
1071 loop_context = {
1072 "_loop_count": PrefectResult(
1073 location=json.dumps(prefect.context["task_loop_count"]),
1074 ),
1075 "_loop_result": loop_result,
1076 }
1077 inputs.update(loop_context)
1078 if run_count <= self.task.max_retries:
1079 start_time = pendulum.now("utc") + self.task.retry_delay
1080 msg = "Retrying Task (after attempt {n} of {m})".format(
1081 n=run_count, m=self.task.max_retries + 1
1082 )
1083 retry_state = Retrying(
1084 start_time=start_time,
1085 cached_inputs=inputs,
1086 message=msg,
1087 run_count=run_count,
1088 )
1089 return retry_state
1090
1091 return state
1092
1093 def check_task_is_looping(
1094 self,
1095 state: State,
1096 inputs: Dict[str, Result] = None,
1097 upstream_states: Dict[Edge, State] = None,
1098 context: Dict[str, Any] = None,
1099 executor: "prefect.engine.executors.Executor" = None,
1100 ) -> State:
1101 """
1102 Checks to see if the task is in a `Looped` state and, if so, reruns the pipeline with an incremented `loop_count`.
1103
1104 Args:
1105 - state (State, optional): initial `State` to begin task run from;
1106 defaults to `Pending()`
1107 - inputs (Dict[str, Result], optional): a dictionary of inputs whose keys correspond
1108 to the task's `run()` arguments.
1109 - upstream_states (Dict[Edge, State]): a dictionary
1110 representing the states of any tasks upstream of this one. The keys of the
1111 dictionary should correspond to the edges leading to the task.
1112 - context (dict, optional): prefect Context to use for execution
1113 - executor (Executor, optional): executor to use when performing
1114 computation; defaults to the executor specified in your prefect configuration
1115
1116 Returns:
1117 - `State` object representing the final post-run state of the Task
1118 """
1119 if state.is_looped():
1120 assert isinstance(state, Looped) # mypy assert
1121 assert isinstance(context, dict) # mypy assert
1122 msg = "Looping task (on loop index {})".format(state.loop_count)
1123 context.update(
1124 {
1125 "task_loop_result": state.result,
1126 "task_loop_count": state.loop_count + 1,
1127 }
1128 )
1129 context.update(task_run_version=prefect.context.get("task_run_version"))
1130 new_state = Pending(message=msg, cached_inputs=inputs)
1131 raise RecursiveCall(
1132 self.run,
1133 self,
1134 new_state,
1135 upstream_states=upstream_states,
1136 context=context,
1137 executor=executor,
1138 )
1139
1140 return state
1141
[end of src/prefect/engine/task_runner.py]
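As a companion to the `state_handlers` contract documented on `TaskRunner` above (each handler receives the runner, the old state, and the new state, and may return a replacement state or `None`), here is a small hypothetical sketch; the task and handler names are invented for illustration.

from prefect import task
from prefect.engine.task_runner import TaskRunner

@task
def say_hello():
    return "hello"

def log_transitions(runner, old_state, new_state):
    # print each transition; returning new_state keeps the transition unchanged
    print("{}: {} -> {}".format(
        runner.task.name, type(old_state).__name__, type(new_state).__name__
    ))
    return new_state

final_state = TaskRunner(task=say_hello, state_handlers=[log_transitions]).run()
assert final_state.is_successful()
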
[start of src/prefect/environments/execution/dask/cloud_provider.py]
1 from typing import Any, Callable, Dict, List, Type
2 from urllib.parse import urlparse
3
4 import prefect
5 from distributed.deploy.cluster import Cluster
6 from distributed.security import Security
7 from prefect import Client
8 from prefect.environments.execution.dask.remote import RemoteDaskEnvironment
9
10
11 class DaskCloudProviderEnvironment(RemoteDaskEnvironment):
12 """
13 DaskCloudProviderEnvironment creates Dask clusters using the Dask Cloud Provider
14 project. For each flow run, a new Dask cluster will be dynamically created and the
15 flow will run using a `RemoteDaskEnvironment` with the Dask scheduler address
16 from the newly created Dask cluster. You can specify the number of Dask workers
17 manually (for example, passing the kwarg `n_workers`) or enable adaptive mode by
18 passing `adaptive_min_workers` and, optionally, `adaptive_max_workers`. This
19 environment aims to provide a very easy path to Dask scalability for users of
20 cloud platforms, like AWS.
21
22 **NOTE:** AWS Fargate Task (not Prefect Task) startup time can be slow, depending
23 on docker image size. Total startup time for a Dask scheduler and workers can
24 be several minutes. This environment is a much better fit for production
25 deployments of scheduled Flows where there's little sensitivity to startup
26 time. `DaskCloudProviderEnvironment` is a particularly good fit for automated
27 deployment of Flows in a CI/CD pipeline where the infrastructure for each Flow
28 should be as independent as possible, e.g. each Flow could have its own docker
29 image, dynamically create the Dask cluster to run on, etc. However, for
30 development and interactive testing, creating a Dask cluster manually with Dask
31 Cloud Provider and then using `RemoteDaskEnvironment` or just `DaskExecutor`
32 with your flows will result in a much better development experience.
33
34 (Dask Cloud Provider currently only supports AWS using either Fargate or ECS.
35 Support for AzureML is coming soon.)
36
37 *IMPORTANT* By default, Dask Cloud Provider may create a Dask cluster in some
38 environments (e.g. Fargate) that is accessible via a public IP, without any
39 authentication, and configured to NOT encrypt network traffic. Please be
40 conscious of security issues if you test this environment. (Also see pull
41 requests [85](https://github.com/dask/dask-cloudprovider/pull/85) and
42 [91](https://github.com/dask/dask-cloudprovider/pull/91) in the Dask Cloud
43 Provider project.)
44
45 Args:
46 - provider_class (class): Class of a provider from the Dask Cloud Provider
47 projects. Current supported options are `ECSCluster` and `FargateCluster`.
48 - adaptive_min_workers (int, optional): Minimum number of workers for adaptive
49 mode. If this value is None, then adaptive mode will not be used and you
50 should pass `n_workers` or the appropriate kwarg for the provider class you
51 are using.
52 - adaptive_max_workers (int, optional): Maximum number of workers for adaptive
53 mode.
54 - security (Type[Security], optional): a Dask Security object from `distributed.security.Security`.
55 Use this to connect to a Dask cluster that is enabled with TLS encryption.
56 For more on using TLS with Dask see https://distributed.dask.org/en/latest/tls.html
57 - executor_kwargs (dict, optional): a dictionary of kwargs to be passed to
58 the executor; defaults to an empty dictionary
59 - labels (List[str], optional): a list of labels, which are arbitrary string identifiers used by Prefect
60 Agents when polling for work
61 - on_execute (Callable[[Dict[str, Any], Dict[str, Any]], None], optional): a function callback which will
62 be called before the flow begins to run. The callback function can examine the Flow run
63 parameters and modify kwargs to be passed to the Dask Cloud Provider class's constructor prior
64 to launching the Dask cluster for the Flow run. This allows for dynamically sizing the cluster based
65 on the Flow run parameters, e.g. setting n_workers. The callback function's signature should be:
66 `def on_execute(parameters: Dict[str, Any], provider_kwargs: Dict[str, Any]) -> None:`
67 The callback function may modify provider_kwargs (e.g. `provider_kwargs["n_workers"] = 3`) and any
68 relevant changes will be used when creating the Dask cluster via a Dask Cloud Provider class.
69 - on_start (Callable, optional): a function callback which will be called before the flow begins to run
70 - on_exit (Callable, optional): a function callback which will be called after the flow finishes its run
71 - **kwargs (dict, optional): additional keyword arguments to pass to boto3 for
72 `register_task_definition` and `run_task`
73 """
74
75 def __init__( # type: ignore
76 self,
77 provider_class: Type[Cluster],
78 adaptive_min_workers: int = None,
79 adaptive_max_workers: int = None,
80 security: Security = None,
81 executor_kwargs: Dict[str, Any] = None,
82 labels: List[str] = None,
83 on_execute: Callable[[Dict[str, Any], Dict[str, Any]], None] = None,
84 on_start: Callable = None,
85 on_exit: Callable = None,
86 **kwargs
87 ) -> None:
88 self._provider_class = provider_class
89 self._adaptive_min_workers = adaptive_min_workers
90 self._adaptive_max_workers = adaptive_max_workers
91 self._on_execute = on_execute
92 self._provider_kwargs = kwargs
93 if "skip_cleanup" not in self._provider_kwargs:
94 # Prefer this default (if not provided) to avoid deregistering task definitions
95 # See this issue in Dask Cloud Provider: https://github.com/dask/dask-cloudprovider/issues/94
96 self._provider_kwargs["skip_cleanup"] = True
97 self._security = security
98 if self._security:
99 # We'll use the security config object both for our Dask Client connection *and*
100 # for the particular Dask Cloud Provider (e.g. Fargate) to use with *its* Dask
101 # Client when it connects to the scheduler after cluster creation. So we
102 # put it in _provider_kwargs so it gets passed to the Dask Cloud Provider's constructor
103 self._provider_kwargs["security"] = self._security
104 self.cluster = None
105 super().__init__(
106 address="", # The scheduler address will be set after cluster creation
107 executor_kwargs=executor_kwargs,
108 labels=labels,
109 on_start=on_start,
110 on_exit=on_exit,
111 security=self._security,
112 )
113
114 @property
115 def dependencies(self) -> list:
116 return ["dask_cloudprovider"]
117
118 def _create_dask_cluster(self) -> None:
119 self.logger.info("Creating Dask cluster using {}".format(self._provider_class))
120 self.cluster = self._provider_class(**self._provider_kwargs)
121 if self.cluster and self.cluster.scheduler and self.cluster.scheduler.address:
122 self.logger.info(
123 "Dask cluster created. Scheduler address: {} Dashboard: http://{}:8787 "
124 "(unless port was changed from default of 8787)".format(
125 self.cluster.scheduler.address,
126 urlparse(self.cluster.scheduler.address).hostname,
127 ) # TODO submit PR to Dask Cloud Provider allowing discovery of dashboard port
128 )
129
130 self.executor_kwargs["address"] = self.cluster.scheduler.address # type: ignore
131 else:
132 if self.cluster:
133 self.cluster.close()
134 raise Exception(
135 "Unable to determine the Dask scheduler address after cluster creation. "
136 "Tearing down cluster and terminating setup."
137 )
138 if self._adaptive_min_workers:
139 self.logger.info(
140 "Enabling adaptive mode with min_workers={} max_workers={}".format(
141 self._adaptive_min_workers, self._adaptive_max_workers
142 )
143 )
144 self.cluster.adapt( # type: ignore
145 minimum=self._adaptive_min_workers, maximum=self._adaptive_max_workers
146 )
147
148 def execute( # type: ignore
149 self, storage: "Storage", flow_location: str, **kwargs: Any # type: ignore
150 ) -> None:
151 flow_run_info = None
152 flow_run_id = prefect.context.get("flow_run_id")
153 if self._on_execute:
154 # If an on_execute Callable has been provided, retrieve the flow run parameters
155 # and then allow the Callable a chance to update _provider_kwargs. This allows
156 # better sizing of the cluster resources based on parameters for this Flow run.
157 try:
158 client = Client()
159 flow_run_info = client.get_flow_run_info(flow_run_id)
160 parameters = flow_run_info.parameters or {} # type: ignore
161 self._on_execute(parameters, self._provider_kwargs)
162 except Exception as exc:
163 self.logger.info(
164 "Failed to retrieve flow run info with error: {}".format(repr(exc))
165 )
166 if "image" not in self._provider_kwargs or not self._provider_kwargs.get(
167 "image"
168 ):
169 # If image is not specified, use the Flow's image so that dependencies are
170 # identical on all containers: Flow runner, Dask scheduler, and Dask workers
171 flow_id = prefect.context.get("flow_id")
172 try:
173 client = Client()
174 if not flow_id: # We've observed cases where flow_id is None
175 if not flow_run_info:
176 flow_run_info = client.get_flow_run_info(flow_run_id)
177 flow_id = flow_run_info.flow_id
178 flow_info = client.graphql(
179 """query {
180 flow(where: {id: {_eq: "%s"}}) {
181 storage
182 }
183 }"""
184 % flow_id
185 )
186 storage_info = flow_info["data"]["flow"][0]["storage"]
187 image = "{}/{}:{}".format(
188 storage_info["registry_url"],
189 storage_info["image_name"],
190 storage_info["image_tag"],
191 )
192 self.logger.info(
193 "Using Flow's Docker image for Dask scheduler & workers: {}".format(
194 image
195 )
196 )
197 self._provider_kwargs["image"] = image
198 except Exception as exc:
199 self.logger.info(
200 "Failed to retrieve flow info with error: {}".format(repr(exc))
201 )
202
203 self._create_dask_cluster()
204
205 self.logger.info(
206 "Executing on dynamically created Dask Cluster with scheduler address: {}".format(
207 self.executor_kwargs["address"]
208 )
209 )
210 super().execute(storage, flow_location, **kwargs)
211
[end of src/prefect/environments/execution/dask/cloud_provider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| PrefectHQ/prefect | 35aa1de018a983cf972c9c30a77159ac7f2de18d | Implement Depth-First Execution with Mapping
Currently each "level" of a mapped pipeline is executed before proceeding to the next level. This is undesirable especially for pipelines where it's important that each "branch" of the pipeline finish as quickly as possible.
To implement DFE, we'll need to rearrange two things:
- how mapped work gets submitted (it should start being submitted from the Flow Runner not the Task Runner)
- in order to submit work to Dask and let Dask handle the DFE scheduling, we'll want to refactor how we walk the DAG and wait to determine the width of a pipeline before we submit it (because mapping is fully dynamic we can only ascertain this width at runtime)
We'll need to be vigilant about:
- performance
- retries
- result handling
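As a rough illustration of the plan above — not Prefect's actual implementation — the idea is that the flow runner resolves the upstream result first, since the width of a mapped pipeline is only knowable at runtime, and then submits one callable per branch so the executor can drive each branch to completion independently instead of level by level. All names below (`generate_inputs`, `add_one`, `double`, `run_branch`, `run_flow`) are hypothetical, and a thread pool stands in for a Dask executor.

```python
from concurrent.futures import ThreadPoolExecutor


def generate_inputs():
    # Upstream task; the width of the mapped pipeline is only known once this has run.
    return [1, 2, 3, 4]


def add_one(x):
    return x + 1


def double(x):
    return x * 2


def run_branch(x):
    # Depth-first: the whole branch runs to completion on its own,
    # instead of waiting for every sibling at each "level".
    return double(add_one(x))


def run_flow():
    inputs = generate_inputs()  # resolve upstream first to learn the width
    with ThreadPoolExecutor() as executor:  # stand-in for a Dask executor
        futures = [executor.submit(run_branch, x) for x in inputs]
        return [f.result() for f in futures]


if __name__ == "__main__":
    print(run_flow())  # [4, 6, 8, 10]
```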
| 2020-05-24T02:51:51Z | <patch>
diff --git a/src/prefect/engine/cloud/task_runner.py b/src/prefect/engine/cloud/task_runner.py
--- a/src/prefect/engine/cloud/task_runner.py
+++ b/src/prefect/engine/cloud/task_runner.py
@@ -339,7 +339,7 @@ def run(
state: State = None,
upstream_states: Dict[Edge, State] = None,
context: Dict[str, Any] = None,
- executor: "prefect.engine.executors.Executor" = None,
+ is_mapped_parent: bool = False,
) -> State:
"""
The main endpoint for TaskRunners. Calling this method will conditionally execute
@@ -354,8 +354,8 @@ def run(
representing the states of any tasks upstream of this one. The keys of the
dictionary should correspond to the edges leading to the task.
- context (dict, optional): prefect Context to use for execution
- - executor (Executor, optional): executor to use when performing
- computation; defaults to the executor specified in your prefect configuration
+ - is_mapped_parent (bool): a boolean indicating whether this task run is the run of a parent
+ mapped task
Returns:
- `State` object representing the final post-run state of the Task
@@ -365,7 +365,7 @@ def run(
state=state,
upstream_states=upstream_states,
context=context,
- executor=executor,
+ is_mapped_parent=is_mapped_parent,
)
while (end_state.is_retrying() or end_state.is_queued()) and (
end_state.start_time <= pendulum.now("utc").add(minutes=10) # type: ignore
@@ -388,6 +388,6 @@ def run(
state=end_state,
upstream_states=upstream_states,
context=context,
- executor=executor,
+ is_mapped_parent=is_mapped_parent,
)
return end_state
diff --git a/src/prefect/engine/executors/__init__.py b/src/prefect/engine/executors/__init__.py
--- a/src/prefect/engine/executors/__init__.py
+++ b/src/prefect/engine/executors/__init__.py
@@ -8,9 +8,6 @@
has completed running
- `wait(object)`: resolves any objects returned by `executor.submit` to
their values; this function _will_ block until execution of `object` is complete
-- `map(fn, *args, upstream_states, **kwargs)`: submit function to be mapped
- over based on the edge information contained in `upstream_states`. Any "mapped" Edge
- will be converted into multiple function submissions, one for each value of the upstream mapped tasks.
Currently, the available executor options are:
diff --git a/src/prefect/engine/executors/base.py b/src/prefect/engine/executors/base.py
--- a/src/prefect/engine/executors/base.py
+++ b/src/prefect/engine/executors/base.py
@@ -1,8 +1,6 @@
import uuid
from contextlib import contextmanager
-from typing import Any, Callable, Iterator, List
-
-from prefect.utilities.executors import timeout_handler
+from typing import Any, Callable, Iterator
class Executor:
@@ -10,8 +8,6 @@ class Executor:
Base Executor class that all other executors inherit from.
"""
- timeout_handler = staticmethod(timeout_handler)
-
def __init__(self) -> None:
self.executor_id = type(self).__name__ + ": " + str(uuid.uuid4())
@@ -28,20 +24,6 @@ def start(self) -> Iterator[None]:
"""
yield
- def map(self, fn: Callable, *args: Any) -> List[Any]:
- """
- Submit a function to be mapped over its iterable arguments.
-
- Args:
- - fn (Callable): function that is being submitted for execution
- - *args (Any): arguments that the function will be mapped over
-
- Returns:
- - List[Any]: the result of computating the function over the arguments
-
- """
- raise NotImplementedError()
-
def submit(self, fn: Callable, *args: Any, **kwargs: Any) -> Any:
"""
Submit a function to the executor for execution. Returns a future-like object.
diff --git a/src/prefect/engine/executors/dask.py b/src/prefect/engine/executors/dask.py
--- a/src/prefect/engine/executors/dask.py
+++ b/src/prefect/engine/executors/dask.py
@@ -2,7 +2,7 @@
import uuid
import warnings
from contextlib import contextmanager
-from typing import TYPE_CHECKING, Any, Callable, Iterator, List, Union
+from typing import Any, Callable, Iterator, TYPE_CHECKING, Union
from prefect import context
from prefect.engine.executors.base import Executor
@@ -63,8 +63,6 @@ class name (e.g. `"distributed.LocalCluster"`), or the class itself.
your Prefect configuration.
- **kwargs: DEPRECATED
- Example:
-
Using a temporary local dask cluster:
```python
@@ -269,41 +267,6 @@ def submit(self, fn: Callable, *args: Any, **kwargs: Any) -> "Future":
fire_and_forget(future)
return future
- def map(self, fn: Callable, *args: Any, **kwargs: Any) -> List["Future"]:
- """
- Submit a function to be mapped over its iterable arguments.
-
- Args:
- - fn (Callable): function that is being submitted for execution
- - *args (Any): arguments that the function will be mapped over
- - **kwargs (Any): additional keyword arguments that will be passed to the Dask Client
-
- Returns:
- - List[Future]: a list of Future-like objects that represent each computation of
- fn(*a), where a = zip(*args)[i]
-
- """
- if not args:
- return []
-
- # import dask functions here to decrease our import times
- from distributed import fire_and_forget, worker_client
-
- dask_kwargs = self._prep_dask_kwargs()
- kwargs.update(dask_kwargs)
-
- if self.is_started and hasattr(self, "client"):
- futures = self.client.map(fn, *args, **kwargs)
- elif self.is_started:
- with worker_client(separate_thread=True) as client:
- futures = client.map(fn, *args, **kwargs)
- return client.gather(futures)
- else:
- raise ValueError("This executor has not been started.")
-
- fire_and_forget(futures)
- return futures
-
def wait(self, futures: Any) -> Any:
"""
Resolves the Future objects to their values. Blocks until the computation is complete.
@@ -331,8 +294,6 @@ class LocalDaskExecutor(Executor):
An executor that runs all functions locally using `dask` and a configurable dask scheduler. Note that
this executor is known to occasionally run tasks twice when using multi-level mapping.
- Prefect's mapping feature will not work in conjunction with setting `scheduler="processes"`.
-
Args:
- scheduler (str): The local dask scheduler to use; common options are "synchronous", "threads" and "processes". Defaults to "threads".
- **kwargs (Any): Additional keyword arguments to pass to dask config
@@ -373,28 +334,6 @@ def submit(self, fn: Callable, *args: Any, **kwargs: Any) -> "dask.delayed":
return dask.delayed(fn)(*args, **kwargs)
- def map(self, fn: Callable, *args: Any) -> List["dask.delayed"]:
- """
- Submit a function to be mapped over its iterable arguments.
-
- Args:
- - fn (Callable): function that is being submitted for execution
- - *args (Any): arguments that the function will be mapped over
-
- Returns:
- - List[dask.delayed]: the result of computating the function over the arguments
-
- """
- if self.scheduler == "processes":
- raise RuntimeError(
- "LocalDaskExecutor cannot map if scheduler='processes'. Please set to either 'synchronous' or 'threads'."
- )
-
- results = []
- for args_i in zip(*args):
- results.append(self.submit(fn, *args_i))
- return results
-
def wait(self, futures: Any) -> Any:
"""
Resolves a `dask.delayed` object to its values. Blocks until the computation is complete.
diff --git a/src/prefect/engine/executors/local.py b/src/prefect/engine/executors/local.py
--- a/src/prefect/engine/executors/local.py
+++ b/src/prefect/engine/executors/local.py
@@ -1,4 +1,4 @@
-from typing import Any, Callable, List
+from typing import Any, Callable
from prefect.engine.executors.base import Executor
@@ -23,23 +23,6 @@ def submit(self, fn: Callable, *args: Any, **kwargs: Any) -> Any:
"""
return fn(*args, **kwargs)
- def map(self, fn: Callable, *args: Any) -> List[Any]:
- """
- Submit a function to be mapped over its iterable arguments.
-
- Args:
- - fn (Callable): function that is being submitted for execution
- - *args (Any): arguments that the function will be mapped over
-
- Returns:
- - List[Any]: the result of computating the function over the arguments
-
- """
- results = []
- for args_i in zip(*args):
- results.append(fn(*args_i))
- return results
-
def wait(self, futures: Any) -> Any:
"""
Returns the results of the provided futures.
diff --git a/src/prefect/engine/flow_runner.py b/src/prefect/engine/flow_runner.py
--- a/src/prefect/engine/flow_runner.py
+++ b/src/prefect/engine/flow_runner.py
@@ -10,7 +10,6 @@
)
import pendulum
-
import prefect
from prefect.core import Edge, Flow, Task
from prefect.engine.result import Result
@@ -28,7 +27,10 @@
Success,
)
from prefect.utilities.collections import flatten_seq
-from prefect.utilities.executors import run_with_heartbeat
+from prefect.utilities.executors import (
+ run_with_heartbeat,
+ prepare_upstream_states_for_mapping,
+)
FlowRunnerInitializeResult = NamedTuple(
"FlowRunnerInitializeResult",
@@ -381,6 +383,11 @@ def get_flow_run_state(
- State: `State` representing the final post-run state of the `Flow`.
"""
+ # this dictionary is used for tracking the states of "children" mapped tasks;
+ # when running on Dask, we want to avoid serializing futures, so instead
+ # of storing child task states in the `map_states` attribute we instead store
+ # in this dictionary and only after they are resolved do we attach them to the Mapped state
+ mapped_children = dict() # type: Dict[Task, list]
if not state.is_running():
self.logger.info("Flow is not in a Running state.")
@@ -396,14 +403,19 @@ def get_flow_run_state(
with executor.start():
for task in self.flow.sorted_tasks():
-
task_state = task_states.get(task)
+
+ # if a task is a constant task, we already know its return value
+ # no need to use up resources by running it through a task runner
if task_state is None and isinstance(
task, prefect.tasks.core.constants.Constant
):
task_states[task] = task_state = Success(result=task.value)
# if the state is finished, don't run the task, just use the provided state
+ # if the state is cached / mapped, we still want to run the task runner pipeline steps
+ # to either ensure the cache is still valid / or to recreate the mapped pipeline for
+ # possible retries
if (
isinstance(task_state, State)
and task_state.is_finished()
@@ -412,7 +424,12 @@ def get_flow_run_state(
):
continue
- upstream_states = {} # type: Dict[Edge, Union[State, Iterable]]
+ upstream_states = {} # type: Dict[Edge, State]
+
+ # this dictionary is used exclusively for "reduce" tasks
+ # in particular we store the states / futures corresponding to
+ # the upstream children, and if running on Dask, let Dask resolve them at the appropriate time
+ upstream_mapped_states = {} # type: Dict[Edge, list]
# -- process each edge to the task
for edge in self.flow.edges_to(task):
@@ -420,6 +437,13 @@ def get_flow_run_state(
edge.upstream_task, Pending(message="Task state not available.")
)
+ # this checks whether the task is a "reduce" task for a mapped pipeline
+ # and if so, collects the appropriate upstream children
+ if not edge.mapped and isinstance(upstream_states[edge], Mapped):
+ upstream_mapped_states[edge] = mapped_children.get(
+ edge.upstream_task, []
+ )
+
# augment edges with upstream constants
for key, val in self.flow.constants[task].items():
edge = Edge(
@@ -432,9 +456,80 @@ def get_flow_run_state(
result=ConstantResult(value=val),
)
- # -- run the task
+ # handle mapped tasks
+ if any([edge.mapped for edge in upstream_states.keys()]):
- with prefect.context(task_full_name=task.name, task_tags=task.tags):
+ ## wait on upstream states to determine the width of the pipeline
+ ## this is the key to depth-first execution
+ upstream_states.update(
+ executor.wait(
+ {e: state for e, state in upstream_states.items()}
+ )
+ )
+
+ ## we submit the task to the task runner to determine if
+ ## we can proceed with mapping - if the new task state is not a Mapped
+ ## state then we don't proceed
+ task_states[task] = executor.wait(
+ executor.submit(
+ self.run_task,
+ task=task,
+ state=task_state, # original state
+ upstream_states=upstream_states,
+ context=dict(
+ prefect.context, **task_contexts.get(task, {})
+ ),
+ task_runner_state_handlers=task_runner_state_handlers,
+ upstream_mapped_states=upstream_mapped_states,
+ is_mapped_parent=True,
+ )
+ )
+
+ ## either way, we should now have enough resolved states to restructure
+ ## the upstream states into a list of upstream state dictionaries to iterate over
+ list_of_upstream_states = prepare_upstream_states_for_mapping(
+ task_states[task], upstream_states, mapped_children
+ )
+
+ submitted_states = []
+
+ for idx, states in enumerate(list_of_upstream_states):
+ ## if we are on a future rerun of a partially complete flow run,
+ ## there might be mapped children in a retrying state; this check
+ ## looks into the current task state's map_states for such info
+ if (
+ isinstance(task_state, Mapped)
+ and len(task_state.map_states) >= idx + 1
+ ):
+ current_state = task_state.map_states[
+ idx
+ ] # type: Optional[State]
+ elif isinstance(task_state, Mapped):
+ current_state = None
+ else:
+ current_state = task_state
+
+ ## this is where each child is submitted for actual work
+ submitted_states.append(
+ executor.submit(
+ self.run_task,
+ task=task,
+ state=current_state,
+ upstream_states=states,
+ context=dict(
+ prefect.context,
+ **task_contexts.get(task, {}),
+ map_index=idx,
+ ),
+ task_runner_state_handlers=task_runner_state_handlers,
+ upstream_mapped_states=upstream_mapped_states,
+ )
+ )
+ if isinstance(task_states.get(task), Mapped):
+ mapped_children[task] = submitted_states # type: ignore
+
+ # -- run the task
+ else:
task_states[task] = executor.submit(
self.run_task,
task=task,
@@ -442,7 +537,7 @@ def get_flow_run_state(
upstream_states=upstream_states,
context=dict(prefect.context, **task_contexts.get(task, {})),
task_runner_state_handlers=task_runner_state_handlers,
- executor=executor,
+ upstream_mapped_states=upstream_mapped_states,
)
# ---------------------------------------------
@@ -469,7 +564,9 @@ def get_flow_run_state(
all_final_states = final_states.copy()
for t, s in list(final_states.items()):
if s.is_mapped():
- s.map_states = executor.wait(s.map_states)
+ # ensure we wait for any mapped children to complete
+ if t in mapped_children:
+ s.map_states = executor.wait(mapped_children[t])
s.result = [ms.result for ms in s.map_states]
all_final_states[t] = s.map_states
@@ -540,7 +637,8 @@ def run_task(
upstream_states: Dict[Edge, State],
context: Dict[str, Any],
task_runner_state_handlers: Iterable[Callable],
- executor: "prefect.engine.executors.Executor",
+ is_mapped_parent: bool = False,
+ upstream_mapped_states: Dict[Edge, list] = None,
) -> State:
"""
@@ -556,13 +654,17 @@ def run_task(
- task_runner_state_handlers (Iterable[Callable]): A list of state change
handlers that will be provided to the task_runner, and called whenever a task changes
state.
- - executor (Executor): executor to use when performing
- computation; defaults to the executor provided in your prefect configuration
+ - is_mapped_parent (bool): a boolean indicating whether this task run is the run of a parent
+ mapped task
+ - upstream_mapped_states (Dict[Edge, list]): dictionary of upstream states corresponding to
+ mapped children dependencies
Returns:
- State: `State` representing the final post-run state of the `Flow`.
"""
+ upstream_mapped_states = upstream_mapped_states or {}
+
with prefect.context(self.context):
default_result = task.result or self.flow.result
task_runner = self.task_runner_cls(
@@ -578,7 +680,9 @@ def run_task(
# if the upstream state is Mapped, wait until its results are all available
if not edge.mapped and upstream_state.is_mapped():
assert isinstance(upstream_state, Mapped) # mypy assert
- upstream_state.map_states = executor.wait(upstream_state.map_states)
+ upstream_state.map_states = upstream_mapped_states.get(
+ edge, upstream_state.map_states
+ )
upstream_state.result = [
s.result for s in upstream_state.map_states
]
@@ -587,5 +691,5 @@ def run_task(
state=state,
upstream_states=upstream_states,
context=context,
- executor=executor,
+ is_mapped_parent=is_mapped_parent,
)
diff --git a/src/prefect/engine/task_runner.py b/src/prefect/engine/task_runner.py
--- a/src/prefect/engine/task_runner.py
+++ b/src/prefect/engine/task_runner.py
@@ -1,6 +1,4 @@
-import copy
from contextlib import redirect_stdout
-import itertools
import json
from typing import (
Any,
@@ -196,7 +194,7 @@ def run(
state: State = None,
upstream_states: Dict[Edge, State] = None,
context: Dict[str, Any] = None,
- executor: "prefect.engine.executors.Executor" = None,
+ is_mapped_parent: bool = False,
) -> State:
"""
The main endpoint for TaskRunners. Calling this method will conditionally execute
@@ -210,8 +208,8 @@ def run(
representing the states of any tasks upstream of this one. The keys of the
dictionary should correspond to the edges leading to the task.
- context (dict, optional): prefect Context to use for execution
- - executor (Executor, optional): executor to use when performing
- computation; defaults to the executor specified in your prefect configuration
+ - is_mapped_parent (bool): a boolean indicating whether this task run is the run of a parent
+ mapped task
Returns:
- `State` object representing the final post-run state of the Task
@@ -224,15 +222,6 @@ def run(
index=("" if map_index is None else "[{}]".format(map_index)),
)
- if executor is None:
- executor = prefect.engine.get_default_executor_class()()
-
- # if mapped is true, this task run is going to generate a Mapped state. It won't
- # actually run, but rather spawn children tasks to map over its inputs. We
- # detect this case by checking for:
- # - upstream edges that are `mapped`
- # - no `map_index` (which indicates that this is the child task, not the parent)
- mapped = any([e.mapped for e in upstream_states]) and map_index is None
task_inputs = {} # type: Dict[str, Any]
try:
@@ -270,29 +259,16 @@ def run(
state=state, upstream_states=upstream_states
)
- # if the task is mapped, process the mapped children and exit
- if mapped:
- state = self.run_mapped_task(
- state=state,
- upstream_states=upstream_states,
- context=context,
- executor=executor,
- )
-
- state = self.wait_for_mapped_task(state=state, executor=executor)
-
- self.logger.debug(
- "Task '{name}': task has been mapped; ending run.".format(
- name=context["task_full_name"]
- )
- )
- raise ENDRUN(state)
-
# retrieve task inputs from upstream and also explicitly passed inputs
task_inputs = self.get_task_inputs(
state=state, upstream_states=upstream_states
)
+ if is_mapped_parent:
+ state = self.check_task_ready_to_map(
+ state, upstream_states=upstream_states
+ )
+
if self.task.target:
# check to see if there is a Result at the task's target
state = self.check_target(state, inputs=task_inputs)
@@ -309,9 +285,7 @@ def run(
state = self.set_task_to_running(state, inputs=task_inputs)
# run the task
- state = self.get_task_run_state(
- state, inputs=task_inputs, timeout_handler=executor.timeout_handler
- )
+ state = self.get_task_run_state(state, inputs=task_inputs)
# cache the output, if appropriate
state = self.cache_result(state, inputs=task_inputs)
@@ -324,7 +298,6 @@ def run(
inputs=task_inputs,
upstream_states=upstream_states,
context=context,
- executor=executor,
)
# for pending signals, including retries and pauses we need to make sure the
@@ -438,6 +411,45 @@ def check_upstream_skipped(
)
return state
+ @call_state_handlers
+ def check_task_ready_to_map(
+ self, state: State, upstream_states: Dict[Edge, State]
+ ) -> State:
+ """
+ Checks if the parent task is ready to proceed with mapping.
+
+ Args:
+ - state (State): the current state of this task
+ - upstream_states (Dict[Edge, Union[State, List[State]]]): the upstream states
+
+ Raises:
+ - ENDRUN: either way, we dont continue past this point
+ """
+ if state.is_mapped():
+ raise ENDRUN(state)
+
+ ## we can't map if there are no success states with iterables upstream
+ if upstream_states and not any(
+ [
+ edge.mapped and state.is_successful()
+ for edge, state in upstream_states.items()
+ ]
+ ):
+ new_state = Failed("No upstream states can be mapped over.") # type: State
+ raise ENDRUN(new_state)
+ elif not all(
+ [
+ hasattr(state.result, "__getitem__")
+ for edge, state in upstream_states.items()
+ if state.is_successful() and not state.is_mapped() and edge.mapped
+ ]
+ ):
+ new_state = Failed("No upstream states can be mapped over.")
+ raise ENDRUN(new_state)
+ else:
+ new_state = Mapped("Ready to proceed with mapping.")
+ raise ENDRUN(new_state)
+
@call_state_handlers
def check_task_trigger(
self, state: State, upstream_states: Dict[Edge, State]
@@ -718,153 +730,6 @@ def check_task_is_cached(self, state: State, inputs: Dict[str, Result]) -> State
)
return state or Pending("Cache was invalid; ready to run.")
- def run_mapped_task(
- self,
- state: State,
- upstream_states: Dict[Edge, State],
- context: Dict[str, Any],
- executor: "prefect.engine.executors.Executor",
- ) -> State:
- """
- If the task is being mapped, submits children tasks for execution. Returns a `Mapped` state.
-
- Args:
- - state (State): the current task state
- - upstream_states (Dict[Edge, State]): the upstream states
- - context (dict, optional): prefect Context to use for execution
- - executor (Executor): executor to use when performing computation
-
- Returns:
- - State: the state of the task after running the check
-
- Raises:
- - ENDRUN: if the current state is not `Running`
- """
-
- map_upstream_states = []
-
- # we don't know how long the iterables are, but we want to iterate until we reach
- # the end of the shortest one
- counter = itertools.count()
-
- # infinite loop, if upstream_states has any entries
- while True and upstream_states:
- i = next(counter)
- states = {}
-
- try:
-
- for edge, upstream_state in upstream_states.items():
-
- # if the edge is not mapped over, then we take its state
- if not edge.mapped:
- states[edge] = upstream_state
-
- # if the edge is mapped and the upstream state is Mapped, then we are mapping
- # over a mapped task. In this case, we take the appropriately-indexed upstream
- # state from the upstream tasks's `Mapped.map_states` array.
- # Note that these "states" might actually be futures at this time; we aren't
- # blocking until they finish.
- elif edge.mapped and upstream_state.is_mapped():
- states[edge] = upstream_state.map_states[i] # type: ignore
-
- # Otherwise, we are mapping over the result of a "vanilla" task. In this
- # case, we create a copy of the upstream state but set the result to the
- # appropriately-indexed item from the upstream task's `State.result`
- # array.
- else:
- states[edge] = copy.copy(upstream_state)
-
- # if the current state is already Mapped, then we might be executing
- # a re-run of the mapping pipeline. In that case, the upstream states
- # might not have `result` attributes (as any required results could be
- # in the `cached_inputs` attribute of one of the child states).
- # Therefore, we only try to get a result if EITHER this task's
- # state is not already mapped OR the upstream result is not None.
- if not state.is_mapped() or upstream_state._result != NoResult:
- if not hasattr(upstream_state.result, "__getitem__"):
- raise TypeError(
- "Cannot map over unsubscriptable object of type {t}: {preview}...".format(
- t=type(upstream_state.result),
- preview=repr(upstream_state.result)[:10],
- )
- )
- upstream_result = upstream_state._result.from_value( # type: ignore
- upstream_state.result[i]
- )
- states[edge].result = upstream_result
- elif state.is_mapped():
- if i >= len(state.map_states): # type: ignore
- raise IndexError()
-
- # only add this iteration if we made it through all iterables
- map_upstream_states.append(states)
-
- # index error means we reached the end of the shortest iterable
- except IndexError:
- break
-
- def run_fn(
- state: State, map_index: int, upstream_states: Dict[Edge, State]
- ) -> State:
- map_context = context.copy()
- map_context.update(map_index=map_index)
- with prefect.context(self.context):
- return self.run(
- upstream_states=upstream_states,
- # if we set the state here, then it will not be processed by `initialize_run()`
- state=state,
- context=map_context,
- executor=executor,
- )
-
- # generate initial states, if available
- if isinstance(state, Mapped):
- initial_states = list(state.map_states) # type: List[Optional[State]]
- else:
- initial_states = []
- initial_states.extend([None] * (len(map_upstream_states) - len(initial_states)))
-
- current_state = Mapped(
- message="Preparing to submit {} mapped tasks.".format(len(initial_states)),
- map_states=initial_states, # type: ignore
- )
- state = self.handle_state_change(old_state=state, new_state=current_state)
- if state is not current_state:
- return state
-
- # map over the initial states, a counter representing the map_index, and also the mapped upstream states
- map_states = executor.map(
- run_fn, initial_states, range(len(map_upstream_states)), map_upstream_states
- )
-
- self.logger.debug(
- "{} mapped tasks submitted for execution.".format(len(map_states))
- )
- new_state = Mapped(
- message="Mapped tasks submitted for execution.", map_states=map_states
- )
- return self.handle_state_change(old_state=state, new_state=new_state)
-
- @call_state_handlers
- def wait_for_mapped_task(
- self, state: State, executor: "prefect.engine.executors.Executor"
- ) -> State:
- """
- Blocks until a mapped state's children have finished running.
-
- Args:
- - state (State): the current `Mapped` state
- - executor (Executor): the run's executor
-
- Returns:
- - State: the new state
- """
- if state.is_mapped():
- assert isinstance(state, Mapped) # mypy assert
- state.map_states = executor.wait(state.map_states)
- return state
-
@call_state_handlers
def set_task_to_running(self, state: State, inputs: Dict[str, Result]) -> State:
"""
@@ -895,12 +760,7 @@ def set_task_to_running(self, state: State, inputs: Dict[str, Result]) -> State:
@run_with_heartbeat
@call_state_handlers
- def get_task_run_state(
- self,
- state: State,
- inputs: Dict[str, Result],
- timeout_handler: Optional[Callable] = None,
- ) -> State:
+ def get_task_run_state(self, state: State, inputs: Dict[str, Result],) -> State:
"""
Runs the task and traps any signals or errors it raises.
Also checkpoints the result of a successful task, if `task.checkpoint` is `True`.
@@ -909,9 +769,6 @@ def get_task_run_state(
- state (State): the current state of this task
- inputs (Dict[str, Result], optional): a dictionary of inputs whose keys correspond
to the task's `run()` arguments.
- - timeout_handler (Callable, optional): function for timing out
- task execution, with call signature `handler(fn, *args, **kwargs)`. Defaults to
- `prefect.utilities.executors.timeout_handler`
Returns:
- State: the state of the task after running the check
@@ -937,9 +794,7 @@ def get_task_run_state(
name=prefect.context.get("task_full_name", self.task.name)
)
)
- timeout_handler = (
- timeout_handler or prefect.utilities.executors.timeout_handler
- )
+ timeout_handler = prefect.utilities.executors.timeout_handler
raw_inputs = {k: r.value for k, r in inputs.items()}
if getattr(self.task, "log_stdout", False):
@@ -1096,7 +951,6 @@ def check_task_is_looping(
inputs: Dict[str, Result] = None,
upstream_states: Dict[Edge, State] = None,
context: Dict[str, Any] = None,
- executor: "prefect.engine.executors.Executor" = None,
) -> State:
"""
Checks to see if the task is in a `Looped` state and if so, rerun the pipeline with an incremeneted `loop_count`.
@@ -1110,8 +964,6 @@ def check_task_is_looping(
representing the states of any tasks upstream of this one. The keys of the
dictionary should correspond to the edges leading to the task.
- context (dict, optional): prefect Context to use for execution
- - executor (Executor, optional): executor to use when performing
- computation; defaults to the executor specified in your prefect configuration
Returns:
- `State` object representing the final post-run state of the Task
@@ -1134,7 +986,6 @@ def check_task_is_looping(
new_state,
upstream_states=upstream_states,
context=context,
- executor=executor,
)
return state
diff --git a/src/prefect/utilities/executors.py b/src/prefect/utilities/executors.py
--- a/src/prefect/utilities/executors.py
+++ b/src/prefect/utilities/executors.py
@@ -1,3 +1,5 @@
+import copy
+import itertools
import multiprocessing
import os
import signal
@@ -8,13 +10,15 @@
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout
from functools import wraps
-from typing import TYPE_CHECKING, Any, Callable, List, Union
+from typing import TYPE_CHECKING, Any, Callable, Dict, List, Union
import prefect
if TYPE_CHECKING:
import prefect.engine.runner
import prefect.engine.state
+ from prefect.core.edge import Edge # pylint: disable=W0611
+ from prefect.core.task import Task # pylint: disable=W0611
from prefect.engine.state import State # pylint: disable=W0611
StateList = Union["State", List["State"]]
@@ -271,3 +275,99 @@ def wrapper(*args: Any, **kwargs: Any) -> Any:
setattr(wrapper, "__wrapped_func__", func)
return wrapper
+
+
+def prepare_upstream_states_for_mapping(
+ state: "State",
+ upstream_states: Dict["Edge", "State"],
+ mapped_children: Dict["Task", list],
+) -> list:
+ """
+ If the task is being mapped, submits children tasks for execution. Returns a `Mapped` state.
+
+ Args:
+ - state (State): the parent task's current state
+ - upstream_states (Dict[Edge, State]): the upstream states to this task
+ - mapped_children (Dict[Task, List[State]]): any mapped children upstream of this task
+
+ Returns:
+ - List: a restructured list of upstream states correponding to each new mapped child task
+ """
+
+ ## if the current state is failed / skipped or otherwise
+ ## in a state that signifies we should not continue with mapping,
+ ## we return an empty list
+ if state.is_pending() or state.is_failed() or state.is_skipped():
+ return []
+
+ map_upstream_states = []
+
+ # we don't know how long the iterables are, but we want to iterate until we reach
+ # the end of the shortest one
+ counter = itertools.count()
+
+ # infinite loop, if upstream_states has any entries
+ while True and upstream_states:
+ i = next(counter)
+ states = {}
+
+ try:
+
+ for edge, upstream_state in upstream_states.items():
+
+ # ensure we are working with populated result objects
+ if edge.key in state.cached_inputs:
+ upstream_state._result = state.cached_inputs[edge.key]
+
+ # if the edge is not mapped over, then we take its state
+ if not edge.mapped:
+ states[edge] = upstream_state
+
+ # if the edge is mapped and the upstream state is Mapped, then we are mapping
+ # over a mapped task. In this case, we take the appropriately-indexed upstream
+ # state from the upstream tasks's `Mapped.map_states` array.
+ # Note that these "states" might actually be futures at this time; we aren't
+ # blocking until they finish.
+ elif edge.mapped and upstream_state.is_mapped():
+ states[edge] = mapped_children[edge.upstream_task][i] # type: ignore
+
+ # Otherwise, we are mapping over the result of a "vanilla" task. In this
+ # case, we create a copy of the upstream state but set the result to the
+ # appropriately-indexed item from the upstream task's `State.result`
+ # array.
+ else:
+ states[edge] = copy.copy(upstream_state)
+
+ # if the current state is already Mapped, then we might be executing
+ # a re-run of the mapping pipeline. In that case, the upstream states
+ # might not have `result` attributes (as any required results could be
+ # in the `cached_inputs` attribute of one of the child states).
+ # Therefore, we only try to get a result if EITHER this task's
+ # state is not already mapped OR the upstream result is not None.
+ if (
+ not state.is_mapped()
+ or upstream_state._result != prefect.engine.result.NoResult
+ ):
+ if not hasattr(upstream_state.result, "__getitem__"):
+ raise TypeError(
+ "Cannot map over unsubscriptable object of type {t}: {preview}...".format(
+ t=type(upstream_state.result),
+ preview=repr(upstream_state.result)[:10],
+ )
+ )
+ upstream_result = upstream_state._result.from_value( # type: ignore
+ upstream_state.result[i]
+ )
+ states[edge].result = upstream_result
+ elif state.is_mapped():
+ if i >= len(state.map_states): # type: ignore
+ raise IndexError()
+
+ # only add this iteration if we made it through all iterables
+ map_upstream_states.append(states)
+
+ # index error means we reached the end of the shortest iterable
+ except IndexError:
+ break
+
+ return map_upstream_states
</patch> | [] | [] | ||||
googleapis__google-cloud-python-3156 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Language: support mention type in Entity.mentions.
[Currently](https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/language/google/cloud/language/entity.py#L79) the mentions property of an entity is only a list of strings whereas it should be a list of objects containing the mention text and mention type.
Furthermore, this change should add mention_type information to the mention documentation.
</issue>
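For illustration only — this is not the actual `google-cloud-language` API — the shape the issue asks for is roughly one object per mention, carrying both the mention text and its mention type. The `Mention` and `MentionType` names and the assumed payload layout below are placeholders for this sketch.

```python
class MentionType(object):
    """Hypothetical set of mention types returned by the Natural Language API."""

    TYPE_UNKNOWN = 'TYPE_UNKNOWN'
    PROPER = 'PROPER'  # proper-name mention, e.g. "Rome"
    COMMON = 'COMMON'  # common-noun mention, e.g. "the city"


class Mention(object):
    """A single mention of an entity: its text plus the mention type."""

    def __init__(self, text, mention_type):
        self.text = text
        self.mention_type = mention_type

    @classmethod
    def from_api_repr(cls, payload):
        # Assumed payload shape: {'text': {'content': ...}, 'type': ...}
        return cls(payload['text']['content'], payload['type'])

    def __repr__(self):
        return '<Mention %r (%s)>' % (self.text, self.mention_type)
```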
<code>
[start of README.rst]
1 Google Cloud Python Client
2 ==========================
3
4 Python idiomatic client for `Google Cloud Platform`_ services.
5
6 .. _Google Cloud Platform: https://cloud.google.com/
7
8 |pypi| |circleci| |build| |appveyor| |coverage| |versions|
9
10 - `Homepage`_
11 - `API Documentation`_
12 - `Read The Docs Documentation`_
13
14 .. _Homepage: https://googlecloudplatform.github.io/google-cloud-python/
15 .. _API Documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/
16 .. _Read The Docs Documentation: https://google-cloud-python.readthedocs.io/en/latest/
17
18 This client library has **beta** support for the following Google
19 Cloud Platform services:
20
21 - `Google BigQuery`_ (`BigQuery README`_)
22 - `Google Cloud Datastore`_ (`Datastore README`_)
23 - `Stackdriver Logging`_ (`Logging README`_)
24 - `Google Cloud Storage`_ (`Storage README`_)
25 - `Google Cloud Vision`_ (`Vision README`_)
26
27 **Beta** indicates that the client library for a particular service is
28 mostly stable and is being prepared for release. Issues and requests
29 against beta libraries are addressed with a higher priority.
30
31 This client library has **alpha** support for the following Google
32 Cloud Platform services:
33
34 - `Google Cloud Pub/Sub`_ (`Pub/Sub README`_)
35 - `Google Cloud Resource Manager`_ (`Resource Manager README`_)
36 - `Stackdriver Monitoring`_ (`Monitoring README`_)
37 - `Google Cloud Bigtable`_ (`Bigtable README`_)
38 - `Google Cloud DNS`_ (`DNS README`_)
39 - `Stackdriver Error Reporting`_ (`Error Reporting README`_)
40 - `Google Cloud Natural Language`_ (`Natural Language README`_)
41 - `Google Cloud Translation`_ (`Translation README`_)
42 - `Google Cloud Speech`_ (`Speech README`_)
43 - `Google Cloud Bigtable - HappyBase`_ (`HappyBase README`_)
44 - `Google Cloud Runtime Configuration`_ (`Runtime Config README`_)
45 - `Cloud Spanner`_ (`Cloud Spanner README`_)
46
47 **Alpha** indicates that the client library for a particular service is
48 still a work-in-progress and is more likely to get backwards-incompatible
49 updates. See `versioning`_ for more details.
50
51 .. _Google Cloud Datastore: https://pypi.python.org/pypi/google-cloud-datastore
52 .. _Datastore README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/datastore
53 .. _Google Cloud Storage: https://pypi.python.org/pypi/google-cloud-storage
54 .. _Storage README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/storage
55 .. _Google Cloud Pub/Sub: https://pypi.python.org/pypi/google-cloud-pubsub
56 .. _Pub/Sub README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/pubsub
57 .. _Google BigQuery: https://pypi.python.org/pypi/google-cloud-bigquery
58 .. _BigQuery README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigquery
59 .. _Google Cloud Resource Manager: https://pypi.python.org/pypi/google-cloud-resource-manager
60 .. _Resource Manager README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/resource_manager
61 .. _Stackdriver Logging: https://pypi.python.org/pypi/google-cloud-logging
62 .. _Logging README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/logging
63 .. _Stackdriver Monitoring: https://pypi.python.org/pypi/google-cloud-monitoring
64 .. _Monitoring README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/monitoring
65 .. _Google Cloud Bigtable: https://pypi.python.org/pypi/google-cloud-bigtable
66 .. _Bigtable README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigtable
67 .. _Google Cloud DNS: https://pypi.python.org/pypi/google-cloud-dns
68 .. _DNS README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/dns
69 .. _Stackdriver Error Reporting: https://pypi.python.org/pypi/google-cloud-error-reporting
70 .. _Error Reporting README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/error_reporting
71 .. _Google Cloud Natural Language: https://pypi.python.org/pypi/google-cloud-language
72 .. _Natural Language README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/language
73 .. _Google Cloud Translation: https://pypi.python.org/pypi/google-cloud-translate
74 .. _Translation README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/translate
75 .. _Google Cloud Speech: https://pypi.python.org/pypi/google-cloud-speech
76 .. _Speech README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/speech
77 .. _Google Cloud Vision: https://pypi.python.org/pypi/google-cloud-vision
78 .. _Vision README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/vision
79 .. _Google Cloud Bigtable - HappyBase: https://pypi.python.org/pypi/google-cloud-happybase/
80 .. _HappyBase README: https://github.com/GoogleCloudPlatform/google-cloud-python-happybase
81 .. _Google Cloud Runtime Configuration: https://cloud.google.com/deployment-manager/runtime-configurator/
82 .. _Runtime Config README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/runtimeconfig
83 .. _Cloud Spanner: https://cloud.google.com/spanner/
84 .. _Cloud Spanner README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/spanner
85 .. _versioning: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst#versioning
86
87 If you need support for other Google APIs, check out the
88 `Google APIs Python Client library`_.
89
90 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client
91
92 Quick Start
93 -----------
94
95 .. code-block:: console
96
97 $ pip install --upgrade google-cloud
98
99 Example Applications
100 --------------------
101
102 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
103 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage
104
105 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python
106 .. _tutorial: https://cloud.google.com/python
107 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo
108
109 Authentication
110 --------------
111
112 With ``google-cloud-python`` we try to make authentication as painless as possible.
113 Check out the `Authentication section`_ in our documentation to learn more.
114 You may also find the `authentication document`_ shared by all the
115 ``google-cloud-*`` libraries to be helpful.
116
117 .. _Authentication section: https://google-cloud-python.readthedocs.io/en/latest/google-cloud-auth.html
118 .. _authentication document: https://github.com/GoogleCloudPlatform/gcloud-common/tree/master/authentication
119
120 Contributing
121 ------------
122
123 Contributions to this library are always welcome and highly encouraged.
124
125 See `CONTRIBUTING`_ for more information on how to get started.
126
127 .. _CONTRIBUTING: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst
128
129 Community
130 ---------
131
132 Google Cloud Platform Python developers hang out in `Slack`_ in the ``#python``
133 channel, click here to `get an invitation`_.
134
135
136 .. _Slack: https://googlecloud-community.slack.com
137 .. _get an invitation: https://gcp-slack.appspot.com/
138
139 License
140 -------
141
142 Apache 2.0 - See `LICENSE`_ for more information.
143
144 .. _LICENSE: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/LICENSE
145
146 .. |build| image:: https://travis-ci.org/GoogleCloudPlatform/google-cloud-python.svg?branch=master
147 :target: https://travis-ci.org/GoogleCloudPlatform/google-cloud-python
148 .. |circleci| image:: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python.svg?style=shield
149 :target: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python
150 .. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/googlecloudplatform/google-cloud-python?branch=master&svg=true
151 :target: https://ci.appveyor.com/project/GoogleCloudPlatform/google-cloud-python
152 .. |coverage| image:: https://coveralls.io/repos/GoogleCloudPlatform/google-cloud-python/badge.svg?branch=master
153 :target: https://coveralls.io/r/GoogleCloudPlatform/google-cloud-python?branch=master
154 .. |pypi| image:: https://img.shields.io/pypi/v/google-cloud.svg
155 :target: https://pypi.python.org/pypi/google-cloud
156 .. |versions| image:: https://img.shields.io/pypi/pyversions/google-cloud.svg
157 :target: https://pypi.python.org/pypi/google-cloud
158
[end of README.rst]
[start of core/google/cloud/credentials.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """A simple wrapper around the OAuth2 credentials library."""
16
17 import base64
18 import datetime
19 import six
20 from six.moves.urllib.parse import urlencode
21
22 import google.auth
23 import google.auth.credentials
24
25 from google.cloud._helpers import UTC
26 from google.cloud._helpers import _NOW
27 from google.cloud._helpers import _microseconds_from_datetime
28
29
30 def get_credentials():
31 """Gets credentials implicitly from the current environment.
32
33 Uses :func:`google.auth.default()`.
34
35 :rtype: :class:`google.auth.credentials.Credentials`,
36 :returns: A new credentials instance corresponding to the implicit
37 environment.
38 """
39 credentials, _ = google.auth.default()
40 return credentials
41
42
43 def _get_signed_query_params(credentials, expiration, string_to_sign):
44 """Gets query parameters for creating a signed URL.
45
46 :type credentials: :class:`google.auth.credentials.Signer`
47 :param credentials: The credentials used to create a private key
48 for signing text.
49
50 :type expiration: int or long
51 :param expiration: When the signed URL should expire.
52
53 :type string_to_sign: str
54 :param string_to_sign: The string to be signed by the credentials.
55
56     :raises AttributeError: If :meth:`sign_bytes` is unavailable.
57
58 :rtype: dict
59 :returns: Query parameters matching the signing credentials with a
60 signed payload.
61 """
62 if not isinstance(credentials, google.auth.credentials.Signing):
63 auth_uri = ('http://google-cloud-python.readthedocs.io/en/latest/'
64 'google-cloud-auth.html#setting-up-a-service-account')
65         raise AttributeError('you need a private key to sign credentials. '
66 'the credentials you are currently using %s '
67 'just contains a token. see %s for more '
68 'details.' % (type(credentials), auth_uri))
69
70 signature_bytes = credentials.sign_bytes(string_to_sign)
71 signature = base64.b64encode(signature_bytes)
72 service_account_name = credentials.signer_email
73 return {
74 'GoogleAccessId': service_account_name,
75 'Expires': str(expiration),
76 'Signature': signature,
77 }
78
79
80 def _get_expiration_seconds(expiration):
81 """Convert 'expiration' to a number of seconds in the future.
82
83 :type expiration: int, long, datetime.datetime, datetime.timedelta
84 :param expiration: When the signed URL should expire.
85
86 :raises TypeError: When expiration is not an integer.
87
88 :rtype: int
89 :returns: a timestamp as an absolute number of seconds.
90 """
91 # If it's a timedelta, add it to `now` in UTC.
92 if isinstance(expiration, datetime.timedelta):
93 now = _NOW().replace(tzinfo=UTC)
94 expiration = now + expiration
95
96 # If it's a datetime, convert to a timestamp.
97 if isinstance(expiration, datetime.datetime):
98 micros = _microseconds_from_datetime(expiration)
99 expiration = micros // 10**6
100
101 if not isinstance(expiration, six.integer_types):
102 raise TypeError('Expected an integer timestamp, datetime, or '
103 'timedelta. Got %s' % type(expiration))
104 return expiration
105
106
107 def generate_signed_url(credentials, resource, expiration,
108 api_access_endpoint='',
109 method='GET', content_md5=None,
110 content_type=None, response_type=None,
111 response_disposition=None, generation=None):
112 """Generate signed URL to provide query-string auth'n to a resource.
113
114 .. note::
115
116 Assumes ``credentials`` implements the
117 :class:`google.auth.credentials.Signing` interface. Also assumes
118 ``credentials`` has a ``service_account_email`` property which
119 identifies the credentials.
120
121 .. note::
122
123 If you are on Google Compute Engine, you can't generate a signed URL.
124 Follow `Issue 922`_ for updates on this. If you'd like to be able to
125 generate a signed URL from GCE, you can use a standard service account
126 from a JSON file rather than a GCE service account.
127
128 See headers `reference`_ for more details on optional arguments.
129
130 .. _Issue 922: https://github.com/GoogleCloudPlatform/\
131 google-cloud-python/issues/922
132 .. _reference: https://cloud.google.com/storage/docs/reference-headers
133
134 :type credentials: :class:`google.auth.credentials.Signing`
135 :param credentials: Credentials object with an associated private key to
136 sign text.
137
138 :type resource: str
139 :param resource: A pointer to a specific resource
140 (typically, ``/bucket-name/path/to/blob.txt``).
141
142 :type expiration: :class:`int`, :class:`long`, :class:`datetime.datetime`,
143 :class:`datetime.timedelta`
144 :param expiration: When the signed URL should expire.
145
146 :type api_access_endpoint: str
147 :param api_access_endpoint: Optional URI base. Defaults to empty string.
148
149 :type method: str
150 :param method: The HTTP verb that will be used when requesting the URL.
151 Defaults to ``'GET'``.
152
153 :type content_md5: str
154 :param content_md5: (Optional) The MD5 hash of the object referenced by
155 ``resource``.
156
157 :type content_type: str
158 :param content_type: (Optional) The content type of the object referenced
159 by ``resource``.
160
161 :type response_type: str
162 :param response_type: (Optional) Content type of responses to requests for
163 the signed URL. Used to over-ride the content type of
164 the underlying resource.
165
166 :type response_disposition: str
167 :param response_disposition: (Optional) Content disposition of responses to
168 requests for the signed URL.
169
170 :type generation: str
171 :param generation: (Optional) A value that indicates which generation of
172 the resource to fetch.
173
174 :rtype: str
175 :returns: A signed URL you can use to access the resource
176 until expiration.
177 """
178 expiration = _get_expiration_seconds(expiration)
179
180 # Generate the string to sign.
181 string_to_sign = '\n'.join([
182 method,
183 content_md5 or '',
184 content_type or '',
185 str(expiration),
186 resource])
187
188 # Set the right query parameters.
189 query_params = _get_signed_query_params(credentials,
190 expiration,
191 string_to_sign)
192 if response_type is not None:
193 query_params['response-content-type'] = response_type
194 if response_disposition is not None:
195 query_params['response-content-disposition'] = response_disposition
196 if generation is not None:
197 query_params['generation'] = generation
198
199 # Return the built URL.
200 return '{endpoint}{resource}?{querystring}'.format(
201 endpoint=api_access_endpoint, resource=resource,
202 querystring=urlencode(query_params))
203
[end of core/google/cloud/credentials.py]
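As an aside on the `generate_signed_url` helper above, a short usage sketch; the service-account key path, bucket, and blob names are made up, and any credentials implementing `google.auth.credentials.Signing` (such as `google.oauth2.service_account.Credentials`) are assumed to work the same way.

```python
import datetime

from google.oauth2 import service_account

from google.cloud.credentials import generate_signed_url

# Hypothetical service-account key file; the credentials must be able to sign.
credentials = service_account.Credentials.from_service_account_file(
    '/path/to/service-account.json')

# URL valid for one hour, granting GET access to a single blob.
url = generate_signed_url(
    credentials,
    resource='/my-bucket/path/to/blob.txt',
    expiration=datetime.timedelta(hours=1),
    api_access_endpoint='https://storage.googleapis.com',
    method='GET')
print(url)
```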
[start of datastore/google/cloud/datastore/helpers.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helper functions for dealing with Cloud Datastore's Protobuf API.
16
17 The non-private functions are part of the API.
18 """
19
20 import datetime
21 import itertools
22
23 from google.protobuf import struct_pb2
24 from google.type import latlng_pb2
25 import six
26
27 from google.cloud._helpers import _datetime_to_pb_timestamp
28 from google.cloud._helpers import _pb_timestamp_to_datetime
29 from google.cloud.proto.datastore.v1 import entity_pb2 as _entity_pb2
30 from google.cloud.datastore.entity import Entity
31 from google.cloud.datastore.key import Key
32
33
34 def _get_meaning(value_pb, is_list=False):
35 """Get the meaning from a protobuf value.
36
37 :type value_pb: :class:`.entity_pb2.Value`
38 :param value_pb: The protobuf value to be checked for an
39 associated meaning.
40
41 :type is_list: bool
42 :param is_list: Boolean indicating if the ``value_pb`` contains
43 a list value.
44
45 :rtype: int
46 :returns: The meaning for the ``value_pb`` if one is set, else
47 :data:`None`. For a list value, if there are disagreeing
48              meanings it just returns a list of meanings. If all the
49 list meanings agree, it just condenses them.
50 """
51 meaning = None
52 if is_list:
53 # An empty list will have no values, hence no shared meaning
54 # set among them.
55 if len(value_pb.array_value.values) == 0:
56 return None
57
58 # We check among all the meanings, some of which may be None,
59 # the rest which may be enum/int values.
60 all_meanings = [_get_meaning(sub_value_pb)
61 for sub_value_pb in value_pb.array_value.values]
62 unique_meanings = set(all_meanings)
63 if len(unique_meanings) == 1:
64 # If there is a unique meaning, we preserve it.
65 meaning = unique_meanings.pop()
66 else: # We know len(value_pb.array_value.values) > 0.
67 # If the meaning is not unique, just return all of them.
68 meaning = all_meanings
69 elif value_pb.meaning: # Simple field (int32).
70 meaning = value_pb.meaning
71
72 return meaning
73
74
75 def _new_value_pb(entity_pb, name):
76 """Add (by name) a new ``Value`` protobuf to an entity protobuf.
77
78 :type entity_pb: :class:`.entity_pb2.Entity`
79 :param entity_pb: An entity protobuf to add a new property to.
80
81 :type name: str
82 :param name: The name of the new property.
83
84 :rtype: :class:`.entity_pb2.Value`
85 :returns: The new ``Value`` protobuf that was added to the entity.
86 """
87 return entity_pb.properties.get_or_create(name)
88
89
90 def _property_tuples(entity_pb):
91 """Iterator of name, ``Value`` tuples from entity properties.
92
93 :type entity_pb: :class:`.entity_pb2.Entity`
94     :param entity_pb: The entity protobuf whose properties are iterated over.
95
96 :rtype: :class:`generator`
97 :returns: An iterator that yields tuples of a name and ``Value``
98 corresponding to properties on the entity.
99 """
100 return six.iteritems(entity_pb.properties)
101
102
103 def entity_from_protobuf(pb):
104 """Factory method for creating an entity based on a protobuf.
105
106 The protobuf should be one returned from the Cloud Datastore
107 Protobuf API.
108
109 :type pb: :class:`.entity_pb2.Entity`
110 :param pb: The Protobuf representing the entity.
111
112 :rtype: :class:`google.cloud.datastore.entity.Entity`
113 :returns: The entity derived from the protobuf.
114 """
115 key = None
116 if pb.HasField('key'): # Message field (Key)
117 key = key_from_protobuf(pb.key)
118
119 entity_props = {}
120 entity_meanings = {}
121 exclude_from_indexes = []
122
123 for prop_name, value_pb in _property_tuples(pb):
124 value = _get_value_from_value_pb(value_pb)
125 entity_props[prop_name] = value
126
127 # Check if the property has an associated meaning.
128 is_list = isinstance(value, list)
129 meaning = _get_meaning(value_pb, is_list=is_list)
130 if meaning is not None:
131 entity_meanings[prop_name] = (meaning, value)
132
133 # Check if ``value_pb`` was excluded from index. Lists need to be
134 # special-cased and we require all ``exclude_from_indexes`` values
135 # in a list agree.
136 if is_list:
137 exclude_values = set(value_pb.exclude_from_indexes
138 for value_pb in value_pb.array_value.values)
139 if len(exclude_values) != 1:
140 raise ValueError('For an array_value, subvalues must either '
141 'all be indexed or all excluded from '
142 'indexes.')
143
144 if exclude_values.pop():
145 exclude_from_indexes.append(prop_name)
146 else:
147 if value_pb.exclude_from_indexes:
148 exclude_from_indexes.append(prop_name)
149
150 entity = Entity(key=key, exclude_from_indexes=exclude_from_indexes)
151 entity.update(entity_props)
152 entity._meanings.update(entity_meanings)
153 return entity
154
155
156 def _set_pb_meaning_from_entity(entity, name, value, value_pb,
157 is_list=False):
158 """Add meaning information (from an entity) to a protobuf.
159
160 :type entity: :class:`google.cloud.datastore.entity.Entity`
161 :param entity: The entity to be turned into a protobuf.
162
163 :type name: str
164 :param name: The name of the property.
165
166 :type value: object
167 :param value: The current value stored as property ``name``.
168
169 :type value_pb: :class:`.entity_pb2.Value`
170 :param value_pb: The protobuf value to add meaning / meanings to.
171
172 :type is_list: bool
173 :param is_list: (Optional) Boolean indicating if the ``value`` is
174 a list value.
175 """
176 if name not in entity._meanings:
177 return
178
179 meaning, orig_value = entity._meanings[name]
180 # Only add the meaning back to the protobuf if the value is
181 # unchanged from when it was originally read from the API.
182 if orig_value is not value:
183 return
184
185 # For lists, we set meaning on each sub-element.
186 if is_list:
187 if not isinstance(meaning, list):
188 meaning = itertools.repeat(meaning)
189 val_iter = six.moves.zip(value_pb.array_value.values,
190 meaning)
191 for sub_value_pb, sub_meaning in val_iter:
192 if sub_meaning is not None:
193 sub_value_pb.meaning = sub_meaning
194 else:
195 value_pb.meaning = meaning
196
197
198 def entity_to_protobuf(entity):
199 """Converts an entity into a protobuf.
200
201 :type entity: :class:`google.cloud.datastore.entity.Entity`
202 :param entity: The entity to be turned into a protobuf.
203
204 :rtype: :class:`.entity_pb2.Entity`
205 :returns: The protobuf representing the entity.
206 """
207 entity_pb = _entity_pb2.Entity()
208 if entity.key is not None:
209 key_pb = entity.key.to_protobuf()
210 entity_pb.key.CopyFrom(key_pb)
211
212 for name, value in entity.items():
213 value_is_list = isinstance(value, list)
214 if value_is_list and len(value) == 0:
215 continue
216
217 value_pb = _new_value_pb(entity_pb, name)
218 # Set the appropriate value.
219 _set_protobuf_value(value_pb, value)
220
221 # Add index information to protobuf.
222 if name in entity.exclude_from_indexes:
223 if not value_is_list:
224 value_pb.exclude_from_indexes = True
225
226 for sub_value in value_pb.array_value.values:
227 sub_value.exclude_from_indexes = True
228
229 # Add meaning information to protobuf.
230 _set_pb_meaning_from_entity(entity, name, value, value_pb,
231 is_list=value_is_list)
232
233 return entity_pb
234
235
236 def key_from_protobuf(pb):
237 """Factory method for creating a key based on a protobuf.
238
239 The protobuf should be one returned from the Cloud Datastore
240 Protobuf API.
241
242 :type pb: :class:`.entity_pb2.Key`
243 :param pb: The Protobuf representing the key.
244
245 :rtype: :class:`google.cloud.datastore.key.Key`
246 :returns: a new `Key` instance
247 """
248 path_args = []
249 for element in pb.path:
250 path_args.append(element.kind)
251 if element.id: # Simple field (int64)
252 path_args.append(element.id)
253         # This is safe: we expect proto objects returned to have only
254 # one of `name` or `id` set.
255 if element.name: # Simple field (string)
256 path_args.append(element.name)
257
258 project = None
259 if pb.partition_id.project_id: # Simple field (string)
260 project = pb.partition_id.project_id
261 namespace = None
262 if pb.partition_id.namespace_id: # Simple field (string)
263 namespace = pb.partition_id.namespace_id
264
265 return Key(*path_args, namespace=namespace, project=project)
266
267
268 def _pb_attr_value(val):
269 """Given a value, return the protobuf attribute name and proper value.
270
271 The Protobuf API uses different attribute names based on value types
272 rather than inferring the type. This function simply determines the
273 proper attribute name based on the type of the value provided and
274 returns the attribute name as well as a properly formatted value.
275
276 Certain value types need to be coerced into a different type (such
277 as a `datetime.datetime` into an integer timestamp, or a
278     `google.cloud.datastore.key.Key` into a Protobuf representation). This
279 function handles that for you.
280
281 .. note::
282 Values which are "text" ('unicode' in Python2, 'str' in Python3) map
283 to 'string_value' in the datastore; values which are "bytes"
284 ('str' in Python2, 'bytes' in Python3) map to 'blob_value'.
285
286 For example:
287
288 >>> _pb_attr_value(1234)
289 ('integer_value', 1234)
290 >>> _pb_attr_value('my_string')
291 ('string_value', 'my_string')
292
293 :type val: `datetime.datetime`, :class:`google.cloud.datastore.key.Key`,
294 bool, float, integer, string
295 :param val: The value to be scrutinized.
296
297 :rtype: tuple
298 :returns: A tuple of the attribute name and proper value type.
299 """
300
301 if isinstance(val, datetime.datetime):
302 name = 'timestamp'
303 value = _datetime_to_pb_timestamp(val)
304 elif isinstance(val, Key):
305 name, value = 'key', val.to_protobuf()
306 elif isinstance(val, bool):
307 name, value = 'boolean', val
308 elif isinstance(val, float):
309 name, value = 'double', val
310 elif isinstance(val, six.integer_types):
311 name, value = 'integer', val
312 elif isinstance(val, six.text_type):
313 name, value = 'string', val
314 elif isinstance(val, (bytes, str)):
315 name, value = 'blob', val
316 elif isinstance(val, Entity):
317 name, value = 'entity', val
318 elif isinstance(val, list):
319 name, value = 'array', val
320 elif isinstance(val, GeoPoint):
321 name, value = 'geo_point', val.to_protobuf()
322 elif val is None:
323 name, value = 'null', struct_pb2.NULL_VALUE
324 else:
325 raise ValueError("Unknown protobuf attr type %s" % type(val))
326
327 return name + '_value', value
328
329
330 def _get_value_from_value_pb(value_pb):
331 """Given a protobuf for a Value, get the correct value.
332
333 The Cloud Datastore Protobuf API returns a Property Protobuf which
334 has one value set and the rest blank. This function retrieves the
335     one value provided.
336
337 Some work is done to coerce the return value into a more useful type
338 (particularly in the case of a timestamp value, or a key value).
339
340 :type value_pb: :class:`.entity_pb2.Value`
341 :param value_pb: The Value Protobuf.
342
343 :rtype: object
344 :returns: The value provided by the Protobuf.
345 :raises: :class:`ValueError <exceptions.ValueError>` if no value type
346 has been set.
347 """
348 value_type = value_pb.WhichOneof('value_type')
349
350 if value_type == 'timestamp_value':
351 result = _pb_timestamp_to_datetime(value_pb.timestamp_value)
352
353 elif value_type == 'key_value':
354 result = key_from_protobuf(value_pb.key_value)
355
356 elif value_type == 'boolean_value':
357 result = value_pb.boolean_value
358
359 elif value_type == 'double_value':
360 result = value_pb.double_value
361
362 elif value_type == 'integer_value':
363 result = value_pb.integer_value
364
365 elif value_type == 'string_value':
366 result = value_pb.string_value
367
368 elif value_type == 'blob_value':
369 result = value_pb.blob_value
370
371 elif value_type == 'entity_value':
372 result = entity_from_protobuf(value_pb.entity_value)
373
374 elif value_type == 'array_value':
375 result = [_get_value_from_value_pb(value)
376 for value in value_pb.array_value.values]
377
378 elif value_type == 'geo_point_value':
379 result = GeoPoint(value_pb.geo_point_value.latitude,
380 value_pb.geo_point_value.longitude)
381
382 elif value_type == 'null_value':
383 result = None
384
385 else:
386 raise ValueError('Value protobuf did not have any value set')
387
388 return result
389
390
391 def _set_protobuf_value(value_pb, val):
392 """Assign 'val' to the correct subfield of 'value_pb'.
393
394 The Protobuf API uses different attribute names based on value types
395 rather than inferring the type.
396
397 Some value types (entities, keys, lists) cannot be directly
398 assigned; this function handles them correctly.
399
400 :type value_pb: :class:`.entity_pb2.Value`
401 :param value_pb: The value protobuf to which the value is being assigned.
402
403 :type val: :class:`datetime.datetime`, boolean, float, integer, string,
404 :class:`google.cloud.datastore.key.Key`,
405 :class:`google.cloud.datastore.entity.Entity`
406 :param val: The value to be assigned.
407 """
408 attr, val = _pb_attr_value(val)
409 if attr == 'key_value':
410 value_pb.key_value.CopyFrom(val)
411 elif attr == 'timestamp_value':
412 value_pb.timestamp_value.CopyFrom(val)
413 elif attr == 'entity_value':
414 entity_pb = entity_to_protobuf(val)
415 value_pb.entity_value.CopyFrom(entity_pb)
416 elif attr == 'array_value':
417 l_pb = value_pb.array_value.values
418 for item in val:
419 i_pb = l_pb.add()
420 _set_protobuf_value(i_pb, item)
421 elif attr == 'geo_point_value':
422 value_pb.geo_point_value.CopyFrom(val)
423 else: # scalar, just assign
424 setattr(value_pb, attr, val)
425
426
427 class GeoPoint(object):
428 """Simple container for a geo point value.
429
430 :type latitude: float
431 :param latitude: Latitude of a point.
432
433 :type longitude: float
434 :param longitude: Longitude of a point.
435 """
436
437 def __init__(self, latitude, longitude):
438 self.latitude = latitude
439 self.longitude = longitude
440
441 def to_protobuf(self):
442 """Convert the current object to protobuf.
443
444 :rtype: :class:`google.type.latlng_pb2.LatLng`.
445 :returns: The current point as a protobuf.
446 """
447 return latlng_pb2.LatLng(latitude=self.latitude,
448 longitude=self.longitude)
449
450 def __eq__(self, other):
451 """Compare two geo points for equality.
452
453 :rtype: bool
454 :returns: True if the points compare equal, else False.
455 """
456 if not isinstance(other, GeoPoint):
457 return False
458
459 return (self.latitude == other.latitude and
460 self.longitude == other.longitude)
461
462 def __ne__(self, other):
463 """Compare two geo points for inequality.
464
465 :rtype: bool
466 :returns: False if the points compare equal, else True.
467 """
468 return not self.__eq__(other)
469
[end of datastore/google/cloud/datastore/helpers.py]
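
A minimal sketch of how the helpers above fit together, round-tripping an
entity through its protobuf form; the project id, kind, and property values
below are made up for illustration::

    from google.cloud.datastore.entity import Entity
    from google.cloud.datastore.key import Key
    from google.cloud.datastore import helpers

    key = Key('Person', 1234, project='my-project')
    entity = Entity(key=key, exclude_from_indexes=('bio',))
    entity['name'] = u'Alice'
    entity['bio'] = u'A long, unindexed biography ...'
    entity['location'] = helpers.GeoPoint(37.4, -122.1)

    pb = helpers.entity_to_protobuf(entity)       # Entity -> entity_pb2.Entity
    restored = helpers.entity_from_protobuf(pb)   # entity_pb2.Entity -> Entity
    print(restored['name'], restored.exclude_from_indexes)

Note that ``entity_to_protobuf`` skips empty list values entirely, so an
empty-list property will not survive the round trip.
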
[start of docs/conf.py]
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # google-cloud documentation build configuration file, created by
16 # sphinx-quickstart on Tue Jan 21 22:24:47 2014.
17 #
18 # This file is execfile()d with the current directory set to its containing dir.
19 #
20 # Note that not all possible configuration values are present in this
21 # autogenerated file.
22 #
23 # All configuration values have a default; values that are commented out
24 # serve to show the default.
25
26 from email import message_from_string
27 import os
28 from pkg_resources import get_distribution
29 import sys
30 import urllib
31
32 import sphinx_rtd_theme
33
34
35 ON_READ_THE_DOCS = os.environ.get('READTHEDOCS', None) == 'True'
36
37 # If extensions (or modules to document with autodoc) are in another directory,
38 # add these directories to sys.path here. If the directory is relative to the
39 # documentation root, use os.path.abspath to make it absolute, like shown here.
40 sys.path.insert(0, os.path.abspath('..'))
41
42 # -- General configuration -----------------------------------------------------
43
44 # If your documentation needs a minimal Sphinx version, state it here.
45 #needs_sphinx = '1.0'
46
47 # Add any Sphinx extension module names here, as strings. They can be extensions
48 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
49 extensions = [
50 'sphinx.ext.autodoc',
51 'sphinx.ext.autosummary',
52 'sphinx.ext.doctest',
53 'sphinx.ext.intersphinx',
54 'sphinx.ext.todo',
55 'sphinx.ext.viewcode',
56 ]
57
58 # Add any paths that contain templates here, relative to this directory.
59 templates_path = []
60
61 # The suffix of source filenames.
62 source_suffix = '.rst'
63
64 # The encoding of source files.
65 #source_encoding = 'utf-8-sig'
66
67 # The master toctree document.
68 master_doc = 'index'
69
70 # General information about the project.
71 project = u'google-cloud'
72 copyright = u'2014, Google'
73
74 # The version info for the project you're documenting, acts as replacement for
75 # |version| and |release|, also used in various other places throughout the
76 # built documents.
77 #
78 # The short X.Y version.
79 distro = get_distribution('google-cloud')
80 release = os.getenv('SPHINX_RELEASE', distro.version)
81
82 # The language for content autogenerated by Sphinx. Refer to documentation
83 # for a list of supported languages.
84 #language = None
85
86 # There are two options for replacing |today|: either, you set today to some
87 # non-false value, then it is used:
88 #today = ''
89 # Else, today_fmt is used as the format for a strftime call.
90 #today_fmt = '%B %d, %Y'
91
92 # List of patterns, relative to source directory, that match files and
93 # directories to ignore when looking for source files.
94 exclude_patterns = ['_build']
95
96 # The reST default role (used for this markup: `text`) to use for all documents.
97 #default_role = None
98
99 # If true, '()' will be appended to :func: etc. cross-reference text.
100 #add_function_parentheses = True
101
102 # If true, the current module name will be prepended to all description
103 # unit titles (such as .. function::).
104 #add_module_names = True
105
106 # If true, sectionauthor and moduleauthor directives will be shown in the
107 # output. They are ignored by default.
108 #show_authors = False
109
110 # The name of the Pygments (syntax highlighting) style to use.
111 pygments_style = 'sphinx'
112
113 # A list of ignored prefixes for module index sorting.
114 #modindex_common_prefix = []
115
116
117 # -- Options for HTML output ---------------------------------------------------
118
119 # The theme to use for HTML and HTML Help pages. See the documentation for
120 # a list of builtin themes.
121
122 if not ON_READ_THE_DOCS:
123 html_theme = 'sphinx_rtd_theme'
124 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
125
126 # Theme options are theme-specific and customize the look and feel of a theme
127 # further. For a list of options available for each theme, see the
128 # documentation.
129 #html_theme_options = {}
130
131 # Add any paths that contain custom themes here, relative to this directory.
132 #html_theme_path = []
133
134 # The name for this set of Sphinx documents. If None, it defaults to
135 # "<project> v<release> documentation".
136 #html_title = None
137
138 # A shorter title for the navigation bar. Default is the same as html_title.
139 #html_short_title = None
140
141 # The name of an image file (relative to this directory) to place at the top
142 # of the sidebar.
143 #html_logo = None
144
145 # The name of an image file (within the static path) to use as favicon of the
146 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
147 # pixels large.
148 html_favicon = '_static/images/favicon.ico'
149
150 # Add any paths that contain custom static files (such as style sheets) here,
151 # relative to this directory. They are copied after the builtin static files,
152 # so a file named "default.css" will overwrite the builtin "default.css".
153 html_static_path = ['_static']
154
155 html_add_permalinks = '#'
156
157 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
158 # using the given strftime format.
159 #html_last_updated_fmt = '%b %d, %Y'
160
161 # If true, SmartyPants will be used to convert quotes and dashes to
162 # typographically correct entities.
163 #html_use_smartypants = True
164
165 # Custom sidebar templates, maps document names to template names.
166 #html_sidebars = {}
167
168 # Additional templates that should be rendered to pages, maps page names to
169 # template names.
170 #html_additional_pages = {}
171
172 # If false, no module index is generated.
173 #html_domain_indices = True
174
175 # If false, no index is generated.
176 #html_use_index = True
177
178 # If true, the index is split into individual pages for each letter.
179 #html_split_index = False
180
181 # If true, links to the reST sources are added to the pages.
182 #html_show_sourcelink = True
183
184 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
185 #html_show_sphinx = True
186
187 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
188 #html_show_copyright = True
189
190 # If true, an OpenSearch description file will be output, and all pages will
191 # contain a <link> tag referring to it. The value of this option must be the
192 # base URL from which the finished HTML is served.
193 #html_use_opensearch = ''
194
195 # This is the file name suffix for HTML files (e.g. ".xhtml").
196 #html_file_suffix = None
197
198 # Output file base name for HTML help builder.
199 htmlhelp_basename = 'google-cloud-doc'
200
201 html_context = {}
202
203
204 # -- Options for LaTeX output --------------------------------------------------
205
206 latex_elements = {
207 # The paper size ('letterpaper' or 'a4paper').
208 #'papersize': 'letterpaper',
209
210 # The font size ('10pt', '11pt' or '12pt').
211 #'pointsize': '10pt',
212
213 # Additional stuff for the LaTeX preamble.
214 #'preamble': '',
215 }
216
217 metadata = distro.get_metadata(distro.PKG_INFO)
218 author = message_from_string(metadata).get('Author')
219 # Grouping the document tree into LaTeX files. List of tuples
220 # (source start file, target name, title, author, documentclass [howto/manual]).
221 latex_documents = [
222 ('index', 'google-cloud.tex', u'google-cloud Documentation',
223 author, 'manual'),
224 ]
225
226 # The name of an image file (relative to this directory) to place at the top of
227 # the title page.
228 #latex_logo = None
229
230 # For "manual" documents, if this is true, then toplevel headings are parts,
231 # not chapters.
232 #latex_use_parts = False
233
234 # If true, show page references after internal links.
235 #latex_show_pagerefs = False
236
237 # If true, show URL addresses after external links.
238 #latex_show_urls = False
239
240 # Documents to append as an appendix to all manuals.
241 #latex_appendices = []
242
243 # If false, no module index is generated.
244 #latex_domain_indices = True
245
246
247 # -- Options for manual page output --------------------------------------------
248
249 # One entry per manual page. List of tuples
250 # (source start file, name, description, authors, manual section).
251 man_pages = [
252 ('index', 'google-cloud', u'google-cloud Documentation',
253 [author], 1)
254 ]
255
256 # If true, show URL addresses after external links.
257 #man_show_urls = False
258
259
260 # -- Options for Texinfo output ------------------------------------------------
261
262 # Grouping the document tree into Texinfo files. List of tuples
263 # (source start file, target name, title, author,
264 # dir menu entry, description, category)
265 texinfo_documents = [
266 ('index', 'google-cloud', u'google-cloud Documentation',
267 author, 'google-cloud', 'Python API for Google Cloud.',
268 'Miscellaneous'),
269 ]
270
271 # Documents to append as an appendix to all manuals.
272 #texinfo_appendices = []
273
274 # If false, no module index is generated.
275 #texinfo_domain_indices = True
276
277 # How to display URL addresses: 'footnote', 'no', or 'inline'.
278 #texinfo_show_urls = 'footnote'
279
280 # This pulls class descriptions from the class docstring,
281 # and parameter definitions from the __init__ docstring.
282 autoclass_content = 'both'
283
284 # Configuration for intersphinx:
285 # Refer to the Python standard library and the oauth2client and
286 # httplib2 libraries.
287 intersphinx_mapping = {
288 'httplib2': ('http://httplib2.readthedocs.io/en/latest/', None),
289 'oauth2client': ('http://oauth2client.readthedocs.io/en/latest', None),
290 'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None),
291 'python': ('https://docs.python.org/2', None),
292 'google-auth': ('https://google-auth.readthedocs.io/en/stable', None),
293 }
294
[end of docs/conf.py]
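
For reference, a small sketch of how the ``release`` and ``author`` values in
this configuration are resolved at build time from package metadata (this
assumes the ``google-cloud`` umbrella package is installed in the build
environment)::

    from email import message_from_string
    from pkg_resources import get_distribution

    distro = get_distribution('google-cloud')
    print(distro.version)                                 # default ``release``
    metadata = distro.get_metadata(distro.PKG_INFO)
    print(message_from_string(metadata).get('Author'))    # LaTeX / man author
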
[start of language/google/cloud/language/document.py]
1 # Copyright 2016-2017 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Definition for Google Cloud Natural Language API documents.
16
17 A document is used to hold text to be analyzed and annotated.
18 """
19
20 import collections
21 import sys
22
23 from google.cloud.language import api_responses
24 from google.cloud.language.entity import Entity
25 from google.cloud.language.sentiment import Sentiment
26 from google.cloud.language.sentence import Sentence
27 from google.cloud.language.syntax import Token
28
29
30 Annotations = collections.namedtuple(
31 'Annotations',
32 ['sentences', 'tokens', 'sentiment', 'entities', 'language'])
33 """Annotations for a document.
34
35 :type sentences: list
36 :param sentences: List of :class:`.Sentence` in a document.
37
38 :type tokens: list
39 :param tokens: List of :class:`.Token` from a document.
40
41 :type sentiment: :class:`Sentiment`
42 :param sentiment: The sentiment of a document.
43
44 :type entities: list
45 :param entities: List of :class:`~.language.entity.Entity`
46 found in a document.
47
48 :type language: str
49 :param language: The language used for the annotation.
50 """
51
52
53 class Encoding(object):
54 """The encoding type used to calculate offsets.
55
56 Represents the text encoding that the caller uses to process the output.
57 The API provides the beginning offsets for various outputs, such as tokens
58 and mentions.
59 """
60
61 NONE = 'NONE'
62 """Unspecified encoding type."""
63
64 UTF8 = 'UTF8'
65 """UTF-8 encoding type."""
66
67 UTF16 = 'UTF16'
68 """UTF-16 encoding type."""
69
70 UTF32 = 'UTF32'
71 """UTF-32 encoding type."""
72
73 @classmethod
74 def get_default(cls):
75 """Return the appropriate default encoding on this system.
76
77 :rtype: str
78 :returns: The correct default encoding on this system.
79 """
80 if sys.maxunicode == 65535:
81 return cls.UTF16
82 return cls.UTF32
83
84
85 class Document(object):
86 """Document to send to Google Cloud Natural Language API.
87
88 Represents either plain text or HTML, and the content is either
89 stored on the document or referred to in a Google Cloud Storage
90 object.
91
92 :type client: :class:`~google.cloud.language.client.Client`
93 :param client: A client which holds credentials and other
94 configuration.
95
96 :type content: str
97 :param content: (Optional) The document text content (either plain
98 text or HTML).
99
100 :type gcs_url: str
101 :param gcs_url: (Optional) The URL of the Google Cloud Storage object
102 holding the content. Of the form
103 ``gs://{bucket}/{blob-name}``.
104
105 :type doc_type: str
106 :param doc_type: (Optional) The type of text in the document.
107 Defaults to plain text. Can be one of
108 :attr:`~.Document.PLAIN_TEXT` or
109                       :attr:`~.Document.HTML`.
110
111 :type language: str
112 :param language: (Optional) The language of the document text.
113 Defaults to None (auto-detect).
114
115 :type encoding: str
116 :param encoding: (Optional) The encoding of the document text.
117                      Defaults to :meth:`Encoding.get_default`. Can be one of
118 :attr:`~.Encoding.UTF8`, :attr:`~.Encoding.UTF16`
119 or :attr:`~.Encoding.UTF32`.
120
121     :raises: :class:`~exceptions.ValueError` if both ``content`` and
122              ``gcs_url`` are specified, or if neither is specified.
123 """
124
125 TYPE_UNSPECIFIED = 'TYPE_UNSPECIFIED'
126 """Unspecified document type."""
127
128 PLAIN_TEXT = 'PLAIN_TEXT'
129 """Plain text document type."""
130
131 HTML = 'HTML'
132 """HTML document type."""
133
134 def __init__(self, client, content=None, gcs_url=None, doc_type=PLAIN_TEXT,
135 language=None, encoding=Encoding.get_default()):
136 if content is not None and gcs_url is not None:
137 raise ValueError('A Document cannot contain both local text and '
138 'a link to text in a Google Cloud Storage object')
139 if content is None and gcs_url is None:
140 raise ValueError('A Document must contain either local text or a '
141 'link to text in a Google Cloud Storage object')
142 self.client = client
143 self.content = content
144 self.gcs_url = gcs_url
145 self.doc_type = doc_type
146 self.language = language
147 self.encoding = encoding
148
149 def _to_dict(self):
150 """Helper to convert the current document into a dictionary.
151
152 To be used when constructing requests.
153
154 :rtype: dict
155 :returns: The Document value as a JSON dictionary.
156 """
157 info = {
158 'type': self.doc_type,
159 }
160 if self.language is not None:
161 info['language'] = self.language
162 if self.content is not None:
163 info['content'] = self.content
164 elif self.gcs_url is not None:
165 info['gcsContentUri'] = self.gcs_url
166 return info
167
168 def analyze_entities(self):
169 """Analyze the entities in the current document.
170
171         Finds named entities (currently proper names, as of August 2016)
172 in the text, entity types, salience, mentions for each entity, and
173 other properties.
174
175 .. _analyzeEntities: https://cloud.google.com/natural-language/\
176 reference/rest/v1/documents/analyzeEntities
177
178 See `analyzeEntities`_.
179
180 :rtype: :class:`~.language.entity.EntityResponse`
181 :returns: A representation of the entity response.
182 """
183 data = {
184 'document': self._to_dict(),
185 'encodingType': self.encoding,
186 }
187 api_response = self.client._connection.api_request(
188 method='POST', path='analyzeEntities', data=data)
189 return api_responses.EntityResponse.from_api_repr(api_response)
190
191 def analyze_sentiment(self):
192 """Analyze the sentiment in the current document.
193
194 .. _analyzeSentiment: https://cloud.google.com/natural-language/\
195 reference/rest/v1/documents/analyzeSentiment
196
197 See `analyzeSentiment`_.
198
199 :rtype: :class:`.SentimentResponse`
200 :returns: A representation of the sentiment response.
201 """
202 data = {'document': self._to_dict()}
203 api_response = self.client._connection.api_request(
204 method='POST', path='analyzeSentiment', data=data)
205 return api_responses.SentimentResponse.from_api_repr(api_response)
206
207 def analyze_syntax(self):
208 """Analyze the syntax in the current document.
209
210 .. _analyzeSyntax: https://cloud.google.com/natural-language/\
211 reference/rest/v1/documents/analyzeSyntax
212
213 See `analyzeSyntax`_.
214
215 :rtype: list
216 :returns: A list of :class:`~.language.syntax.Token` returned from
217 the API.
218 """
219 data = {
220 'document': self._to_dict(),
221 'encodingType': self.encoding,
222 }
223 api_response = self.client._connection.api_request(
224 method='POST', path='analyzeSyntax', data=data)
225 return api_responses.SyntaxResponse.from_api_repr(api_response)
226
227 def annotate_text(self, include_syntax=True, include_entities=True,
228 include_sentiment=True):
229 """Advanced natural language API: document syntax and other features.
230
231 Includes the full functionality of :meth:`analyze_entities` and
232 :meth:`analyze_sentiment`, enabled by the flags
233 ``include_entities`` and ``include_sentiment`` respectively.
234
235         In addition, ``include_syntax`` adds a new feature that analyzes
236         the document for semantic and syntactic information.
237
238 .. note::
239
240 This API is intended for users who are familiar with machine
241 learning and need in-depth text features to build upon.
242
243 .. _annotateText: https://cloud.google.com/natural-language/\
244 reference/rest/v1/documents/annotateText
245
246 See `annotateText`_.
247
248 :type include_syntax: bool
249 :param include_syntax: (Optional) Flag to enable syntax analysis
250 of the current document.
251
252 :type include_entities: bool
253 :param include_entities: (Optional) Flag to enable entity extraction
254 from the current document.
255
256 :type include_sentiment: bool
257 :param include_sentiment: (Optional) Flag to enable sentiment
258 analysis of the current document.
259
260 :rtype: :class:`Annotations`
261 :returns: A tuple of each of the four values returned from the API:
262 sentences, tokens, sentiment and entities.
263 """
264 features = {}
265 if include_syntax:
266 features['extractSyntax'] = True
267 if include_entities:
268 features['extractEntities'] = True
269 if include_sentiment:
270 features['extractDocumentSentiment'] = True
271
272 data = {
273 'document': self._to_dict(),
274 'features': features,
275 'encodingType': self.encoding,
276 }
277 api_response = self.client._connection.api_request(
278 method='POST', path='annotateText', data=data)
279
280 sentences = [Sentence.from_api_repr(sentence)
281 for sentence in api_response['sentences']]
282 tokens = [Token.from_api_repr(token)
283 for token in api_response['tokens']]
284 sentiment_info = api_response.get('documentSentiment')
285 if sentiment_info is None:
286 sentiment = None
287 else:
288 sentiment = Sentiment.from_api_repr(sentiment_info)
289 entities = [Entity.from_api_repr(entity)
290 for entity in api_response['entities']]
291 annotations = Annotations(
292 entities=entities,
293 language=api_response.get('language'),
294 sentences=sentences,
295 sentiment=sentiment,
296 tokens=tokens,
297 )
298 return annotations
299
[end of language/google/cloud/language/document.py]
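
A hedged usage sketch for the class above, assuming the package's top-level
``google.cloud.language.Client`` and application-default credentials; the
sample text is made up::

    from google.cloud import language
    from google.cloud.language.document import Document

    client = language.Client()
    document = Document(client, content=u'Google was founded in Menlo Park.')
    annotations = document.annotate_text()

    print(annotations.language, annotations.sentiment)
    for entity in annotations.entities:
        print(entity.name, entity.entity_type, entity.salience)
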
[start of language/google/cloud/language/entity.py]
1 # Copyright 2016-2017 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Definition for Google Cloud Natural Language API entities.
16
17 An entity is used to describe a proper name extracted from text.
18 """
19
20
21 class EntityType(object):
22 """List of possible entity types."""
23
24 UNKNOWN = 'UNKNOWN'
25 """Unknown entity type."""
26
27 PERSON = 'PERSON'
28 """Person entity type."""
29
30 LOCATION = 'LOCATION'
31 """Location entity type."""
32
33 ORGANIZATION = 'ORGANIZATION'
34 """Organization entity type."""
35
36 EVENT = 'EVENT'
37 """Event entity type."""
38
39 WORK_OF_ART = 'WORK_OF_ART'
40 """Work of art entity type."""
41
42 CONSUMER_GOOD = 'CONSUMER_GOOD'
43 """Consumer good entity type."""
44
45 OTHER = 'OTHER'
46 """Other entity type (i.e. known but not classified)."""
47
48
49 class Entity(object):
50 """A Google Cloud Natural Language API entity.
51
52 Represents a phrase in text that is a known entity, such as a person,
53 an organization, or location. The API associates information, such as
54 salience and mentions, with entities.
55
56 .. _Entity message: https://cloud.google.com/natural-language/\
57 reference/rest/v1/Entity
58 .. _EntityType enum: https://cloud.google.com/natural-language/\
59 reference/rest/v1/Entity#Type
60
61 See `Entity message`_.
62
63 :type name: str
64 :param name: The name / phrase identified as the entity.
65
66 :type entity_type: str
67 :param entity_type: The type of the entity. See `EntityType enum`_.
68
69 :type metadata: dict
70 :param metadata: The metadata associated with the entity.
71 Wikipedia URLs and Knowledge Graph MIDs are
72 provided, if available. The associated keys are
73 "wikipedia_url" and "mid", respectively.
74
75 :type salience: float
76 :param salience: The prominence of the entity / phrase within the text
77 containing it.
78
79 :type mentions: list
80 :param mentions: List of strings that mention the entity.
81 """
82
83 def __init__(self, name, entity_type, metadata, salience, mentions):
84 self.name = name
85 self.entity_type = entity_type
86 self.metadata = metadata
87 self.salience = salience
88 self.mentions = mentions
89
90 @classmethod
91 def from_api_repr(cls, payload):
92 """Convert an Entity from the JSON API into an :class:`Entity`.
93
94         :type payload: dict
95         :param payload: The value from the backend.
96
97 :rtype: :class:`Entity`
98 :returns: The entity parsed from the API representation.
99 """
100 name = payload['name']
101 entity_type = payload['type']
102 metadata = payload['metadata']
103 salience = payload['salience']
104 mentions = [value['text']['content']
105 for value in payload['mentions']]
106 return cls(name, entity_type, metadata, salience, mentions)
107
[end of language/google/cloud/language/entity.py]
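
A small sketch of ``Entity.from_api_repr`` with a hand-written payload shaped
like the API response it expects; every value below is made up::

    from google.cloud.language.entity import Entity, EntityType

    payload = {
        'name': 'Google',
        'type': EntityType.ORGANIZATION,
        'metadata': {'wikipedia_url': 'https://en.wikipedia.org/wiki/Google'},
        'salience': 0.88,
        'mentions': [{'text': {'content': 'Google', 'beginOffset': 0}}],
    }
    entity = Entity.from_api_repr(payload)
    print(entity.name, entity.entity_type, entity.salience)
    print(entity.mentions)   # only the mention text is kept: ['Google']
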
[start of storage/google/cloud/storage/blob.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=too-many-lines
16
17 """Create / interact with Google Cloud Storage blobs."""
18
19 import base64
20 import copy
21 import hashlib
22 from io import BytesIO
23 from io import UnsupportedOperation
24 import json
25 import mimetypes
26 import os
27 import time
28
29 import httplib2
30 import six
31 from six.moves.urllib.parse import quote
32
33 from google.cloud._helpers import _rfc3339_to_datetime
34 from google.cloud._helpers import _to_bytes
35 from google.cloud._helpers import _bytes_to_unicode
36 from google.cloud.credentials import generate_signed_url
37 from google.cloud.exceptions import NotFound
38 from google.cloud.exceptions import make_exception
39 from google.cloud.storage._helpers import _PropertyMixin
40 from google.cloud.storage._helpers import _scalar_property
41 from google.cloud.storage.acl import ObjectACL
42 from google.cloud.streaming.http_wrapper import Request
43 from google.cloud.streaming.http_wrapper import make_api_request
44 from google.cloud.streaming.transfer import Download
45 from google.cloud.streaming.transfer import RESUMABLE_UPLOAD
46 from google.cloud.streaming.transfer import Upload
47
48
49 _API_ACCESS_ENDPOINT = 'https://storage.googleapis.com'
50
51
52 class Blob(_PropertyMixin):
53 """A wrapper around Cloud Storage's concept of an ``Object``.
54
55 :type name: str
56 :param name: The name of the blob. This corresponds to the
57 unique path of the object in the bucket.
58
59 :type bucket: :class:`google.cloud.storage.bucket.Bucket`
60 :param bucket: The bucket to which this blob belongs.
61
62 :type chunk_size: int
63 :param chunk_size: The size of a chunk of data whenever iterating (1 MB).
64 This must be a multiple of 256 KB per the API
65 specification.
66
67 :type encryption_key: bytes
68 :param encryption_key:
69 Optional 32 byte encryption key for customer-supplied encryption.
70 See https://cloud.google.com/storage/docs/encryption#customer-supplied
71 """
72
73 _chunk_size = None # Default value for each instance.
74
75 _CHUNK_SIZE_MULTIPLE = 256 * 1024
76 """Number (256 KB, in bytes) that must divide the chunk size."""
77
78 _STORAGE_CLASSES = (
79 'NEARLINE',
80 'MULTI_REGIONAL',
81 'REGIONAL',
82 'COLDLINE',
83 'STANDARD', # alias for MULTI_REGIONAL/REGIONAL, based on location
84 )
85 """Allowed values for :attr:`storage_class`.
86
87 See:
88 https://cloud.google.com/storage/docs/json_api/v1/objects#storageClass
89 https://cloud.google.com/storage/docs/per-object-storage-class
90
91 .. note::
92 This list does not include 'DURABLE_REDUCED_AVAILABILITY', which
93         is only documented for buckets (and deprecated).
94
95 .. note::
96 The documentation does *not* mention 'STANDARD', but it is the value
97 assigned by the back-end for objects created in buckets with 'STANDARD'
98 set as their 'storage_class'.
99 """
100
101 def __init__(self, name, bucket, chunk_size=None, encryption_key=None):
102 super(Blob, self).__init__(name=name)
103
104 self.chunk_size = chunk_size # Check that setter accepts value.
105 self.bucket = bucket
106 self._acl = ObjectACL(self)
107 self._encryption_key = encryption_key
108
109 @property
110 def chunk_size(self):
111 """Get the blob's default chunk size.
112
113 :rtype: int or ``NoneType``
114 :returns: The current blob's chunk size, if it is set.
115 """
116 return self._chunk_size
117
118 @chunk_size.setter
119 def chunk_size(self, value):
120 """Set the blob's default chunk size.
121
122 :type value: int
123 :param value: (Optional) The current blob's chunk size, if it is set.
124
125 :raises: :class:`ValueError` if ``value`` is not ``None`` and is not a
126 multiple of 256 KB.
127 """
128 if value is not None and value % self._CHUNK_SIZE_MULTIPLE != 0:
129 raise ValueError('Chunk size must be a multiple of %d.' % (
130 self._CHUNK_SIZE_MULTIPLE,))
131 self._chunk_size = value
132
133 @staticmethod
134 def path_helper(bucket_path, blob_name):
135 """Relative URL path for a blob.
136
137 :type bucket_path: str
138 :param bucket_path: The URL path for a bucket.
139
140 :type blob_name: str
141 :param blob_name: The name of the blob.
142
143 :rtype: str
144 :returns: The relative URL path for ``blob_name``.
145 """
146 return bucket_path + '/o/' + quote(blob_name, safe='')
147
148 @property
149 def acl(self):
150 """Create our ACL on demand."""
151 return self._acl
152
153 def __repr__(self):
154 if self.bucket:
155 bucket_name = self.bucket.name
156 else:
157 bucket_name = None
158
159 return '<Blob: %s, %s>' % (bucket_name, self.name)
160
161 @property
162 def path(self):
163 """Getter property for the URL path to this Blob.
164
165 :rtype: str
166 :returns: The URL path to this Blob.
167 """
168 if not self.name:
169 raise ValueError('Cannot determine path without a blob name.')
170
171 return self.path_helper(self.bucket.path, self.name)
172
173 @property
174 def client(self):
175 """The client bound to this blob."""
176 return self.bucket.client
177
178 @property
179 def public_url(self):
180 """The public URL for this blob's object.
181
182         :rtype: str
183 :returns: The public URL for this blob.
184 """
185 return '{storage_base_url}/{bucket_name}/{quoted_name}'.format(
186 storage_base_url='https://storage.googleapis.com',
187 bucket_name=self.bucket.name,
188 quoted_name=quote(self.name, safe=''))
189
190 def generate_signed_url(self, expiration, method='GET',
191 content_type=None,
192 generation=None, response_disposition=None,
193 response_type=None, client=None, credentials=None):
194 """Generates a signed URL for this blob.
195
196 .. note::
197
198 If you are on Google Compute Engine, you can't generate a signed
199 URL. Follow `Issue 922`_ for updates on this. If you'd like to
200 be able to generate a signed URL from GCE, you can use a standard
201 service account from a JSON file rather than a GCE service account.
202
203 .. _Issue 922: https://github.com/GoogleCloudPlatform/\
204 google-cloud-python/issues/922
205
206 If you have a blob that you want to allow access to for a set
207 amount of time, you can use this method to generate a URL that
208 is only valid within a certain time period.
209
210 This is particularly useful if you don't want publicly
211 accessible blobs, but don't want to require users to explicitly
212 log in.
213
214 :type expiration: int, long, datetime.datetime, datetime.timedelta
215 :param expiration: When the signed URL should expire.
216
217 :type method: str
218 :param method: The HTTP verb that will be used when requesting the URL.
219
220 :type content_type: str
221 :param content_type: (Optional) The content type of the object
222 referenced by ``resource``.
223
224 :type generation: str
225 :param generation: (Optional) A value that indicates which generation
226 of the resource to fetch.
227
228 :type response_disposition: str
229 :param response_disposition: (Optional) Content disposition of
230 responses to requests for the signed URL.
231 For example, to enable the signed URL
232                                      to initiate a download of ``blob.png``, use
233 the value
234 ``'attachment; filename=blob.png'``.
235
236 :type response_type: str
237 :param response_type: (Optional) Content type of responses to requests
238                               for the signed URL. Used to override the content
239 type of the underlying blob/object.
240
241 :type client: :class:`~google.cloud.storage.client.Client` or
242 ``NoneType``
243 :param client: (Optional) The client to use. If not passed, falls back
244 to the ``client`` stored on the blob's bucket.
245
246
247 :type credentials: :class:`oauth2client.client.OAuth2Credentials` or
248 :class:`NoneType`
249 :param credentials: (Optional) The OAuth2 credentials to use to sign
250 the URL. Defaults to the credentials stored on the
251 client used.
252
253 :rtype: str
254 :returns: A signed URL you can use to access the resource
255 until expiration.
256 """
257 resource = '/{bucket_name}/{quoted_name}'.format(
258 bucket_name=self.bucket.name,
259 quoted_name=quote(self.name, safe=''))
260
261 if credentials is None:
262 client = self._require_client(client)
263 credentials = client._base_connection.credentials
264
265 return generate_signed_url(
266 credentials, resource=resource,
267 api_access_endpoint=_API_ACCESS_ENDPOINT,
268 expiration=expiration, method=method,
269 content_type=content_type,
270 response_type=response_type,
271 response_disposition=response_disposition,
272 generation=generation)
273
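    # Hedged usage sketch (not part of the original module): handing out
    # temporary read access via a signed URL. The bucket name, object name,
    # and one-hour lifetime below are made up.
    #
    #   import datetime
    #   from google.cloud import storage
    #
    #   client = storage.Client()        # needs signing-capable credentials
    #   blob = client.bucket('my-bucket').blob('reports/2017-01.csv')
    #   url = blob.generate_signed_url(
    #       expiration=datetime.timedelta(hours=1), method='GET')
    #   # ``url`` can be shared with clients that hold no GCS credentials.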
274 def exists(self, client=None):
275 """Determines whether or not this blob exists.
276
277 :type client: :class:`~google.cloud.storage.client.Client` or
278 ``NoneType``
279 :param client: Optional. The client to use. If not passed, falls back
280 to the ``client`` stored on the blob's bucket.
281
282 :rtype: bool
283 :returns: True if the blob exists in Cloud Storage.
284 """
285 client = self._require_client(client)
286 try:
287 # We only need the status code (200 or not) so we seek to
288 # minimize the returned payload.
289 query_params = {'fields': 'name'}
290 # We intentionally pass `_target_object=None` since fields=name
291 # would limit the local properties.
292 client._connection.api_request(
293 method='GET', path=self.path,
294 query_params=query_params, _target_object=None)
295 # NOTE: This will not fail immediately in a batch. However, when
296 # Batch.finish() is called, the resulting `NotFound` will be
297 # raised.
298 return True
299 except NotFound:
300 return False
301
302 def delete(self, client=None):
303 """Deletes a blob from Cloud Storage.
304
305 :type client: :class:`~google.cloud.storage.client.Client` or
306 ``NoneType``
307 :param client: Optional. The client to use. If not passed, falls back
308 to the ``client`` stored on the blob's bucket.
309
310 :rtype: :class:`Blob`
311 :returns: The blob that was just deleted.
312 :raises: :class:`google.cloud.exceptions.NotFound`
313 (propagated from
314 :meth:`google.cloud.storage.bucket.Bucket.delete_blob`).
315 """
316 return self.bucket.delete_blob(self.name, client=client)
317
318 def download_to_file(self, file_obj, client=None):
319 """Download the contents of this blob into a file-like object.
320
321 .. note::
322
323 If the server-set property, :attr:`media_link`, is not yet
324 initialized, makes an additional API request to load it.
325
326 Downloading a file that has been encrypted with a `customer-supplied`_
327 encryption key:
328
329 .. literalinclude:: storage_snippets.py
330 :start-after: [START download_to_file]
331 :end-before: [END download_to_file]
332
333 The ``encryption_key`` should be a str or bytes with a length of at
334 least 32.
335
336 .. _customer-supplied: https://cloud.google.com/storage/docs/\
337 encryption#customer-supplied
338
339 :type file_obj: file
340 :param file_obj: A file handle to which to write the blob's data.
341
342 :type client: :class:`~google.cloud.storage.client.Client` or
343 ``NoneType``
344 :param client: Optional. The client to use. If not passed, falls back
345 to the ``client`` stored on the blob's bucket.
346
347 :raises: :class:`google.cloud.exceptions.NotFound`
348 """
349 client = self._require_client(client)
350 if self.media_link is None: # not yet loaded
351 self.reload()
352
353 download_url = self.media_link
354
355 # Use apitools 'Download' facility.
356 download = Download.from_stream(file_obj)
357
358 if self.chunk_size is not None:
359 download.chunksize = self.chunk_size
360
361 headers = _get_encryption_headers(self._encryption_key)
362
363 request = Request(download_url, 'GET', headers)
364
365         # Use ``_base_connection`` rather than ``_connection`` since the current
366 # connection may be a batch. A batch wraps a client's connection,
367 # but does not store the ``http`` object. The rest (API_BASE_URL and
368 # build_api_url) are also defined on the Batch class, but we just
369 # use the wrapped connection since it has all three (http,
370 # API_BASE_URL and build_api_url).
371 download.initialize_download(request, client._base_connection.http)
372
373 def download_to_filename(self, filename, client=None):
374 """Download the contents of this blob into a named file.
375
376 :type filename: str
377 :param filename: A filename to be passed to ``open``.
378
379 :type client: :class:`~google.cloud.storage.client.Client` or
380 ``NoneType``
381 :param client: Optional. The client to use. If not passed, falls back
382 to the ``client`` stored on the blob's bucket.
383
384 :raises: :class:`google.cloud.exceptions.NotFound`
385 """
386 with open(filename, 'wb') as file_obj:
387 self.download_to_file(file_obj, client=client)
388
389 mtime = time.mktime(self.updated.timetuple())
390 os.utime(file_obj.name, (mtime, mtime))
391
392 def download_as_string(self, client=None):
393 """Download the contents of this blob as a string.
394
395 :type client: :class:`~google.cloud.storage.client.Client` or
396 ``NoneType``
397 :param client: Optional. The client to use. If not passed, falls back
398 to the ``client`` stored on the blob's bucket.
399
400 :rtype: bytes
401 :returns: The data stored in this blob.
402 :raises: :class:`google.cloud.exceptions.NotFound`
403 """
404 string_buffer = BytesIO()
405 self.download_to_file(string_buffer, client=client)
406 return string_buffer.getvalue()
407
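    # Hedged usage sketch (not part of the original module): the download
    # helpers above in action; the bucket and object names are made up.
    #
    #   from google.cloud import storage
    #
    #   client = storage.Client()
    #   blob = client.bucket('my-bucket').blob('reports/2017-01.csv')
    #   blob.download_to_filename('/tmp/2017-01.csv')   # also sets file mtime
    #   data = blob.download_as_string()                # returns bytes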
408 def _create_upload(
409 self, client, file_obj=None, size=None, content_type=None,
410 chunk_size=None, strategy=None, extra_headers=None):
411 """Helper for upload methods.
412
413 Creates a :class:`google.cloud.core.streaming.Upload` object to handle
414 the details of uploading a file to Cloud Storage.
415
416 :type client: :class:`~google.cloud.storage.client.Client` or
417 ``NoneType``
418 :param client: Optional. The client to use. If not passed, falls back
419 to the ``client`` stored on the blob's bucket.
420
421 :type file_obj: file
422 :param file_obj: A file handle open for reading.
423
424 :type size: int
425 :param size: The size of the upload, in bytes.
426
427 :type content_type: str
428 :param content_type: Optional type of content being uploaded.
429
430 :type chunk_size: int
431 :param chunk_size: The size of each chunk when doing resumable and
432 media uploads.
433
434 :type strategy: str
435 :param strategy: Either
436 :attr:`google.cloud.core.streaming.transfer.SIMPLE_UPLOAD` or
437 :attr:`google.cloud.core.streaming.transfer.RESUMABLE_UPLOAD`.
438
439 :type extra_headers: dict
440 :param extra_headers: Additional headers to be sent with the upload
441 initiation request.
442
443 :rtype: Tuple[google.cloud.core.streaming.Upload,
444 google.cloud.core.streaming.Request,
445 google.cloud.core.streaming.Response]
446 :returns: The Upload object, the upload HTTP request, and the upload
447 initiation response.
448 """
449
450 client = self._require_client(client)
451
452         # Use ``_base_connection`` rather than ``_connection`` since the current
453 # connection may be a batch. A batch wraps a client's connection,
454 # but does not store the ``http`` object. The rest (API_BASE_URL and
455 # build_api_url) are also defined on the Batch class, but we just
456 # use the wrapped connection since it has all three (http,
457 # API_BASE_URL and build_api_url).
458 connection = client._base_connection
459
460 content_type = (content_type or self._properties.get('contentType') or
461 'application/octet-stream')
462
463 headers = {
464 'Accept': 'application/json',
465 'Accept-Encoding': 'gzip, deflate',
466 'User-Agent': connection.USER_AGENT,
467 }
468
469 if extra_headers:
470 headers.update(extra_headers)
471
472 headers.update(_get_encryption_headers(self._encryption_key))
473
474 # Use apitools' Upload functionality
475 upload = Upload(
476 file_obj, content_type, total_size=size, auto_transfer=False)
477
478 if chunk_size is not None:
479 upload.chunksize = chunk_size
480
481 if strategy is not None:
482 upload.strategy = RESUMABLE_UPLOAD
483
484 url_builder = _UrlBuilder(
485 bucket_name=self.bucket.name,
486 object_name=self.name)
487 upload_config = _UploadConfig()
488
489 # Temporary URL until strategy is determined.
490 base_url = connection.API_BASE_URL + '/upload'
491 upload_url = connection.build_api_url(
492 api_base_url=base_url,
493 path=self.bucket.path + '/o')
494
495 # Configure the upload request parameters.
496 request = Request(upload_url, 'POST', headers)
497 upload.configure_request(upload_config, request, url_builder)
498
499 # Configure final URL
500 query_params = url_builder.query_params
501 base_url = connection.API_BASE_URL + '/upload'
502 request.url = connection.build_api_url(
503 api_base_url=base_url,
504 path=self.bucket.path + '/o',
505 query_params=query_params)
506
507 # Start the upload session
508 response = upload.initialize_upload(request, connection.http)
509
510 return upload, request, response
511
512 @staticmethod
513 def _check_response_error(request, http_response):
514 """Helper for :meth:`upload_from_file`."""
515 info = http_response.info
516 status = int(info['status'])
517 if not 200 <= status < 300:
518 faux_response = httplib2.Response({'status': status})
519 raise make_exception(faux_response, http_response.content,
520 error_info=request.url)
521
522 def upload_from_file(self, file_obj, rewind=False, size=None,
523 content_type=None, num_retries=6, client=None):
524 """Upload the contents of this blob from a file-like object.
525
526 The content type of the upload will either be
527 - The value passed in to the function (if any)
528 - The value stored on the current blob
529 - The default value of 'application/octet-stream'
530
531 .. note::
532 The effect of uploading to an existing blob depends on the
533 "versioning" and "lifecycle" policies defined on the blob's
534 bucket. In the absence of those policies, upload will
535 overwrite any existing contents.
536
537 See the `object versioning
538 <https://cloud.google.com/storage/docs/object-versioning>`_ and
539 `lifecycle <https://cloud.google.com/storage/docs/lifecycle>`_
540 API documents for details.
541
542 Uploading a file with a `customer-supplied`_ encryption key:
543
544 .. literalinclude:: storage_snippets.py
545 :start-after: [START upload_from_file]
546 :end-before: [END upload_from_file]
547
548 The ``encryption_key`` should be a str or bytes with a length of at
549 least 32.
550
551 .. _customer-supplied: https://cloud.google.com/storage/docs/\
552 encryption#customer-supplied
553
554 :type file_obj: file
555 :param file_obj: A file handle open for reading.
556
557 :type rewind: bool
558 :param rewind: If True, seek to the beginning of the file handle before
559 writing the file to Cloud Storage.
560
561 :type size: int
562 :param size: The number of bytes to read from the file handle.
563 If not provided, we'll try to guess the size using
564 :func:`os.fstat`. (If the file handle is not from the
565 filesystem this won't be possible.)
566
567 :type content_type: str
568 :param content_type: Optional type of content being uploaded.
569
570 :type num_retries: int
571 :param num_retries: Number of upload retries. Defaults to 6.
572
573 :type client: :class:`~google.cloud.storage.client.Client` or
574 ``NoneType``
575 :param client: Optional. The client to use. If not passed, falls back
576 to the ``client`` stored on the blob's bucket.
577
578 :raises: :class:`ValueError` if size is not passed in and can not be
579 determined; :class:`google.cloud.exceptions.GoogleCloudError`
580 if the upload response returns an error status.
581 """
582 client = self._require_client(client)
583         # Use ``_base_connection`` rather than ``_connection`` since the current
584 # connection may be a batch. A batch wraps a client's connection,
585 # but does not store the ``http`` object. The rest (API_BASE_URL and
586 # build_api_url) are also defined on the Batch class, but we just
587 # use the wrapped connection since it has all three (http,
588 # API_BASE_URL and build_api_url).
589 connection = client._base_connection
590
591 # Rewind the file if desired.
592 if rewind:
593 file_obj.seek(0, os.SEEK_SET)
594
595 # Get the basic stats about the file.
596 total_bytes = size
597 if total_bytes is None:
598 if hasattr(file_obj, 'fileno'):
599 try:
600 total_bytes = os.fstat(file_obj.fileno()).st_size
601 except (OSError, UnsupportedOperation):
602 pass # Assuming fd is not an actual file (maybe socket).
603
604 chunk_size = None
605 strategy = None
606 if self.chunk_size is not None:
607 chunk_size = self.chunk_size
608
609 if total_bytes is None:
610 strategy = RESUMABLE_UPLOAD
611 elif total_bytes is None:
612 raise ValueError('total bytes could not be determined. Please '
613 'pass an explicit size, or supply a chunk size '
614 'for a streaming transfer.')
615
616 upload, request, _ = self._create_upload(
617 client, file_obj=file_obj, size=total_bytes,
618 content_type=content_type, chunk_size=chunk_size,
619 strategy=strategy)
620
621 if upload.strategy == RESUMABLE_UPLOAD:
622 http_response = upload.stream_file(use_chunks=True)
623 else:
624 http_response = make_api_request(
625 connection.http, request, retries=num_retries)
626
627 self._check_response_error(request, http_response)
628 response_content = http_response.content
629
630 if not isinstance(response_content,
631 six.string_types): # pragma: NO COVER Python3
632 response_content = response_content.decode('utf-8')
633 self._set_properties(json.loads(response_content))
634
635 def upload_from_filename(self, filename, content_type=None, client=None):
636 """Upload this blob's contents from the content of a named file.
637
638 The content type of the upload will either be
639 - The value passed in to the function (if any)
640 - The value stored on the current blob
641 - The value given by mimetypes.guess_type
642
643 .. note::
644 The effect of uploading to an existing blob depends on the
645 "versioning" and "lifecycle" policies defined on the blob's
646 bucket. In the absence of those policies, upload will
647 overwrite any existing contents.
648
649 See the `object versioning
650 <https://cloud.google.com/storage/docs/object-versioning>`_ and
651 `lifecycle <https://cloud.google.com/storage/docs/lifecycle>`_
652 API documents for details.
653
654 :type filename: str
655 :param filename: The path to the file.
656
657 :type content_type: str
658 :param content_type: Optional type of content being uploaded.
659
660 :type client: :class:`~google.cloud.storage.client.Client` or
661 ``NoneType``
662 :param client: Optional. The client to use. If not passed, falls back
663 to the ``client`` stored on the blob's bucket.
664 """
665 content_type = content_type or self._properties.get('contentType')
666 if content_type is None:
667 content_type, _ = mimetypes.guess_type(filename)
668
669 with open(filename, 'rb') as file_obj:
670 self.upload_from_file(
671 file_obj, content_type=content_type, client=client)
672
673 def upload_from_string(self, data, content_type='text/plain', client=None):
674 """Upload contents of this blob from the provided string.
675
676 .. note::
677 The effect of uploading to an existing blob depends on the
678 "versioning" and "lifecycle" policies defined on the blob's
679 bucket. In the absence of those policies, upload will
680 overwrite any existing contents.
681
682 See the `object versioning
683 <https://cloud.google.com/storage/docs/object-versioning>`_ and
684 `lifecycle <https://cloud.google.com/storage/docs/lifecycle>`_
685 API documents for details.
686
687 :type data: bytes or str
688 :param data: The data to store in this blob. If the value is
689 text, it will be encoded as UTF-8.
690
691 :type content_type: str
692 :param content_type: Optional type of content being uploaded. Defaults
693 to ``'text/plain'``.
694
695 :type client: :class:`~google.cloud.storage.client.Client` or
696 ``NoneType``
697 :param client: Optional. The client to use. If not passed, falls back
698 to the ``client`` stored on the blob's bucket.
699 """
700 if isinstance(data, six.text_type):
701 data = data.encode('utf-8')
702 string_buffer = BytesIO()
703 string_buffer.write(data)
704 self.upload_from_file(
705 file_obj=string_buffer, rewind=True, size=len(data),
706 content_type=content_type, client=client)
707
708 def create_resumable_upload_session(
709 self,
710 content_type=None,
711 size=None,
712 origin=None,
713 client=None):
714 """Create a resumable upload session.
715
716 Resumable upload sessions allow you to start an upload session from
717 one client and complete the session in another. This method is called
718 by the initiator to set the metadata and limits. The initiator then
719 passes the session URL to the client that will upload the binary data.
720 The client performs a PUT request on the session URL to complete the
721 upload. This process allows untrusted clients to upload to an
722 access-controlled bucket. For more details, see the
723 `documentation on signed URLs`_.
724
725 .. _documentation on signed URLs: https://cloud.google.com/storage\
726 /docs/access-control/signed-urls#signing-resumable
727
728 The content type of the upload will either be
729 - The value passed in to the function (if any)
730 - The value stored on the current blob
731 - The default value of 'application/octet-stream'
732
733 .. note::
734 The effect of uploading to an existing blob depends on the
735 "versioning" and "lifecycle" policies defined on the blob's
736 bucket. In the absence of those policies, upload will
737 overwrite any existing contents.
738
739 See the `object versioning
740 <https://cloud.google.com/storage/docs/object-versioning>`_ and
741 `lifecycle <https://cloud.google.com/storage/docs/lifecycle>`_
742 API documents for details.
743
744 If :attr:`encryption_key` is set, the blob will be `encrypted`_.
745
746 .. _encrypted: https://cloud.google.com/storage/docs/\
747 encryption#customer-supplied
748
749 :type size: int
750 :param size: Optional, the maximum number of bytes that can be
751 uploaded using this session. If the size is not known when creating
752 the session, this should be left blank.
753
754 :type content_type: str
755 :param content_type: Optional type of content being uploaded. This can
756 be used to restrict the allowed file type that can be uploaded
757             to the session.
758
759 :type origin: str
760 :param origin: Optional origin. If set, the upload can only be
761 completed by a user-agent that uploads from the given origin. This
762 can be useful when passing the session to a web client.
763
764 :type client: :class:`~google.cloud.storage.client.Client` or
765 ``NoneType``
766 :param client: Optional. The client to use. If not passed, falls back
767 to the ``client`` stored on the blob's bucket.
768
769 :rtype: str
770 :returns: The resumable upload session URL. The upload can be
771 completed by making an HTTP PUT request with the file's contents.
772
773 :raises: :class:`google.cloud.exceptions.GoogleCloudError`
774 if the session creation response returns an error status.
775 """
776
777 extra_headers = {}
778
779 if origin is not None:
780 # This header is specifically for client-side uploads, it
781 # determines the origins allowed for CORS.
782 extra_headers['Origin'] = origin
783
784 _, _, start_response = self._create_upload(
785 client,
786 size=size,
787 content_type=content_type,
788 strategy=RESUMABLE_UPLOAD,
789 extra_headers=extra_headers)
790
791 # The location header contains the session URL. This can be used
792 # to continue the upload.
793 resumable_upload_session_url = start_response.info['location']
794
795 return resumable_upload_session_url
796
797 def make_public(self, client=None):
798 """Make this blob public giving all users read access.
799
800 :type client: :class:`~google.cloud.storage.client.Client` or
801 ``NoneType``
802 :param client: Optional. The client to use. If not passed, falls back
803 to the ``client`` stored on the blob's bucket.
804 """
805 self.acl.all().grant_read()
806 self.acl.save(client=client)
807
808 def compose(self, sources, client=None):
809 """Concatenate source blobs into this one.
810
811 :type sources: list of :class:`Blob`
812 :param sources: blobs whose contents will be composed into this blob.
813
814 :type client: :class:`~google.cloud.storage.client.Client` or
815 ``NoneType``
816 :param client: Optional. The client to use. If not passed, falls back
817 to the ``client`` stored on the blob's bucket.
818
819 :raises: :exc:`ValueError` if this blob does not have its
820 :attr:`content_type` set.
821 """
822 if self.content_type is None:
823 raise ValueError("Destination 'content_type' not set.")
824 client = self._require_client(client)
825 request = {
826 'sourceObjects': [{'name': source.name} for source in sources],
827 'destination': self._properties.copy(),
828 }
829 api_response = client._connection.api_request(
830 method='POST', path=self.path + '/compose', data=request,
831 _target_object=self)
832 self._set_properties(api_response)
833
834 def rewrite(self, source, token=None, client=None):
835 """Rewrite source blob into this one.
836
837 :type source: :class:`Blob`
838 :param source: blob whose contents will be rewritten into this blob.
839
840 :type token: str
841 :param token: Optional. Token returned from an earlier, not-completed
842 call to rewrite the same source blob. If passed,
843 result will include updated status, total bytes written.
844
845 :type client: :class:`~google.cloud.storage.client.Client` or
846 ``NoneType``
847 :param client: Optional. The client to use. If not passed, falls back
848 to the ``client`` stored on the blob's bucket.
849
850 :rtype: tuple
851 :returns: ``(token, bytes_rewritten, total_bytes)``, where ``token``
852 is a rewrite token (``None`` if the rewrite is complete),
853 ``bytes_rewritten`` is the number of bytes rewritten so far,
854 and ``total_bytes`` is the total number of bytes to be
855 rewritten.
856 """
857 client = self._require_client(client)
858 headers = _get_encryption_headers(self._encryption_key)
859 headers.update(_get_encryption_headers(
860 source._encryption_key, source=True))
861
862 if token:
863 query_params = {'rewriteToken': token}
864 else:
865 query_params = {}
866
867 api_response = client._connection.api_request(
868 method='POST', path=source.path + '/rewriteTo' + self.path,
869 query_params=query_params, data=self._properties, headers=headers,
870 _target_object=self)
871 self._set_properties(api_response['resource'])
872 rewritten = int(api_response['totalBytesRewritten'])
873 size = int(api_response['objectSize'])
874
875 if api_response['done']:
876 return None, rewritten, size
877
878 return api_response['rewriteToken'], rewritten, size
879
880 def update_storage_class(self, new_class, client=None):
881 """Update blob's storage class via a rewrite-in-place.
882
883 See:
884 https://cloud.google.com/storage/docs/per-object-storage-class
885
886 :type new_class: str
887 :param new_class: new storage class for the object
888
889 :type client: :class:`~google.cloud.storage.client.Client`
890 :param client: Optional. The client to use. If not passed, falls back
891 to the ``client`` stored on the blob's bucket.
892 """
893 if new_class not in self._STORAGE_CLASSES:
894 raise ValueError("Invalid storage class: %s" % (new_class,))
895
896 client = self._require_client(client)
897 headers = _get_encryption_headers(self._encryption_key)
898 headers.update(_get_encryption_headers(
899 self._encryption_key, source=True))
900
901 api_response = client._connection.api_request(
902 method='POST', path=self.path + '/rewriteTo' + self.path,
903 data={'storageClass': new_class}, headers=headers,
904 _target_object=self)
905 self._set_properties(api_response['resource'])
906
907 cache_control = _scalar_property('cacheControl')
908 """HTTP 'Cache-Control' header for this object.
909
910 See: https://tools.ietf.org/html/rfc7234#section-5.2 and
911 https://cloud.google.com/storage/docs/json_api/v1/objects
912
913 If the property is not set locally, returns ``None``.
914
915 :rtype: str or ``NoneType``
916 """
917
918 content_disposition = _scalar_property('contentDisposition')
919 """HTTP 'Content-Disposition' header for this object.
920
921 See: https://tools.ietf.org/html/rfc6266 and
922 https://cloud.google.com/storage/docs/json_api/v1/objects
923
924 If the property is not set locally, returns ``None``.
925
926 :rtype: str or ``NoneType``
927 """
928
929 content_encoding = _scalar_property('contentEncoding')
930 """HTTP 'Content-Encoding' header for this object.
931
932 See: https://tools.ietf.org/html/rfc7231#section-3.1.2.2 and
933 https://cloud.google.com/storage/docs/json_api/v1/objects
934
935 If the property is not set locally, returns ``None``.
936
937 :rtype: str or ``NoneType``
938 """
939
940 content_language = _scalar_property('contentLanguage')
941 """HTTP 'Content-Language' header for this object.
942
943 See: http://tools.ietf.org/html/bcp47 and
944 https://cloud.google.com/storage/docs/json_api/v1/objects
945
946 If the property is not set locally, returns ``None``.
947
948 :rtype: str or ``NoneType``
949 """
950
951 content_type = _scalar_property('contentType')
952 """HTTP 'Content-Type' header for this object.
953
954 See: https://tools.ietf.org/html/rfc2616#section-14.17 and
955 https://cloud.google.com/storage/docs/json_api/v1/objects
956
957 If the property is not set locally, returns ``None``.
958
959 :rtype: str or ``NoneType``
960 """
961
962 crc32c = _scalar_property('crc32c')
963 """CRC32C checksum for this object.
964
965 See: http://tools.ietf.org/html/rfc4960#appendix-B and
966 https://cloud.google.com/storage/docs/json_api/v1/objects
967
968 If the property is not set locally, returns ``None``.
969
970 :rtype: str or ``NoneType``
971 """
972
973 @property
974 def component_count(self):
975 """Number of underlying components that make up this object.
976
977 See: https://cloud.google.com/storage/docs/json_api/v1/objects
978
979 :rtype: int or ``NoneType``
980 :returns: The component count (in case of a composed object) or
981 ``None`` if the property is not set locally. This property
982 will not be set on objects not created via ``compose``.
983 """
984 component_count = self._properties.get('componentCount')
985 if component_count is not None:
986 return int(component_count)
987
988 @property
989 def etag(self):
990 """Retrieve the ETag for the object.
991
992 See: http://tools.ietf.org/html/rfc2616#section-3.11 and
993 https://cloud.google.com/storage/docs/json_api/v1/objects
994
995 :rtype: str or ``NoneType``
996 :returns: The blob etag or ``None`` if the property is not set locally.
997 """
998 return self._properties.get('etag')
999
1000 @property
1001 def generation(self):
1002 """Retrieve the generation for the object.
1003
1004 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1005
1006 :rtype: int or ``NoneType``
1007 :returns: The generation of the blob or ``None`` if the property
1008 is not set locally.
1009 """
1010 generation = self._properties.get('generation')
1011 if generation is not None:
1012 return int(generation)
1013
1014 @property
1015 def id(self):
1016 """Retrieve the ID for the object.
1017
1018 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1019
1020 :rtype: str or ``NoneType``
1021 :returns: The ID of the blob or ``None`` if the property is not
1022 set locally.
1023 """
1024 return self._properties.get('id')
1025
1026 md5_hash = _scalar_property('md5Hash')
1027 """MD5 hash for this object.
1028
1029     See: http://tools.ietf.org/html/rfc1321 and
1030 https://cloud.google.com/storage/docs/json_api/v1/objects
1031
1032 If the property is not set locally, returns ``None``.
1033
1034 :rtype: str or ``NoneType``
1035 """
1036
1037 @property
1038 def media_link(self):
1039 """Retrieve the media download URI for the object.
1040
1041 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1042
1043 :rtype: str or ``NoneType``
1044 :returns: The media link for the blob or ``None`` if the property is
1045 not set locally.
1046 """
1047 return self._properties.get('mediaLink')
1048
1049 @property
1050 def metadata(self):
1051 """Retrieve arbitrary/application specific metadata for the object.
1052
1053 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1054
1055 :rtype: dict or ``NoneType``
1056 :returns: The metadata associated with the blob or ``None`` if the
1057 property is not set locally.
1058 """
1059 return copy.deepcopy(self._properties.get('metadata'))
1060
1061 @metadata.setter
1062 def metadata(self, value):
1063 """Update arbitrary/application specific metadata for the object.
1064
1065 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1066
1067 :type value: dict
1068 :param value: (Optional) The blob metadata to set.
1069 """
1070 self._patch_property('metadata', value)
1071
1072 @property
1073 def metageneration(self):
1074 """Retrieve the metageneration for the object.
1075
1076 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1077
1078 :rtype: int or ``NoneType``
1079 :returns: The metageneration of the blob or ``None`` if the property
1080 is not set locally.
1081 """
1082 metageneration = self._properties.get('metageneration')
1083 if metageneration is not None:
1084 return int(metageneration)
1085
1086 @property
1087 def owner(self):
1088 """Retrieve info about the owner of the object.
1089
1090 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1091
1092 :rtype: dict or ``NoneType``
1093 :returns: Mapping of owner's role/ID. If the property is not set
1094 locally, returns ``None``.
1095 """
1096 return copy.deepcopy(self._properties.get('owner'))
1097
1098 @property
1099 def self_link(self):
1100 """Retrieve the URI for the object.
1101
1102 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1103
1104 :rtype: str or ``NoneType``
1105 :returns: The self link for the blob or ``None`` if the property is
1106 not set locally.
1107 """
1108 return self._properties.get('selfLink')
1109
1110 @property
1111 def size(self):
1112 """Size of the object, in bytes.
1113
1114 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1115
1116 :rtype: int or ``NoneType``
1117 :returns: The size of the blob or ``None`` if the property
1118 is not set locally.
1119 """
1120 size = self._properties.get('size')
1121 if size is not None:
1122 return int(size)
1123
1124 @property
1125 def storage_class(self):
1126 """Retrieve the storage class for the object.
1127
1128 See: https://cloud.google.com/storage/docs/storage-classes
1129
1130 :rtype: str or ``NoneType``
1131 :returns: If set, one of "MULTI_REGIONAL", "REGIONAL",
1132 "NEARLINE", "COLDLINE", "STANDARD", or
1133 "DURABLE_REDUCED_AVAILABILITY", else ``None``.
1134 """
1135 return self._properties.get('storageClass')
1136
1137 @property
1138 def time_deleted(self):
1139 """Retrieve the timestamp at which the object was deleted.
1140
1141 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1142
1143 :rtype: :class:`datetime.datetime` or ``NoneType``
1144 :returns: Datetime object parsed from RFC3339 valid timestamp, or
1145 ``None`` if the property is not set locally. If the blob has
1146 not been deleted, this will never be set.
1147 """
1148 value = self._properties.get('timeDeleted')
1149 if value is not None:
1150 return _rfc3339_to_datetime(value)
1151
1152 @property
1153 def time_created(self):
1154 """Retrieve the timestamp at which the object was created.
1155
1156 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1157
1158 :rtype: :class:`datetime.datetime` or ``NoneType``
1159 :returns: Datetime object parsed from RFC3339 valid timestamp, or
1160 ``None`` if the property is not set locally.
1161 """
1162 value = self._properties.get('timeCreated')
1163 if value is not None:
1164 return _rfc3339_to_datetime(value)
1165
1166 @property
1167 def updated(self):
1168 """Retrieve the timestamp at which the object was updated.
1169
1170 See: https://cloud.google.com/storage/docs/json_api/v1/objects
1171
1172 :rtype: :class:`datetime.datetime` or ``NoneType``
1173 :returns: Datetime object parsed from RFC3339 valid timestamp, or
1174 ``None`` if the property is not set locally.
1175 """
1176 value = self._properties.get('updated')
1177 if value is not None:
1178 return _rfc3339_to_datetime(value)
1179
1180
1181 class _UploadConfig(object):
1182 """Faux message FBO apitools' 'configure_request'.
1183
1184 Values extracted from apitools
1185 'samples/storage_sample/storage/storage_v1_client.py'
1186 """
1187 accept = ['*/*']
1188 max_size = None
1189 resumable_multipart = True
1190 resumable_path = u'/resumable/upload/storage/v1/b/{bucket}/o'
1191 simple_multipart = True
1192 simple_path = u'/upload/storage/v1/b/{bucket}/o'
1193
1194
1195 class _UrlBuilder(object):
1196 """Faux builder FBO apitools' 'configure_request'"""
1197 def __init__(self, bucket_name, object_name):
1198 self.query_params = {'name': object_name}
1199 self._bucket_name = bucket_name
1200 self._relative_path = ''
1201
1202
1203 def _get_encryption_headers(key, source=False):
1204 """Builds customer encryption key headers
1205
1206 :type key: bytes
1207 :param key: 32 byte key to build request key and hash.
1208
1209 :type source: bool
1210 :param source: If true, return headers for the "source" blob; otherwise,
1211 return headers for the "destination" blob.
1212
1213 :rtype: dict
1214 :returns: dict of HTTP headers being sent in request.
1215 """
1216 if key is None:
1217 return {}
1218
1219 key = _to_bytes(key)
1220 key_hash = hashlib.sha256(key).digest()
1221 key_hash = base64.b64encode(key_hash).rstrip()
1222 key = base64.b64encode(key).rstrip()
1223
1224 if source:
1225 prefix = 'X-Goog-Copy-Source-Encryption-'
1226 else:
1227 prefix = 'X-Goog-Encryption-'
1228
1229 return {
1230 prefix + 'Algorithm': 'AES256',
1231 prefix + 'Key': _bytes_to_unicode(key),
1232 prefix + 'Key-Sha256': _bytes_to_unicode(key_hash),
1233 }
1234
[end of storage/google/cloud/storage/blob.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| googleapis/google-cloud-python | ebb77fb029efc65273890cb17c4aa62f99d54607 | Language: support mention type in Entity.mentions.
[Currently](https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/language/google/cloud/language/entity.py#L79) the `mentions` property of an entity is only a list of strings, whereas it should be a list of objects containing the mention text and the mention type.
Furthermore, this change should add mention_type information to the mention documentation.
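Concretely, each element of `entity.mentions` should carry both the matched text span and whether the mention is a proper or common noun. A minimal sketch of the requested shape (it mirrors the classes the patch below introduces; the constructed values are purely illustrative):

```
class TextSpan(object):
    """A word or phrase plus its offset in the original document."""
    def __init__(self, content, begin_offset):
        self.content = content
        self.begin_offset = begin_offset


class Mention(object):
    """One occurrence of an entity in the analyzed text."""
    def __init__(self, text, mention_type):
        self.text = text                  # a TextSpan
        self.mention_type = mention_type  # e.g. 'PROPER' or 'COMMON'


# entity.mentions would then be a list of Mention objects instead of plain strings:
mentions = [Mention(TextSpan('Rome', 0), 'PROPER'),
            Mention(TextSpan('the city', 17), 'COMMON')]
for mention in mentions:
    print("%s -> %s" % (mention.text.content, mention.mention_type))
```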
| Adding the release blocking tag; this is a beta blocker. | 2017-03-16T16:21:51Z | <patch>
diff --git a/language/google/cloud/language/entity.py b/language/google/cloud/language/entity.py
--- a/language/google/cloud/language/entity.py
+++ b/language/google/cloud/language/entity.py
@@ -46,6 +46,80 @@ class EntityType(object):
"""Other entity type (i.e. known but not classified)."""
+class MentionType(object):
+ """List of possible mention types."""
+
+ TYPE_UNKNOWN = 'TYPE_UNKNOWN'
+ """Unknown mention type"""
+
+ PROPER = 'PROPER'
+ """Proper name"""
+
+ COMMON = 'COMMON'
+ """Common noun (or noun compound)"""
+
+
+class Mention(object):
+ """A Google Cloud Natural Language API mention.
+
+ Represents a mention for an entity in the text. Currently, proper noun
+ mentions are supported.
+ """
+ def __init__(self, text, mention_type):
+ self.text = text
+ self.mention_type = mention_type
+
+ def __str__(self):
+ return str(self.text)
+
+ @classmethod
+ def from_api_repr(cls, payload):
+ """Convert a Mention from the JSON API into an :class:`Mention`.
+
+        :type payload: dict
+        :param payload: The value from the backend.
+
+ :rtype: :class:`Mention`
+ :returns: The mention parsed from the API representation.
+ """
+ text = TextSpan.from_api_repr(payload['text'])
+ mention_type = payload['type']
+ return cls(text, mention_type)
+
+
+class TextSpan(object):
+ """A span of text from Google Cloud Natural Language API.
+
+ Represents a word or phrase of text, as well as its offset
+ from the original document.
+ """
+ def __init__(self, content, begin_offset):
+ self.content = content
+ self.begin_offset = begin_offset
+
+ def __str__(self):
+ """Return the string representation of this TextSpan.
+
+ :rtype: str
+ :returns: The text content
+ """
+ return self.content
+
+ @classmethod
+ def from_api_repr(cls, payload):
+ """Convert a TextSpan from the JSON API into an :class:`TextSpan`.
+
+        :type payload: dict
+        :param payload: The value from the backend.
+
+ :rtype: :class:`TextSpan`
+ :returns: The text span parsed from the API representation.
+ """
+ content = payload['content']
+ begin_offset = payload['beginOffset']
+ return cls(content=content, begin_offset=begin_offset)
+
+
class Entity(object):
"""A Google Cloud Natural Language API entity.
@@ -101,6 +175,5 @@ def from_api_repr(cls, payload):
entity_type = payload['type']
metadata = payload['metadata']
salience = payload['salience']
- mentions = [value['text']['content']
- for value in payload['mentions']]
+ mentions = [Mention.from_api_repr(val) for val in payload['mentions']]
return cls(name, entity_type, metadata, salience, mentions)
</patch> | [] | [] | |||
conan-io__conan-4003 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GNU Make generator
The generator at https://github.com/solvingj/conan-make_generator/blob/master/conanfile.py by @solvingj is almost it; I agree it could be built-in.
A consumer's Makefile could then pull in the generated variables behind a conditional:
```
ifneq ($(USE_CONAN),)
INC_PATHS += $(CONAN_INC_PATHS)
LD_PATHS += $(CONAN_LIB_PATHS)
LD_LIBS += $(CONAN_LIBS)
CXXFLAGS += $(CONAN_CPP_FLAGS)
CFLAGS += $(CONAN_CFLAGS)
DEFINES += $(CONAN_DEFINES)
LDFLAGS_SHARED += $(CONAN_SHAREDLINKFLAGS)
LDFLAGS_EXE += $(CONAN_EXELINKFLAGS)
C_SRCS += $(CONAN_C_SRCS)
CXX_SRCS += $(CONAN_CXX_SRCS)
endif
```
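For illustration, a built-in version could be little more than a generator that writes those `CONAN_*` variables out of the dependencies' cpp_info. A rough sketch, not the final implementation (it assumes the same `Generator` base class that custom generators already use, with its `filename`/`content` properties and `deps_build_info`; a real generator would also quote paths and prefix `-I`/`-L` where needed):

```
from conans.model import Generator


class MakeGenerator(Generator):
    @property
    def filename(self):
        return "conanbuildinfo.mak"

    @property
    def content(self):
        deps = self.deps_build_info  # aggregated cpp_info of all requirements
        lines = [
            "CONAN_INC_PATHS = %s" % " ".join(deps.include_paths),
            "CONAN_LIB_PATHS = %s" % " ".join(deps.lib_paths),
            "CONAN_LIBS = %s" % " ".join("-l%s" % lib for lib in deps.libs),
            "CONAN_DEFINES = %s" % " ".join("-D%s" % d for d in deps.defines),
            "CONAN_CFLAGS = %s" % " ".join(deps.cflags),
            "CONAN_CPP_FLAGS = %s" % " ".join(deps.cppflags),
            "CONAN_SHAREDLINKFLAGS = %s" % " ".join(deps.sharedlinkflags),
            "CONAN_EXELINKFLAGS = %s" % " ".join(deps.exelinkflags),
        ]
        return "\n".join(lines) + "\n"
```

The consumer's Makefile would then `include conanbuildinfo.mak`, and the conditional above keeps the build working when Conan is not used.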
</issue>
<code>
[start of README.rst]
1 Conan
2 =====
3
4 A distributed, open-source, C/C++ package manager.
5
6 +------------------------+-------------------------+
7 | **master** | **develop** |
8 +========================+=========================+
9 | |Build Status Master| | |Build Status Develop| |
10 +------------------------+-------------------------+
11
12
13 +------------------------+---------------------------+---------------------------------------------+
14 | **Coverage master** | **Coverage develop** | **Coverage graph** |
15 +========================+===========================+=============================================+
16 | |Master coverage| | |Develop coverage| | |Coverage graph| |
17 +------------------------+---------------------------+---------------------------------------------+
18
19
20 Setup
21 ======
22
23 From binaries
24 -------------
25
26 We have installers for `most platforms here <http://conan.io>`__ but you
27 can run **conan** from sources if you want.
28
29 From pip
30 --------
31
32 Conan is compatible with Python 2 and Python 3.
33
34 - Install pip following `pip docs`_.
35 - Install conan:
36
37 .. code-block:: bash
38
39 $ pip install conan
40
41 From Homebrew (macOS)
42 ---------------------
43
44 - Install Homebrew following `brew homepage`_.
45
46 .. code-block:: bash
47
48 $ brew update
49 $ brew install conan
50
51 From source
52 -----------
53
54 You can run the **conan** client and server on Windows, macOS, and Linux.
55
56 - **Install pip following** `pip docs`_.
57
58 - **Clone conan repository:**
59
60 .. code-block:: bash
61
62 $ git clone https://github.com/conan-io/conan.git
63
64 - **Install in editable mode**
65
66 .. code-block:: bash
67
68 $ cd conan && sudo pip install -e .
69
70 If you are on Windows, using ``sudo`` is not required.
71
72 - **You are ready, try to run conan:**
73
74 .. code-block::
75
76 $ conan --help
77
78 Consumer commands
79 install Installs the requirements specified in a conanfile (.py or .txt).
80 config Manages configuration. Edits the conan.conf or installs config files.
81 get Gets a file or list a directory of a given reference or package.
82 info Gets information about the dependency graph of a recipe.
83 search Searches package recipes and binaries in the local cache or in a remote.
84 Creator commands
85 new Creates a new package recipe template with a 'conanfile.py'.
86 create Builds a binary package for recipe (conanfile.py) located in current dir.
87 upload Uploads a recipe and binary packages to a remote.
88 export Copies the recipe (conanfile.py & associated files) to your local cache.
89 export-pkg Exports a recipe & creates a package with given files calling 'package'.
90 test Test a package, consuming it with a conanfile recipe with a test() method.
91 Package development commands
92 source Calls your local conanfile.py 'source()' method.
93 build Calls your local conanfile.py 'build()' method.
94 package Calls your local conanfile.py 'package()' method.
95 Misc commands
96 profile Lists profiles in the '.conan/profiles' folder, or shows profile details.
97 remote Manages the remote list and the package recipes associated to a remote.
98 user Authenticates against a remote with user/pass, caching the auth token.
99 imports Calls your local conanfile.py or conanfile.txt 'imports' method.
100 copy Copies conan recipes and packages to another user/channel.
101 remove Removes packages or binaries matching pattern from local cache or remote.
102 alias Creates and exports an 'alias recipe'.
103 download Downloads recipe and binaries to the local cache, without using settings.
104
105 Conan commands. Type "conan <command> -h" for help
106
107 Running the tests
108 =================
109
110 **Install python requirements**
111
112 .. code-block:: bash
113
114 $ pip install -r conans/requirements.txt
115 $ pip install -r conans/requirements_server.txt
116 $ pip install -r conans/requirements_dev.txt
117
118
119 Only on OSX:
120
121
122 .. code-block:: bash
123
124 $ pip install -r conans/requirements_osx.txt # You can omit this one if not running OSX
125
126
127 If you are not on Windows and you are not using a Python virtual environment, you will need to run these
128 commands using ``sudo``.
129
130 Before you can run the tests, you need to set a few environment variables first.
131
132 .. code-block:: bash
133
134 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
135
136 On Windows it would be (while being in the conan root directory):
137
138 .. code-block:: bash
139
140 $ set PYTHONPATH=.
141
142 Ensure that your ``cmake`` has version 2.8 or later. You can see the
143 version with the following command:
144
145 .. code-block:: bash
146
147 $ cmake --version
148
149 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your
150 operating system and your requirements.
151
152 These should work for the GCC from ``build-essential`` on Ubuntu 14.04:
153
154 .. code-block:: bash
155
156 $ export CONAN_COMPILER=gcc
157 $ export CONAN_COMPILER_VERSION=4.8
158
159 These should work for OS X:
160
161 .. code-block:: bash
162
163 $ export CONAN_COMPILER=clang
164 $ export CONAN_COMPILER_VERSION=3.5
165
166 Finally, there are some tests that use conan to package Go-lang
167 libraries, so you might **need to install go-lang** on your computer and
168 add it to the path.
169
170 You can run the actual tests like this:
171
172 .. code-block:: bash
173
174 $ nosetests .
175
176
177 There are a couple of test attributes defined, such as ``slow`` or ``golang``, that you can use
178 to filter out tests so that they are not executed:
179
180 .. code-block:: bash
181
182 $ nosetests . -a !golang
183
184 A few minutes later it should print ``OK``:
185
186 .. code-block:: bash
187
188 ............................................................................................
189 ----------------------------------------------------------------------
190 Ran 146 tests in 50.993s
191
192 OK
193
194 To run specific tests, you can specify the test name too, something like:
195
196 .. code-block:: bash
197
198 $ nosetests conans.test.command.config_install_test:ConfigInstallTest.install_file_test --nocapture
199
200 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests.
201
202 License
203 -------
204
205 `MIT LICENSE <./LICENSE.md>`__
206
207 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master
208 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master
209
210 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop
211 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop
212
213 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg
214 :target: https://codecov.io/gh/conan-io/conan/branch/master
215
216 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg
217 :target: https://codecov.io/gh/conan-io/conan/branch/develop
218
219 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg
220 :height: 50px
221 :width: 50 px
222 :alt: Conan develop coverage
223
224 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/
225
226 .. _`brew homepage`: http://brew.sh/
227
[end of README.rst]
[start of conans/client/build/autotools_environment.py]
1 import copy
2 import os
3 import platform
4
5 from conans.client import join_arguments
6 from conans.client.build.compiler_flags import (architecture_flag, format_libraries,
7 format_library_paths, format_defines,
8 sysroot_flag, format_include_paths,
9 build_type_flags, libcxx_flag, build_type_define,
10 libcxx_define, pic_flag, rpath_flags)
11 from conans.client.build.cppstd_flags import cppstd_flag
12 from conans.model.build_info import DEFAULT_BIN, DEFAULT_LIB, DEFAULT_INCLUDE, DEFAULT_SHARE
13 from conans.client.tools.oss import OSInfo
14 from conans.client.tools.win import unix_path
15 from conans.tools import (environment_append, args_to_string, cpu_count, cross_building,
16 detected_architecture, get_gnu_triplet)
17 from conans.errors import ConanException
18 from conans.util.files import get_abs_path
19
20
21 class AutoToolsBuildEnvironment(object):
22 """
23 - CPPFLAGS (C-PreProcesor-Flags NOT related with c++) (-I -D)
24     - CPPFLAGS (C-Preprocessor-Flags, NOT related to c++) (-I -D)
25 - CXXFLAGS (the CFLAGS for c++)
26 - LDFLAGS (-L, others like -m64 -m32) linker
27 """
28
29 def __init__(self, conanfile, win_bash=False, include_rpath_flags=False):
30 """
31         FIXME: include_rpath_flags CONAN 2.0 to default True? Could break many packages in conan-center
32 """
33 self._conanfile = conanfile
34 self._win_bash = win_bash
35 self._include_rpath_flags = include_rpath_flags
36 self.subsystem = OSInfo().detect_windows_subsystem() if self._win_bash else None
37 self._deps_cpp_info = conanfile.deps_cpp_info
38 self._os = conanfile.settings.get_safe("os")
39 self._arch = conanfile.settings.get_safe("arch")
40 self._build_type = conanfile.settings.get_safe("build_type")
41 self._compiler = conanfile.settings.get_safe("compiler")
42 self._compiler_version = conanfile.settings.get_safe("compiler.version")
43 self._libcxx = conanfile.settings.get_safe("compiler.libcxx")
44 self._cppstd = conanfile.settings.get_safe("cppstd")
45
46 # Set the generic objects before mapping to env vars to let the user
47 # alter some value
48 self.libs = copy.copy(self._deps_cpp_info.libs)
49 self.include_paths = copy.copy(self._deps_cpp_info.include_paths)
50 self.library_paths = copy.copy(self._deps_cpp_info.lib_paths)
51
52 self.defines = self._configure_defines()
53 # Will go to CFLAGS and CXXFLAGS ["-m64" "-m32", "-g", "-s"]
54 self.flags = self._configure_flags()
55 # Only c++ flags [-stdlib, -library], will go to CXXFLAGS
56 self.cxx_flags = self._configure_cxx_flags()
57 # cpp standard
58 self.cppstd_flag = cppstd_flag(self._compiler, self._compiler_version, self._cppstd)
59 # Not -L flags, ["-m64" "-m32"]
60 self.link_flags = self._configure_link_flags() # TEST!
61 # Precalculate -fPIC
62 self.fpic = self._configure_fpic()
63
64 # Precalculate build, host, target triplets
65 self.build, self.host, self.target = self._get_host_build_target_flags()
66
67 def _configure_fpic(self):
68 if str(self._os) not in ["Windows", "WindowsStore"]:
69 fpic = self._conanfile.options.get_safe("fPIC")
70 if fpic is not None:
71 shared = self._conanfile.options.get_safe("shared")
72 return True if (fpic or shared) else None
73
74 def _get_host_build_target_flags(self):
75         """Based on google search for build/host triplets, it could need a lot of
76         complex verification"""
77
78 arch_detected = detected_architecture() or platform.machine()
79 os_detected = platform.system()
80
81 if os_detected is None or arch_detected is None or self._arch is None or self._os is None:
82 return False, False, False
83 if not cross_building(self._conanfile.settings, os_detected, arch_detected):
84 return False, False, False
85
86 try:
87 build = get_gnu_triplet(os_detected, arch_detected, self._compiler)
88 except ConanException as exc:
89 self._conanfile.output.warn(str(exc))
90 build = None
91 try:
92 host = get_gnu_triplet(self._os, self._arch, self._compiler)
93 except ConanException as exc:
94 self._conanfile.output.warn(str(exc))
95 host = None
96 return build, host, None
97
98 def configure(self, configure_dir=None, args=None, build=None, host=None, target=None,
99 pkg_config_paths=None, vars=None, use_default_install_dirs=True):
100 """
101 :param pkg_config_paths: Optional paths to locate the *.pc files
102 :param configure_dir: Absolute or relative path to the configure script
103 :param args: Optional arguments to pass to configure.
104         :param build: The system on which the program will be built. "False" skips the --build flag
105         :param host: The system on which the generated program will run. "False" skips the --host flag
106         :param target: This option is only used to build a cross-compiling toolchain.
107                        "False" skips the --target flag.
108                        When the toolchain generates an executable program, this is the target
109                        system on which that program will run.
110
111 http://jingfenghanmax.blogspot.com.es/2010/09/configure-with-host-target-and-build.html
112 https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html
113 :param use_default_install_dirs: Use or not the defaulted installation dirs
114
115 """
116 if not self._conanfile.should_configure:
117 return
118 if configure_dir:
119 configure_dir = configure_dir.rstrip("/")
120 else:
121 configure_dir = "."
122
123 triplet_args = []
124
125 if build is not False: # Skipped by user
126 if build or self.build: # User specified value or automatic
127 triplet_args.append("--build=%s" % (build or self.build))
128
129 if host is not False: # Skipped by user
130 if host or self.host: # User specified value or automatic
131 triplet_args.append("--host=%s" % (host or self.host))
132
133 if target is not False: # Skipped by user
134 if target or self.target: # User specified value or automatic
135 triplet_args.append("--target=%s" % (target or self.target))
136
137 if pkg_config_paths:
138 pkg_env = {"PKG_CONFIG_PATH":
139 [os.pathsep.join(get_abs_path(f, self._conanfile.install_folder)
140 for f in pkg_config_paths)]}
141 else:
142 # If we are using pkg_config generator automate the pcs location, otherwise it could
143 # read wrong files
144 pkg_env = {"PKG_CONFIG_PATH": [self._conanfile.install_folder]} \
145 if "pkg_config" in self._conanfile.generators else {}
146
147 configure_dir = self._adjust_path(configure_dir)
148
149 if self._conanfile.package_folder is not None:
150 if not args:
151 args = ["--prefix=%s" % self._conanfile.package_folder.replace("\\", "/")]
152 elif not self._is_flag_in_args("prefix", args):
153 args.append("--prefix=%s" % self._conanfile.package_folder.replace("\\", "/"))
154
155 all_flags = ["bindir", "sbindir", "libexecdir", "libdir", "includedir", "oldincludedir",
156 "datarootdir"]
157 help_output = self._configure_help_output(configure_dir)
158 available_flags = [flag for flag in all_flags if "--%s" % flag in help_output]
159
160 if use_default_install_dirs:
161 for varname in ["bindir", "sbindir", "libexecdir"]:
162 if self._valid_configure_flag(varname, args, available_flags):
163 args.append("--%s=${prefix}/%s" % (varname, DEFAULT_BIN))
164 if self._valid_configure_flag("libdir", args, available_flags):
165 args.append("--libdir=${prefix}/%s" % DEFAULT_LIB)
166 for varname in ["includedir", "oldincludedir"]:
167 if self._valid_configure_flag(varname, args, available_flags):
168 args.append("--%s=${prefix}/%s" % (varname, DEFAULT_INCLUDE))
169 if self._valid_configure_flag("datarootdir", args, available_flags):
170 args.append("--datarootdir=${prefix}/%s" % DEFAULT_SHARE)
171
172 with environment_append(pkg_env):
173 with environment_append(vars or self.vars):
174 command = '%s/configure %s %s' % (configure_dir, args_to_string(args),
175 " ".join(triplet_args))
176 self._conanfile.output.info("Calling:\n > %s" % command)
177 self._conanfile.run(command, win_bash=self._win_bash, subsystem=self.subsystem)
178
179 def _configure_help_output(self, configure_path):
180 from six import StringIO # Python 2 and 3 compatible
181 mybuf = StringIO()
182 try:
183 self._conanfile.run("%s/configure --help" % configure_path, output=mybuf)
184 except ConanException as e:
185 self._conanfile.output.warn("Error running `configure --help`: %s" % e)
186 return ""
187 return mybuf.getvalue()
188
189 def _adjust_path(self, path):
190 if self._win_bash:
191 path = unix_path(path, path_flavor=self.subsystem)
192 return '"%s"' % path if " " in path else path
193
194 @staticmethod
195 def _valid_configure_flag(varname, args, available_flags):
196 return not AutoToolsBuildEnvironment._is_flag_in_args(varname, args) and \
197 varname in available_flags
198
199 @staticmethod
200 def _is_flag_in_args(varname, args):
201 flag = "--%s=" % varname
202 return any([flag in arg for arg in args])
203
204 def make(self, args="", make_program=None, target=None, vars=None):
205 if not self._conanfile.should_build:
206 return
207 make_program = os.getenv("CONAN_MAKE_PROGRAM") or make_program or "make"
208 with environment_append(vars or self.vars):
209 str_args = args_to_string(args)
210 cpu_count_option = ("-j%s" % cpu_count()) if "-j" not in str_args else None
211 self._conanfile.run("%s" % join_arguments([make_program, target, str_args,
212 cpu_count_option]),
213 win_bash=self._win_bash, subsystem=self.subsystem)
214
215 def install(self, args="", make_program=None, vars=None):
216 if not self._conanfile.should_install:
217 return
218 self.make(args=args, make_program=make_program, target="install", vars=vars)
219
220 def _configure_link_flags(self):
221 """Not the -L"""
222 ret = copy.copy(self._deps_cpp_info.sharedlinkflags)
223 ret.extend(self._deps_cpp_info.exelinkflags)
224 arch_flag = architecture_flag(compiler=self._compiler, arch=self._arch)
225 if arch_flag:
226 ret.append(arch_flag)
227
228 sysf = sysroot_flag(self._deps_cpp_info.sysroot, win_bash=self._win_bash,
229 subsystem=self.subsystem,
230 compiler=self._compiler)
231 if sysf:
232 ret.append(sysf)
233
234 if self._include_rpath_flags:
235 the_os = self._conanfile.settings.get_safe("os_build") or self._os
236 ret.extend(rpath_flags(the_os, self._compiler, self._deps_cpp_info.lib_paths))
237
238 return ret
239
240 def _configure_flags(self):
241 ret = copy.copy(self._deps_cpp_info.cflags)
242 arch_flag = architecture_flag(compiler=self._compiler, arch=self._arch)
243 if arch_flag:
244 ret.append(arch_flag)
245 btfs = build_type_flags(compiler=self._compiler, build_type=self._build_type,
246 vs_toolset=self._conanfile.settings.get_safe("compiler.toolset"))
247 if btfs:
248 ret.extend(btfs)
249 srf = sysroot_flag(self._deps_cpp_info.sysroot, win_bash=self._win_bash,
250 subsystem=self.subsystem,
251 compiler=self._compiler)
252 if srf:
253 ret.append(srf)
254
255 return ret
256
257 def _configure_cxx_flags(self):
258 ret = copy.copy(self._deps_cpp_info.cppflags)
259 cxxf = libcxx_flag(compiler=self._compiler, libcxx=self._libcxx)
260 if cxxf:
261 ret.append(cxxf)
262 return ret
263
264 def _configure_defines(self):
265 # requires declared defines
266 ret = copy.copy(self._deps_cpp_info.defines)
267
268 # Debug definition for GCC
269 btf = build_type_define(build_type=self._build_type)
270 if btf:
271 ret.append(btf)
272
273 # CXX11 ABI
274 abif = libcxx_define(compiler=self._compiler, libcxx=self._libcxx)
275 if abif:
276 ret.append(abif)
277 return ret
278
279 def _get_vars(self):
280 def append(*args):
281 ret = []
282 for arg in args:
283 if arg:
284 if isinstance(arg, list):
285 ret.extend(arg)
286 else:
287 ret.append(arg)
288 return ret
289
290 lib_paths = format_library_paths(self.library_paths, win_bash=self._win_bash,
291 subsystem=self.subsystem, compiler=self._compiler)
292 include_paths = format_include_paths(self.include_paths, win_bash=self._win_bash,
293 subsystem=self.subsystem, compiler=self._compiler)
294
295 ld_flags = append(self.link_flags, lib_paths)
296 cpp_flags = append(include_paths, format_defines(self.defines))
297 libs = format_libraries(self.libs, compiler=self._compiler)
298
299 tmp_compilation_flags = copy.copy(self.flags)
300 if self.fpic:
301 tmp_compilation_flags.append(pic_flag(self._compiler))
302
303 cxx_flags = append(tmp_compilation_flags, self.cxx_flags, self.cppstd_flag)
304 c_flags = tmp_compilation_flags
305
306 return ld_flags, cpp_flags, libs, cxx_flags, c_flags
307
308 @property
309 def vars_dict(self):
310
311 ld_flags, cpp_flags, libs, cxx_flags, c_flags = self._get_vars()
312
313 if os.environ.get("CPPFLAGS", None):
314 cpp_flags.append(os.environ.get("CPPFLAGS", None))
315
316 if os.environ.get("CXXFLAGS", None):
317 cxx_flags.append(os.environ.get("CXXFLAGS", None))
318
319 if os.environ.get("CFLAGS", None):
320 c_flags.append(os.environ.get("CFLAGS", None))
321
322 if os.environ.get("LDFLAGS", None):
323 ld_flags.append(os.environ.get("LDFLAGS", None))
324
325 if os.environ.get("LIBS", None):
326 libs.append(os.environ.get("LIBS", None))
327
328 ret = {"CPPFLAGS": cpp_flags,
329 "CXXFLAGS": cxx_flags,
330 "CFLAGS": c_flags,
331 "LDFLAGS": ld_flags,
332 "LIBS": libs,
333 }
334 return ret
335
336 @property
337 def vars(self):
338 ld_flags, cpp_flags, libs, cxx_flags, c_flags = self._get_vars()
339
340 cpp_flags = " ".join(cpp_flags) + _environ_value_prefix("CPPFLAGS")
341 cxx_flags = " ".join(cxx_flags) + _environ_value_prefix("CXXFLAGS")
342 cflags = " ".join(c_flags) + _environ_value_prefix("CFLAGS")
343 ldflags = " ".join(ld_flags) + _environ_value_prefix("LDFLAGS")
344 libs = " ".join(libs) + _environ_value_prefix("LIBS")
345
346 ret = {"CPPFLAGS": cpp_flags.strip(),
347 "CXXFLAGS": cxx_flags.strip(),
348 "CFLAGS": cflags.strip(),
349 "LDFLAGS": ldflags.strip(),
350 "LIBS": libs.strip(),
351 }
352 return ret
353
354
355 def _environ_value_prefix(var_name, prefix=" "):
356 if os.environ.get(var_name, ""):
357 return "%s%s" % (prefix, os.environ.get(var_name, ""))
358 else:
359 return ""
360
[end of conans/client/build/autotools_environment.py]
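The `AutoToolsBuildEnvironment` above is the piece that turns `deps_cpp_info` into the compiler and linker variables an autotools build expects; the requested Make generator would expose essentially the same information to plain Makefiles. For context, a recipe typically drives it like this (a minimal sketch with an illustrative recipe, following the configure/make/install methods defined above and assuming the usual top-level import):

```
from conans import ConanFile, AutoToolsBuildEnvironment


class ExampleConan(ConanFile):
    # Illustrative recipe skeleton, not a real package.
    name = "example"
    version = "0.1"
    settings = "os", "compiler", "build_type", "arch"

    def build(self):
        autotools = AutoToolsBuildEnvironment(self)
        autotools.configure(configure_dir=self.source_folder)
        autotools.make()
        autotools.install()
```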
[start of conans/client/build/compiler_flags.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 # Visual Studio cl options reference:
6 # https://msdn.microsoft.com/en-us/library/610ecb4h.aspx
7 # "Options are specified by either a forward slash (/) or a dash (–)."
8 # Here we use "-" better than "/" that produces invalid escaped chars using AutoTools.
9 # -LIBPATH, -D, -I, -ZI and so on.
10
11 """
12 from conans import tools
13 from conans.tools import unix_path
14
15
16 def rpath_flags(os_build, compiler, lib_paths):
17 if not os_build:
18 return []
19 if compiler in ("clang", "apple-clang", "gcc"):
20 rpath_separator = "," if os_build in ["Macos", "iOS", "watchOS", "tvOS"] else "="
21 return ['-Wl,-rpath%s"%s"' % (rpath_separator, x.replace("\\", "/"))
22 for x in lib_paths if x]
23 return []
24
25
26 def architecture_flag(compiler, arch):
27 """
28 returns flags specific to the target architecture and compiler
29 """
30 if not compiler or not arch:
31 return ""
32
33 if str(compiler) in ['gcc', 'apple-clang', 'clang', 'sun-cc']:
34 if str(arch) in ['x86_64', 'sparcv9']:
35 return '-m64'
36 elif str(arch) in ['x86', 'sparc']:
37 return '-m32'
38 return ""
39
40
41 def libcxx_define(compiler, libcxx):
42
43 if not compiler or not libcxx:
44 return ""
45
46 if str(compiler) in ['gcc', 'clang', 'apple-clang']:
47 if str(libcxx) == 'libstdc++':
48 return '_GLIBCXX_USE_CXX11_ABI=0'
49 elif str(libcxx) == 'libstdc++11':
50 return '_GLIBCXX_USE_CXX11_ABI=1'
51 return ""
52
53
54 def libcxx_flag(compiler, libcxx):
55 """
56 returns flag specific to the target C++ standard library
57 """
58 if not compiler or not libcxx:
59 return ""
60 if str(compiler) in ['clang', 'apple-clang']:
61 if str(libcxx) in ['libstdc++', 'libstdc++11']:
62 return '-stdlib=libstdc++'
63 elif str(libcxx) == 'libc++':
64 return '-stdlib=libc++'
65 elif str(compiler) == 'sun-cc':
66 return ({"libCstd": "-library=Cstd",
67 "libstdcxx": "-library=stdcxx4",
68 "libstlport": "-library=stlport4",
69 "libstdc++": "-library=stdcpp"}.get(libcxx, ""))
70 return ""
71
72
73 def pic_flag(compiler=None):
74 """
75 returns PIC (position independent code) flags, such as -fPIC
76 """
77 if not compiler or compiler == 'Visual Studio':
78 return ""
79 return '-fPIC'
80
81
82 def build_type_flags(compiler, build_type, vs_toolset=None):
83 """
84 returns flags specific to the build type (Debug, Release, etc.)
85 (-s, -g, /Zi, etc.)
86 """
87 if not compiler or not build_type:
88 return ""
89
90 # https://github.com/Kitware/CMake/blob/d7af8a34b67026feaee558433db3a835d6007e06/
91 # Modules/Platform/Windows-MSVC.cmake
92 if str(compiler) == 'Visual Studio':
93 if vs_toolset and "clang" in str(vs_toolset):
94 flags = {"Debug": ["-gline-tables-only", "-fno-inline", "-O0"],
95 "Release": ["-O2"],
96 "RelWithDebInfo": ["-gline-tables-only", "-O2", "-fno-inline"],
97 "MinSizeRel": []
98 }.get(build_type, ["-O2", "-Ob2"])
99 else:
100 flags = {"Debug": ["-Zi", "-Ob0", "-Od"],
101 "Release": ["-O2", "-Ob2"],
102 "RelWithDebInfo": ["-Zi", "-O2", "-Ob1"],
103 "MinSizeRel": ["-O1", "-Ob1"],
104 }.get(build_type, [])
105 return flags
106 else:
107 # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/
108 # Modules/Compiler/GNU.cmake
109 # clang include the gnu (overriding some things, but not build type) and apple clang
110 # overrides clang but it doesn't touch clang either
111 if str(compiler) in ["clang", "gcc", "apple-clang"]:
112 # FIXME: It is not clear that the "-s" is something related with the build type
113 # cmake is not adjusting it
114 # -s: Remove all symbol table and relocation information from the executable.
115 flags = {"Debug": ["-g"],
116 "Release": ["-O3", "-s"] if str(compiler) == "gcc" else ["-O3"],
117 "RelWithDebInfo": ["-O2", "-g"],
118 "MinSizeRel": ["-Os"],
119 }.get(build_type, [])
120 return flags
121 elif str(compiler) == "sun-cc":
122 # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/
123 # Modules/Compiler/SunPro-CXX.cmake
124 flags = {"Debug": ["-g"],
125 "Release": ["-xO3"],
126 "RelWithDebInfo": ["-xO2", "-g"],
127 "MinSizeRel": ["-xO2", "-xspace"],
128 }.get(build_type, [])
129 return flags
130
131 return ""
132
133
134 def build_type_define(build_type=None):
135 """
136 returns definitions specific to the build type (Debug, Release, etc.)
137 like DEBUG, _DEBUG, NDEBUG
138 """
139 return 'NDEBUG' if build_type == 'Release' else ""
140
141
142 def adjust_path(path, win_bash=False, subsystem=None, compiler=None):
143 """
144 adjusts path to be safely passed to the compiler command line
145 for Windows bash, ensures path is in format according to the subsystem
146 for path with spaces, places double quotes around it
147 converts slashes to backslashes, or vice versa
148 """
149 if str(compiler) == 'Visual Studio':
150 path = path.replace('/', '\\')
151 else:
152 path = path.replace('\\', '/')
153 if win_bash:
154 path = unix_path(path, subsystem)
155 return '"%s"' % path if ' ' in path else path
156
157
158 def sysroot_flag(sysroot, win_bash=False, subsystem=None, compiler=None):
159 if str(compiler) != 'Visual Studio' and sysroot:
160 sysroot = adjust_path(sysroot, win_bash=win_bash, subsystem=subsystem, compiler=compiler)
161 return '--sysroot=%s' % sysroot
162 return ""
163
164
165 def visual_runtime(runtime):
166 if runtime:
167 return "-%s" % runtime
168 return ""
169
170
171 def format_defines(defines):
172 return ["-D%s" % define for define in defines if define]
173
174
175 include_path_option = "-I"
176 visual_linker_option_separator = "-link" # Further options will apply to the linker
177
178
179 def format_include_paths(include_paths, win_bash=False, subsystem=None, compiler=None):
180 return ["%s%s" % (include_path_option, adjust_path(include_path, win_bash=win_bash,
181 subsystem=subsystem, compiler=compiler))
182 for include_path in include_paths if include_path]
183
184
185 def format_library_paths(library_paths, win_bash=False, subsystem=None, compiler=None):
186 pattern = "-LIBPATH:%s" if str(compiler) == 'Visual Studio' else "-L%s"
187 return [pattern % adjust_path(library_path, win_bash=win_bash,
188 subsystem=subsystem, compiler=compiler)
189 for library_path in library_paths if library_path]
190
191
192 def format_libraries(libraries, compiler=None):
193 result = []
194 for library in libraries:
195 if str(compiler) == 'Visual Studio':
196 if not library.endswith(".lib"):
197 library += ".lib"
198 result.append(library)
199 else:
200 result.append("-l%s" % library)
201 return result
202
203
204 def parallel_compiler_cl_flag():
205 cpu_count = tools.cpu_count()
206 return "/MP%s" % cpu_count
207
[end of conans/client/build/compiler_flags.py]
[start of conans/client/build/cppstd_flags.py]
1 from conans.model.version import Version
2
3
4 def cppstd_flag(compiler, compiler_version, cppstd):
5 if not compiler or not compiler_version or not cppstd:
6 return ""
7 func = {"gcc": _cppstd_gcc,
8 "clang": _cppstd_clang,
9 "apple-clang": _cppstd_apple_clang,
10 "Visual Studio": _cppstd_visualstudio}.get(str(compiler), None)
11 flag = None
12 if func:
13 flag = func(str(compiler_version), str(cppstd))
14 return flag
15
16
17 def cppstd_default(compiler, compiler_version):
18 default = {"gcc": _gcc_cppstd_default(compiler_version),
19 "clang": _clang_cppstd_default(compiler_version),
20 "apple-clang": "gnu98", # Confirmed in apple-clang 9.1 with a simple "auto i=1;"
21 "Visual Studio": _visual_cppstd_default(compiler_version)}.get(str(compiler), None)
22 return default
23
24
25 def _clang_cppstd_default(compiler_version):
26     # Official docs are wrong: in 6.0 the default is gnu14, following gcc's choice
27 return "gnu98" if Version(compiler_version) < "6" else "gnu14"
28
29
30 def _gcc_cppstd_default(compiler_version):
31 return "gnu98" if Version(compiler_version) < "6" else "gnu14"
32
33
34 def _visual_cppstd_default(compiler_version):
35 if Version(compiler_version) >= "14": # VS 2015 update 3 only
36 return "14"
37 return None
38
39
40 def _cppstd_visualstudio(visual_version, cppstd):
41 # https://docs.microsoft.com/en-us/cpp/build/reference/std-specify-language-standard-version
42 v14 = None
43 v17 = None
44 v20 = None
45
46 if Version(visual_version) >= "14":
47 v14 = "c++14"
48 v17 = "c++latest"
49 if Version(visual_version) >= "15":
50 v17 = "c++17"
51 v20 = "c++latest"
52
53 flag = {"14": v14, "17": v17, "20": v20}.get(str(cppstd), None)
54 return "/std:%s" % flag if flag else None
55
56
57 def _cppstd_apple_clang(clang_version, cppstd):
58 """
59 Inspired in:
60 https://github.com/Kitware/CMake/blob/master/Modules/Compiler/AppleClang-CXX.cmake
61 """
62
63 v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = v20 = vgnu20 = None
64
65 if Version(clang_version) >= "4.0":
66 v98 = "c++98"
67 vgnu98 = "gnu++98"
68 v11 = "c++11"
69 vgnu11 = "gnu++11"
70
71 if Version(clang_version) >= "6.1":
72 v14 = "c++14"
73 vgnu14 = "gnu++14"
74 elif Version(clang_version) >= "5.1":
75 v14 = "c++1y"
76 vgnu14 = "gnu++1y"
77
78 if Version(clang_version) >= "6.1":
79 v17 = "c++1z"
80 vgnu17 = "gnu++1z"
81
82 if Version(clang_version) >= "9.1":
83 # Not confirmed that it didn't work before 9.1 but 1z is still valid, so we are ok
84 v17 = "c++17"
85 vgnu17 = "gnu++17"
86
87 flag = {"98": v98, "gnu98": vgnu98,
88 "11": v11, "gnu11": vgnu11,
89 "14": v14, "gnu14": vgnu14,
90 "17": v17, "gnu17": vgnu17,
91 "20": v20, "gnu20": vgnu20}.get(cppstd, None)
92
93 return "-std=%s" % flag if flag else None
94
95
96 def _cppstd_clang(clang_version, cppstd):
97 """
98 Inspired in:
99 https://github.com/Kitware/CMake/blob/
100 1fe2dc5ef2a1f262b125a2ba6a85f624ce150dd2/Modules/Compiler/Clang-CXX.cmake
101
102 https://clang.llvm.org/cxx_status.html
103 """
104 v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = v20 = vgnu20 = None
105
106 if Version(clang_version) >= "2.1":
107 v98 = "c++98"
108 vgnu98 = "gnu++98"
109
110 if Version(clang_version) >= "3.1":
111 v11 = "c++11"
112 vgnu11 = "gnu++11"
113 elif Version(clang_version) >= "2.1":
114 v11 = "c++0x"
115 vgnu11 = "gnu++0x"
116
117 if Version(clang_version) >= "3.5":
118 v14 = "c++14"
119 vgnu14 = "gnu++14"
120 elif Version(clang_version) >= "3.4":
121 v14 = "c++1y"
122 vgnu14 = "gnu++1y"
123
124 if Version(clang_version) >= "5":
125 v17 = "c++17"
126 vgnu17 = "gnu++17"
127 elif Version(clang_version) >= "3.5":
128 v17 = "c++1z"
129 vgnu17 = "gnu++1z"
130
131 if Version(clang_version) >= "6":
132 v20 = "c++2a"
133 vgnu20 = "gnu++2a"
134
135 flag = {"98": v98, "gnu98": vgnu98,
136 "11": v11, "gnu11": vgnu11,
137 "14": v14, "gnu14": vgnu14,
138 "17": v17, "gnu17": vgnu17,
139 "20": v20, "gnu20": vgnu20}.get(cppstd, None)
140 return "-std=%s" % flag if flag else None
141
142
143 def _cppstd_gcc(gcc_version, cppstd):
144 """https://github.com/Kitware/CMake/blob/master/Modules/Compiler/GNU-CXX.cmake"""
145 # https://gcc.gnu.org/projects/cxx-status.html
146 v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = v20 = vgnu20 = None
147
148 if Version(gcc_version) >= "3.4":
149 v98 = "c++98"
150 vgnu98 = "gnu++98"
151
152 if Version(gcc_version) >= "4.7":
153 v11 = "c++11"
154 vgnu11 = "gnu++11"
155 elif Version(gcc_version) >= "4.3":
156 v11 = "c++0x"
157 vgnu11 = "gnu++0x"
158
159 if Version(gcc_version) >= "4.9":
160 v14 = "c++14"
161 vgnu14 = "gnu++14"
162 elif Version(gcc_version) >= "4.8":
163 v14 = "c++1y"
164 vgnu14 = "gnu++1y"
165
166 if Version(gcc_version) >= "5.1":
167 v17 = "c++1z"
168 vgnu17 = "gnu++1z"
169
170 if Version(gcc_version) >= "5.2": # Not sure if even in 5.1 gnu17 is valid, but gnu1z is
171 v17 = "c++17"
172 vgnu17 = "gnu++17"
173
174 if Version(gcc_version) >= "8":
175 v20 = "c++2a"
176 vgnu20 = "gnu++2a"
177
178 flag = {"98": v98, "gnu98": vgnu98,
179 "11": v11, "gnu11": vgnu11,
180 "14": v14, "gnu14": vgnu14,
181 "17": v17, "gnu17": vgnu17,
182 "20": v20, "gnu20": vgnu20}.get(cppstd)
183 return "-std=%s" % flag if flag else None
184
[end of conans/client/build/cppstd_flags.py]
[start of conans/client/cmd/export.py]
1 import ast
2 import os
3 import shutil
4 import six
5
6 from conans.client.cmd.export_linter import conan_linter
7 from conans.client.file_copier import FileCopier
8 from conans.client.output import ScopedOutput
9 from conans.errors import ConanException
10 from conans.model.manifest import FileTreeManifest
11 from conans.model.scm import SCM, get_scm_data
12 from conans.paths import CONAN_MANIFEST, CONANFILE
13 from conans.search.search import search_recipes
14 from conans.util.files import save, rmdir, is_dirty, set_dirty, mkdir, load
15 from conans.util.log import logger
16
17
18 def export_alias(reference, target_reference, client_cache):
19 if reference.name != target_reference.name:
20 raise ConanException("An alias can only be defined to a package with the same name")
21 conanfile = """
22 from conans import ConanFile
23
24 class AliasConanfile(ConanFile):
25 alias = "%s"
26 """ % str(target_reference)
27
28 export_path = client_cache.export(reference)
29 mkdir(export_path)
30 save(os.path.join(export_path, CONANFILE), conanfile)
31 mkdir(client_cache.export_sources(reference))
32 digest = FileTreeManifest.create(export_path)
33 digest.save(export_path)
34
35
36 def cmd_export(conanfile_path, conanfile, reference, keep_source, output, client_cache,
37 hook_manager):
38 """ Export the recipe
39 param conanfile_path: the original source directory of the user containing a
40 conanfile.py
41 """
42 hook_manager.execute("pre_export", conanfile=conanfile, conanfile_path=conanfile_path,
43 reference=reference)
44 logger.debug("Exporting %s" % conanfile_path)
45 output.highlight("Exporting package recipe")
46
47 conan_linter(conanfile_path, output)
48     # Maybe a platform check could be added, but it depends on the disk partition (case sensitivity)
49 conan_ref_str = str(reference)
50 refs = search_recipes(client_cache, conan_ref_str, ignorecase=True)
51 if refs and reference not in refs:
52 raise ConanException("Cannot export package with same name but different case\n"
53 "You exported '%s' but already existing '%s'"
54 % (conan_ref_str, " ".join(str(s) for s in refs)))
55
56 with client_cache.conanfile_write_lock(reference):
57 _export_conanfile(conanfile_path, conanfile.output, client_cache, conanfile, reference,
58 keep_source)
59 conanfile_cache_path = client_cache.conanfile(reference)
60 hook_manager.execute("post_export", conanfile=conanfile, conanfile_path=conanfile_cache_path,
61 reference=reference)
62
63
64 def _capture_export_scm_data(conanfile, conanfile_dir, destination_folder, output, paths, conan_ref):
65
66 scm_src_file = paths.scm_folder(conan_ref)
67 if os.path.exists(scm_src_file):
68 os.unlink(scm_src_file)
69
70 scm_data = get_scm_data(conanfile)
71 captured_revision = scm_data.capture_revision if scm_data else False
72
73 if not scm_data or not (scm_data.capture_origin or scm_data.capture_revision):
74 return None, captured_revision
75
76 scm = SCM(scm_data, conanfile_dir)
77
78 if scm_data.url == "auto":
79 origin = scm.get_qualified_remote_url()
80 if not origin:
81 raise ConanException("Repo origin cannot be deduced by 'auto'")
82 if scm.is_local_repository():
83 output.warn("Repo origin looks like a local path: %s" % origin)
84 output.success("Repo origin deduced by 'auto': %s" % origin)
85 scm_data.url = origin
86 if scm_data.revision == "auto":
87 if not scm.is_pristine():
88 output.warn("Repo status is not pristine: there might be modified files")
89 scm_data.revision = scm.get_revision()
90 output.success("Revision deduced by 'auto': %s" % scm_data.revision)
91
92 # Generate the scm_folder.txt file pointing to the src_path
93 src_path = scm.get_repo_root()
94 save(scm_src_file, src_path.replace("\\", "/"))
95 _replace_scm_data_in_conanfile(os.path.join(destination_folder, "conanfile.py"),
96 scm_data)
97
98 return scm_data, captured_revision
99
100
101 def _replace_scm_data_in_conanfile(conanfile_path, scm_data):
102 # Parsing and replacing the SCM field
103 content = load(conanfile_path)
104 headers = []
105
106 if six.PY2:
107 # Workaround for https://bugs.python.org/issue22221
108 lines_without_headers = []
109 lines = content.splitlines(True)
110 for line in lines:
111 if not lines_without_headers and line.startswith("#"):
112 headers.append(line)
113 else:
114 lines_without_headers.append(line)
115 content = ''.join(lines_without_headers)
116
117 lines = content.splitlines(True)
118 tree = ast.parse(content)
119 to_replace = []
120 for i_body, item in enumerate(tree.body):
121 if isinstance(item, ast.ClassDef):
122 statements = item.body
123 for i, stmt in enumerate(item.body):
124 if isinstance(stmt, ast.Assign) and len(stmt.targets) == 1:
125 if isinstance(stmt.targets[0], ast.Name) and stmt.targets[0].id == "scm":
126 try:
127 if i + 1 == len(statements): # Last statement in my ClassDef
128 if i_body + 1 == len(tree.body): # Last statement over all
129 next_line = len(lines)
130 else:
131 next_line = tree.body[i_body+1].lineno - 1
132 else:
133 next_line = statements[i+1].lineno - 1
134 except IndexError:
135 next_line = stmt.lineno
136 replace = [line for line in lines[(stmt.lineno-1):next_line]
137 if line.strip()]
138 to_replace.append("".join(replace).lstrip())
139 break
140 if len(to_replace) != 1:
141 raise ConanException("The conanfile.py defines more than one class level 'scm' attribute")
142
143 new_text = "scm = " + ",\n ".join(str(scm_data).split(",")) + "\n"
144 content = content.replace(to_replace[0], new_text)
145 content = content if not headers else ''.join(headers) + content
146 save(conanfile_path, content)
147
148
149 def _export_conanfile(conanfile_path, output, client_cache, conanfile, conan_ref, keep_source):
150
151 exports_folder = client_cache.export(conan_ref)
152 exports_source_folder = client_cache.export_sources(conan_ref, conanfile.short_paths)
153
154 previous_digest = _init_export_folder(exports_folder, exports_source_folder)
155 origin_folder = os.path.dirname(conanfile_path)
156 export_recipe(conanfile, origin_folder, exports_folder, output)
157 export_source(conanfile, origin_folder, exports_source_folder, output)
158 shutil.copy2(conanfile_path, os.path.join(exports_folder, CONANFILE))
159
160 scm_data, captured_revision = _capture_export_scm_data(conanfile,
161 os.path.dirname(conanfile_path),
162 exports_folder,
163 output, client_cache, conan_ref)
164
165 digest = FileTreeManifest.create(exports_folder, exports_source_folder)
166
167 if previous_digest and previous_digest == digest:
168 output.info("The stored package has not changed")
169 modified_recipe = False
170 digest = previous_digest # Use the old one, keep old timestamp
171 else:
172 output.success('A new %s version was exported' % CONANFILE)
173 output.info('Folder: %s' % exports_folder)
174 modified_recipe = True
175
176 digest.save(exports_folder)
177
178 revision = scm_data.revision if scm_data and captured_revision else digest.summary_hash
179 with client_cache.update_metadata(conan_ref) as metadata:
180 # Note that there is no time set, the time will come from the remote
181 metadata.recipe.revision = revision
182
183 # FIXME: Conan 2.0 Clear the registry entry if the recipe has changed
184 source = client_cache.source(conan_ref, conanfile.short_paths)
185 remove = False
186 if is_dirty(source):
187 output.info("Source folder is corrupted, forcing removal")
188 remove = True
189 elif modified_recipe and not keep_source and os.path.exists(source):
190 output.info("Package recipe modified in export, forcing source folder removal")
191 output.info("Use the --keep-source, -k option to skip it")
192 remove = True
193 if remove:
194 output.info("Removing 'source' folder, this can take a while for big packages")
195 try:
196             # remove only the internal source folder
197 rmdir(source)
198 except BaseException as e:
199 output.error("Unable to delete source folder. "
200 "Will be marked as corrupted for deletion")
201 output.warn(str(e))
202 set_dirty(source)
203
204
205 def _init_export_folder(destination_folder, destination_src_folder):
206 previous_digest = None
207 try:
208 if os.path.exists(destination_folder):
209 if os.path.exists(os.path.join(destination_folder, CONAN_MANIFEST)):
210 previous_digest = FileTreeManifest.load(destination_folder)
211 # Maybe here we want to invalidate cache
212 rmdir(destination_folder)
213 os.makedirs(destination_folder)
214 except Exception as e:
215 raise ConanException("Unable to create folder %s\n%s" % (destination_folder, str(e)))
216 try:
217 if os.path.exists(destination_src_folder):
218 rmdir(destination_src_folder)
219 os.makedirs(destination_src_folder)
220 except Exception as e:
221 raise ConanException("Unable to create folder %s\n%s" % (destination_src_folder, str(e)))
222 return previous_digest
223
224
225 def _classify_patterns(patterns):
226 patterns = patterns or []
227 included, excluded = [], []
228 for p in patterns:
229 if p.startswith("!"):
230 excluded.append(p[1:])
231 else:
232 included.append(p)
233 return included, excluded
234
235
236 def export_source(conanfile, origin_folder, destination_source_folder, output):
237 if isinstance(conanfile.exports_sources, str):
238 conanfile.exports_sources = (conanfile.exports_sources, )
239
240 included_sources, excluded_sources = _classify_patterns(conanfile.exports_sources)
241 copier = FileCopier(origin_folder, destination_source_folder)
242 for pattern in included_sources:
243 copier(pattern, links=True, excludes=excluded_sources)
244 package_output = ScopedOutput("%s exports_sources" % output.scope, output)
245 copier.report(package_output)
246
247
248 def export_recipe(conanfile, origin_folder, destination_folder, output):
249 if isinstance(conanfile.exports, str):
250 conanfile.exports = (conanfile.exports, )
251
252 included_exports, excluded_exports = _classify_patterns(conanfile.exports)
253
254 try:
255 os.unlink(os.path.join(origin_folder, CONANFILE + 'c'))
256 except OSError:
257 pass
258
259 copier = FileCopier(origin_folder, destination_folder)
260 for pattern in included_exports:
261 copier(pattern, links=True, excludes=excluded_exports)
262 package_output = ScopedOutput("%s exports" % output.scope, output)
263 copier.report(package_output)
264
[end of conans/client/cmd/export.py]
[start of conans/client/cmd/new.py]
1 import re
2 from conans.errors import ConanException
3 from conans.model.ref import ConanFileReference
4 from conans.client.cmd.new_ci import ci_get_files
5
6
7 conanfile = """from conans import ConanFile, CMake, tools
8
9
10 class {package_name}Conan(ConanFile):
11 name = "{name}"
12 version = "{version}"
13 license = "<Put the package license here>"
14 author = "<Put your name here> <And your email here>"
15 url = "<Package recipe repository url here, for issues about the package>"
16 description = "<Description of {package_name} here>"
17 topics = ("<Put some tag here>", "<here>", "<and here>")
18 settings = "os", "compiler", "build_type", "arch"
19 options = {{"shared": [True, False]}}
20 default_options = "shared=False"
21 generators = "cmake"
22
23 def source(self):
24 self.run("git clone https://github.com/memsharded/hello.git")
25 self.run("cd hello && git checkout static_shared")
26 # This small hack might be useful to guarantee proper /MT /MD linkage
27 # in MSVC if the packaged project doesn't have variables to set it
28 # properly
29 tools.replace_in_file("hello/CMakeLists.txt", "PROJECT(MyHello)",
30 '''PROJECT(MyHello)
31 include(${{CMAKE_BINARY_DIR}}/conanbuildinfo.cmake)
32 conan_basic_setup()''')
33
34 def build(self):
35 cmake = CMake(self)
36 cmake.configure(source_folder="hello")
37 cmake.build()
38
39 # Explicit way:
40 # self.run('cmake %s/hello %s'
41 # % (self.source_folder, cmake.command_line))
42 # self.run("cmake --build . %s" % cmake.build_config)
43
44 def package(self):
45 self.copy("*.h", dst="include", src="hello")
46 self.copy("*hello.lib", dst="lib", keep_path=False)
47 self.copy("*.dll", dst="bin", keep_path=False)
48 self.copy("*.so", dst="lib", keep_path=False)
49 self.copy("*.dylib", dst="lib", keep_path=False)
50 self.copy("*.a", dst="lib", keep_path=False)
51
52 def package_info(self):
53 self.cpp_info.libs = ["hello"]
54
55 """
56
57 conanfile_bare = """from conans import ConanFile, tools
58
59
60 class {package_name}Conan(ConanFile):
61 name = "{name}"
62 version = "{version}"
63 settings = "os", "compiler", "build_type", "arch"
64 description = "<Description of {package_name} here>"
65 url = "None"
66 license = "None"
67 author = "None"
68 topics = None
69
70 def package(self):
71 self.copy("*")
72
73 def package_info(self):
74 self.cpp_info.libs = tools.collect_libs(self)
75 """
76
77 conanfile_sources = """from conans import ConanFile, CMake
78
79
80 class {package_name}Conan(ConanFile):
81 name = "{name}"
82 version = "{version}"
83 license = "<Put the package license here>"
84 author = "<Put your name here> <And your email here>"
85 url = "<Package recipe repository url here, for issues about the package>"
86 description = "<Description of {package_name} here>"
87 topics = ("<Put some tag here>", "<here>", "<and here>")
88 settings = "os", "compiler", "build_type", "arch"
89 options = {{"shared": [True, False]}}
90 default_options = "shared=False"
91 generators = "cmake"
92 exports_sources = "src/*"
93
94 def build(self):
95 cmake = CMake(self)
96 cmake.configure(source_folder="src")
97 cmake.build()
98
99 # Explicit way:
100 # self.run('cmake %s/hello %s'
101 # % (self.source_folder, cmake.command_line))
102 # self.run("cmake --build . %s" % cmake.build_config)
103
104 def package(self):
105 self.copy("*.h", dst="include", src="src")
106 self.copy("*.lib", dst="lib", keep_path=False)
107 self.copy("*.dll", dst="bin", keep_path=False)
108 self.copy("*.dylib*", dst="lib", keep_path=False)
109 self.copy("*.so", dst="lib", keep_path=False)
110 self.copy("*.a", dst="lib", keep_path=False)
111
112 def package_info(self):
113 self.cpp_info.libs = ["hello"]
114 """
115
116 conanfile_header = """import os
117
118 from conans import ConanFile, tools
119
120
121 class {package_name}Conan(ConanFile):
122 name = "{name}"
123 version = "{version}"
124 license = "<Put the package license here>"
125 author = "<Put your name here> <And your email here>"
126 url = "<Package recipe repository url here, for issues about the package>"
127 description = "<Description of {package_name} here>"
128 topics = ("<Put some tag here>", "<here>", "<and here>")
129 no_copy_source = True
130 # No settings/options are necessary, this is header only
131
132 def source(self):
133 '''retrieval of the source code here. Remember you can also put the code
134 in the folder and use exports instead of retrieving it with this
135 source() method
136 '''
137 # self.run("git clone ...") or
138 # tools.download("url", "file.zip")
139 # tools.unzip("file.zip" )
140
141 def package(self):
142 self.copy("*.h", "include")
143 """
144
145
146 test_conanfile = """import os
147
148 from conans import ConanFile, CMake, tools
149
150
151 class {package_name}TestConan(ConanFile):
152 settings = "os", "compiler", "build_type", "arch"
153 generators = "cmake"
154
155 def build(self):
156 cmake = CMake(self)
157 # Current dir is "test_package/build/<build_id>" and CMakeLists.txt is
158 # in "test_package"
159 cmake.configure()
160 cmake.build()
161
162 def imports(self):
163 self.copy("*.dll", dst="bin", src="bin")
164 self.copy("*.dylib*", dst="bin", src="lib")
165 self.copy('*.so*', dst='bin', src='lib')
166
167 def test(self):
168 if not tools.cross_building(self.settings):
169 os.chdir("bin")
170 self.run(".%sexample" % os.sep)
171 """
172
173 test_cmake = """cmake_minimum_required(VERSION 2.8.12)
174 project(PackageTest CXX)
175
176 include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
177 conan_basic_setup()
178
179 add_executable(example example.cpp)
180 target_link_libraries(example ${CONAN_LIBS})
181
182 # CTest is a testing tool that can be used to test your project.
183 # enable_testing()
184 # add_test(NAME example
185 # WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/bin
186 # COMMAND example)
187 """
188
189 test_main = """#include <iostream>
190 #include "hello.h"
191
192 int main() {
193 hello();
194 }
195 """
196
197 hello_h = """#pragma once
198
199 #ifdef WIN32
200 #define HELLO_EXPORT __declspec(dllexport)
201 #else
202 #define HELLO_EXPORT
203 #endif
204
205 HELLO_EXPORT void hello();
206 """
207
208 hello_cpp = """#include <iostream>
209 #include "hello.h"
210
211 void hello(){
212 #ifdef NDEBUG
213 std::cout << "Hello World Release!" <<std::endl;
214 #else
215 std::cout << "Hello World Debug!" <<std::endl;
216 #endif
217 }
218 """
219
220 cmake = """cmake_minimum_required(VERSION 2.8)
221 project(MyHello CXX)
222
223 include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
224 conan_basic_setup()
225
226 add_library(hello hello.cpp)
227 """
228
229 gitignore_template = """
230 *.pyc
231 test_package/build
232
233 """
234
235
236 def cmd_new(ref, header=False, pure_c=False, test=False, exports_sources=False, bare=False,
237 visual_versions=None, linux_gcc_versions=None, linux_clang_versions=None, osx_clang_versions=None,
238 shared=None, upload_url=None, gitignore=None, gitlab_gcc_versions=None, gitlab_clang_versions=None,
239 circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None):
240 try:
241 tokens = ref.split("@")
242 name, version = tokens[0].split("/")
243 if len(tokens) == 2:
244 user, channel = tokens[1].split("/")
245 else:
246 user, channel = "user", "channel"
247
248 pattern = re.compile('[\W_]+')
249 package_name = pattern.sub('', name).capitalize()
250 except ValueError:
251 raise ConanException("Bad parameter, please use full package name,"
252 "e.g.: MyLib/1.2.3@user/testing")
253
254 # Validate it is a valid reference
255 ConanFileReference(name, version, user, channel)
256
257 if header and exports_sources:
258 raise ConanException("'header' and 'sources' are incompatible options")
259 if pure_c and (header or exports_sources):
260 raise ConanException("'pure_c' is incompatible with 'header' and 'sources'")
261 if bare and (header or exports_sources):
262 raise ConanException("'bare' is incompatible with 'header' and 'sources'")
263
264 if header:
265 files = {"conanfile.py": conanfile_header.format(name=name, version=version,
266 package_name=package_name)}
267 elif exports_sources:
268 files = {"conanfile.py": conanfile_sources.format(name=name, version=version,
269 package_name=package_name),
270 "src/hello.cpp": hello_cpp,
271 "src/hello.h": hello_h,
272 "src/CMakeLists.txt": cmake}
273 elif bare:
274 files = {"conanfile.py": conanfile_bare.format(name=name, version=version,
275 package_name=package_name)}
276 else:
277 files = {"conanfile.py": conanfile.format(name=name, version=version,
278 package_name=package_name)}
279 if pure_c:
280 config = " def configure(self):\n del self.settings.compiler.libcxx\n"
281 files["conanfile.py"] = files["conanfile.py"] + config
282
283 if test:
284 files["test_package/conanfile.py"] = test_conanfile.format(name=name, version=version,
285 user=user, channel=channel,
286 package_name=package_name)
287 files["test_package/CMakeLists.txt"] = test_cmake
288 files["test_package/example.cpp"] = test_main
289
290 if gitignore:
291 files[".gitignore"] = gitignore_template
292
293 files.update(ci_get_files(name, version, user, channel, visual_versions,
294 linux_gcc_versions, linux_clang_versions,
295 osx_clang_versions, shared, upload_url,
296 gitlab_gcc_versions, gitlab_clang_versions,
297 circleci_gcc_versions, circleci_clang_versions,
298 circleci_osx_versions))
299 return files
300
[end of conans/client/cmd/new.py]
[start of conans/client/generators/cmake_common.py]
1 _cmake_single_dep_vars = """set(CONAN_{dep}_ROOT{build_type} {deps.rootpath})
2 set(CONAN_INCLUDE_DIRS_{dep}{build_type} {deps.include_paths})
3 set(CONAN_LIB_DIRS_{dep}{build_type} {deps.lib_paths})
4 set(CONAN_BIN_DIRS_{dep}{build_type} {deps.bin_paths})
5 set(CONAN_RES_DIRS_{dep}{build_type} {deps.res_paths})
6 set(CONAN_SRC_DIRS_{dep}{build_type} {deps.src_paths})
7 set(CONAN_BUILD_DIRS_{dep}{build_type} {deps.build_paths})
8 set(CONAN_LIBS_{dep}{build_type} {deps.libs})
9 set(CONAN_DEFINES_{dep}{build_type} {deps.defines})
10 # COMPILE_DEFINITIONS are equal to CONAN_DEFINES without -D, for targets
11 set(CONAN_COMPILE_DEFINITIONS_{dep}{build_type} {deps.compile_definitions})
12
13 set(CONAN_C_FLAGS_{dep}{build_type} "{deps.cflags}")
14 set(CONAN_CXX_FLAGS_{dep}{build_type} "{deps.cppflags}")
15 set(CONAN_SHARED_LINKER_FLAGS_{dep}{build_type} "{deps.sharedlinkflags}")
16 set(CONAN_EXE_LINKER_FLAGS_{dep}{build_type} "{deps.exelinkflags}")
17
18 # For modern cmake targets we use the list variables (separated with ;)
19 set(CONAN_C_FLAGS_{dep}{build_type}_LIST "{deps.cflags_list}")
20 set(CONAN_CXX_FLAGS_{dep}{build_type}_LIST "{deps.cppflags_list}")
21 set(CONAN_SHARED_LINKER_FLAGS_{dep}{build_type}_LIST "{deps.sharedlinkflags_list}")
22 set(CONAN_EXE_LINKER_FLAGS_{dep}{build_type}_LIST "{deps.exelinkflags_list}")
23
24 """
25
26
27 def _cmake_string_representation(value):
28 """Escapes the specified string for use in a CMake command surrounded with double quotes
29 :param value the string to escape"""
30 return '"{0}"'.format(value.replace('\\', '\\\\')
31 .replace('$', '\\$')
32 .replace('"', '\\"'))
33
34
35 def _build_type_str(build_type):
36 if build_type:
37 return "_" + str(build_type).upper()
38 return ""
39
40
41 def cmake_user_info_vars(deps_user_info):
42 lines = []
43 for dep, the_vars in deps_user_info.items():
44 for name, value in the_vars.vars.items():
45 lines.append('set(CONAN_USER_%s_%s %s)'
46 % (dep.upper(), name, _cmake_string_representation(value)))
47 return "\n".join(lines)
48
49
50 def cmake_dependency_vars(name, deps, build_type=""):
51 build_type = _build_type_str(build_type)
52 return _cmake_single_dep_vars.format(dep=name.upper(), deps=deps, build_type=build_type)
53
54
55 _cmake_package_info = """set(CONAN_PACKAGE_NAME {name})
56 set(CONAN_PACKAGE_VERSION {version})
57 """
58
59
60 def cmake_package_info(name, version):
61 return _cmake_package_info.format(name=name, version=version)
62
63
64 def cmake_settings_info(settings):
65 settings_info = ""
66 for item in settings.items():
67 key, value = item
68 name = "CONAN_SETTINGS_%s" % key.upper().replace(".", "_")
69 settings_info += "set({key} {value})\n".format(key=name,
70 value=_cmake_string_representation(value))
71 return settings_info
72
73
74 def cmake_dependencies(dependencies, build_type=""):
75 build_type = _build_type_str(build_type)
76 dependencies = " ".join(dependencies)
77 return "set(CONAN_DEPENDENCIES{build_type} {dependencies})".format(dependencies=dependencies,
78 build_type=build_type)
79
80
81 _cmake_multi_dep_vars = """{cmd_line_args}
82 set(CONAN_INCLUDE_DIRS{build_type} {deps.include_paths} ${{CONAN_INCLUDE_DIRS{build_type}}})
83 set(CONAN_LIB_DIRS{build_type} {deps.lib_paths} ${{CONAN_LIB_DIRS{build_type}}})
84 set(CONAN_BIN_DIRS{build_type} {deps.bin_paths} ${{CONAN_BIN_DIRS{build_type}}})
85 set(CONAN_RES_DIRS{build_type} {deps.res_paths} ${{CONAN_RES_DIRS{build_type}}})
86 set(CONAN_LIBS{build_type} {deps.libs} ${{CONAN_LIBS{build_type}}})
87 set(CONAN_DEFINES{build_type} {deps.defines} ${{CONAN_DEFINES{build_type}}})
88 set(CONAN_CMAKE_MODULE_PATH{build_type} {deps.build_paths} ${{CONAN_CMAKE_MODULE_PATH{build_type}}})
89
90 set(CONAN_CXX_FLAGS{build_type} "{deps.cppflags} ${{CONAN_CXX_FLAGS{build_type}}}")
91 set(CONAN_SHARED_LINKER_FLAGS{build_type} "{deps.sharedlinkflags} ${{CONAN_SHARED_LINKER_FLAGS{build_type}}}")
92 set(CONAN_EXE_LINKER_FLAGS{build_type} "{deps.exelinkflags} ${{CONAN_EXE_LINKER_FLAGS{build_type}}}")
93 set(CONAN_C_FLAGS{build_type} "{deps.cflags} ${{CONAN_C_FLAGS{build_type}}}")
94 """
95
96
97 def cmake_global_vars(deps, build_type=""):
98 if not build_type:
99 cmd_line_args = """# Storing original command line args (CMake helper) flags
100 set(CONAN_CMD_CXX_FLAGS ${CONAN_CXX_FLAGS})
101
102 set(CONAN_CMD_SHARED_LINKER_FLAGS ${CONAN_SHARED_LINKER_FLAGS})
103 set(CONAN_CMD_C_FLAGS ${CONAN_C_FLAGS})
104 # Defining accumulated conan variables for all deps
105 """
106 else:
107 cmd_line_args = ""
108 return _cmake_multi_dep_vars.format(cmd_line_args=cmd_line_args,
109 deps=deps, build_type=_build_type_str(build_type))
110
111
112 _target_template = """
113 conan_package_library_targets("${{CONAN_LIBS_{uname}}}" "${{CONAN_LIB_DIRS_{uname}}}"
114 CONAN_PACKAGE_TARGETS_{uname} "{deps}" "" {pkg_name})
115 conan_package_library_targets("${{CONAN_LIBS_{uname}_DEBUG}}" "${{CONAN_LIB_DIRS_{uname}_DEBUG}}"
116 CONAN_PACKAGE_TARGETS_{uname}_DEBUG "{deps}" "debug" {pkg_name})
117 conan_package_library_targets("${{CONAN_LIBS_{uname}_RELEASE}}" "${{CONAN_LIB_DIRS_{uname}_RELEASE}}"
118 CONAN_PACKAGE_TARGETS_{uname}_RELEASE "{deps}" "release" {pkg_name})
119
120 add_library({name} INTERFACE IMPORTED)
121
122 # Property INTERFACE_LINK_FLAGS does not work, so it is necessary to add to INTERFACE_LINK_LIBRARIES
123 set_property(TARGET {name} PROPERTY INTERFACE_LINK_LIBRARIES ${{CONAN_PACKAGE_TARGETS_{uname}}} ${{CONAN_SHARED_LINKER_FLAGS_{uname}_LIST}} ${{CONAN_EXE_LINKER_FLAGS_{uname}_LIST}}
124 $<$<CONFIG:Release>:${{CONAN_PACKAGE_TARGETS_{uname}_RELEASE}} ${{CONAN_SHARED_LINKER_FLAGS_{uname}_RELEASE_LIST}} ${{CONAN_EXE_LINKER_FLAGS_{uname}_RELEASE_LIST}}>
125 $<$<CONFIG:RelWithDebInfo>:${{CONAN_PACKAGE_TARGETS_{uname}_RELEASE}} ${{CONAN_SHARED_LINKER_FLAGS_{uname}_RELEASE_LIST}} ${{CONAN_EXE_LINKER_FLAGS_{uname}_RELEASE_LIST}}>
126 $<$<CONFIG:MinSizeRel>:${{CONAN_PACKAGE_TARGETS_{uname}_RELEASE}} ${{CONAN_SHARED_LINKER_FLAGS_{uname}_RELEASE_LIST}} ${{CONAN_EXE_LINKER_FLAGS_{uname}_RELEASE_LIST}}>
127 $<$<CONFIG:Debug>:${{CONAN_PACKAGE_TARGETS_{uname}_DEBUG}} ${{CONAN_SHARED_LINKER_FLAGS_{uname}_DEBUG_LIST}} ${{CONAN_EXE_LINKER_FLAGS_{uname}_DEBUG_LIST}}>
128 {deps})
129 set_property(TARGET {name} PROPERTY INTERFACE_INCLUDE_DIRECTORIES ${{CONAN_INCLUDE_DIRS_{uname}}}
130 $<$<CONFIG:Release>:${{CONAN_INCLUDE_DIRS_{uname}_RELEASE}}>
131 $<$<CONFIG:RelWithDebInfo>:${{CONAN_INCLUDE_DIRS_{uname}_RELEASE}}>
132 $<$<CONFIG:MinSizeRel>:${{CONAN_INCLUDE_DIRS_{uname}_RELEASE}}>
133 $<$<CONFIG:Debug>:${{CONAN_INCLUDE_DIRS_{uname}_DEBUG}}>)
134 set_property(TARGET {name} PROPERTY INTERFACE_COMPILE_DEFINITIONS ${{CONAN_COMPILE_DEFINITIONS_{uname}}}
135 $<$<CONFIG:Release>:${{CONAN_COMPILE_DEFINITIONS_{uname}_RELEASE}}>
136 $<$<CONFIG:RelWithDebInfo>:${{CONAN_COMPILE_DEFINITIONS_{uname}_RELEASE}}>
137 $<$<CONFIG:MinSizeRel>:${{CONAN_COMPILE_DEFINITIONS_{uname}_RELEASE}}>
138 $<$<CONFIG:Debug>:${{CONAN_COMPILE_DEFINITIONS_{uname}_DEBUG}}>)
139 set_property(TARGET {name} PROPERTY INTERFACE_COMPILE_OPTIONS ${{CONAN_C_FLAGS_{uname}_LIST}} ${{CONAN_CXX_FLAGS_{uname}_LIST}}
140 $<$<CONFIG:Release>:${{CONAN_C_FLAGS_{uname}_RELEASE_LIST}} ${{CONAN_CXX_FLAGS_{uname}_RELEASE_LIST}}>
141 $<$<CONFIG:RelWithDebInfo>:${{CONAN_C_FLAGS_{uname}_RELEASE_LIST}} ${{CONAN_CXX_FLAGS_{uname}_RELEASE_LIST}}>
142 $<$<CONFIG:MinSizeRel>:${{CONAN_C_FLAGS_{uname}_RELEASE_LIST}} ${{CONAN_CXX_FLAGS_{uname}_RELEASE_LIST}}>
143 $<$<CONFIG:Debug>:${{CONAN_C_FLAGS_{uname}_DEBUG_LIST}} ${{CONAN_CXX_FLAGS_{uname}_DEBUG_LIST}}>)
144 """
145
146
147 def generate_targets_section(dependencies):
148 section = []
149 section.append("\n### Definition of macros and functions ###\n")
150 section.append('macro(conan_define_targets)\n'
151 ' if(${CMAKE_VERSION} VERSION_LESS "3.1.2")\n'
152 ' message(FATAL_ERROR "TARGETS not supported by your CMake version!")\n'
153 ' endif() # CMAKE > 3.x\n'
154 ' set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${CONAN_CMD_CXX_FLAGS}")\n'
155 ' set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${CONAN_CMD_C_FLAGS}")\n'
156 ' set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${CONAN_CMD_SHARED_LINKER_FLAGS}")\n')
157
158 for dep_name, dep_info in dependencies:
159 use_deps = ["CONAN_PKG::%s" % d for d in dep_info.public_deps]
160 deps = "" if not use_deps else " ".join(use_deps)
161 section.append(_target_template.format(name="CONAN_PKG::%s" % dep_name, deps=deps,
162 uname=dep_name.upper(), pkg_name=dep_name))
163
164 all_targets = " ".join(["CONAN_PKG::%s" % name for name, _ in dependencies])
165 section.append(' set(CONAN_TARGETS %s)\n' % all_targets)
166 section.append('endmacro()\n')
167 return section
168
169
170 _cmake_common_macros = """
171
172 function(conan_find_libraries_abs_path libraries package_libdir libraries_abs_path)
173 foreach(_LIBRARY_NAME ${libraries})
174 unset(CONAN_FOUND_LIBRARY CACHE)
175 find_library(CONAN_FOUND_LIBRARY NAME ${_LIBRARY_NAME} PATHS ${package_libdir}
176 NO_DEFAULT_PATH NO_CMAKE_FIND_ROOT_PATH)
177 if(CONAN_FOUND_LIBRARY)
178 message(STATUS "Library ${_LIBRARY_NAME} found ${CONAN_FOUND_LIBRARY}")
179 set(CONAN_FULLPATH_LIBS ${CONAN_FULLPATH_LIBS} ${CONAN_FOUND_LIBRARY})
180 else()
181 message(STATUS "Library ${_LIBRARY_NAME} not found in package, might be system one")
182 set(CONAN_FULLPATH_LIBS ${CONAN_FULLPATH_LIBS} ${_LIBRARY_NAME})
183 endif()
184 endforeach()
185 unset(CONAN_FOUND_LIBRARY CACHE)
186 set(${libraries_abs_path} ${CONAN_FULLPATH_LIBS} PARENT_SCOPE)
187 endfunction()
188
189 function(conan_package_library_targets libraries package_libdir libraries_abs_path deps build_type package_name)
190 foreach(_LIBRARY_NAME ${libraries})
191 unset(CONAN_FOUND_LIBRARY CACHE)
192 find_library(CONAN_FOUND_LIBRARY NAME ${_LIBRARY_NAME} PATHS ${package_libdir}
193 NO_DEFAULT_PATH NO_CMAKE_FIND_ROOT_PATH)
194 if(CONAN_FOUND_LIBRARY)
195 message(STATUS "Library ${_LIBRARY_NAME} found ${CONAN_FOUND_LIBRARY}")
196 set(_LIB_NAME CONAN_LIB::${package_name}_${_LIBRARY_NAME}${build_type})
197 add_library(${_LIB_NAME} UNKNOWN IMPORTED)
198 set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_FOUND_LIBRARY})
199 string(REPLACE " " ";" deps_list "${deps}")
200 set_property(TARGET ${_LIB_NAME} PROPERTY INTERFACE_LINK_LIBRARIES ${deps_list})
201 set(CONAN_FULLPATH_LIBS ${CONAN_FULLPATH_LIBS} ${_LIB_NAME})
202 else()
203 message(STATUS "Library ${_LIBRARY_NAME} not found in package, might be system one")
204 set(CONAN_FULLPATH_LIBS ${CONAN_FULLPATH_LIBS} ${_LIBRARY_NAME})
205 endif()
206 endforeach()
207 unset(CONAN_FOUND_LIBRARY CACHE)
208 set(${libraries_abs_path} ${CONAN_FULLPATH_LIBS} PARENT_SCOPE)
209 endfunction()
210
211 macro(conan_set_libcxx)
212 if(DEFINED CONAN_LIBCXX)
213 message(STATUS "Conan: C++ stdlib: ${CONAN_LIBCXX}")
214 if(CONAN_COMPILER STREQUAL "clang" OR CONAN_COMPILER STREQUAL "apple-clang")
215 if(CONAN_LIBCXX STREQUAL "libstdc++" OR CONAN_LIBCXX STREQUAL "libstdc++11" )
216 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libstdc++")
217 elseif(CONAN_LIBCXX STREQUAL "libc++")
218 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++")
219 endif()
220 endif()
221 if(CONAN_COMPILER STREQUAL "sun-cc")
222 if(CONAN_LIBCXX STREQUAL "libCstd")
223 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -library=Cstd")
224 elseif(CONAN_LIBCXX STREQUAL "libstdcxx")
225 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -library=stdcxx4")
226 elseif(CONAN_LIBCXX STREQUAL "libstlport")
227 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -library=stlport4")
228 elseif(CONAN_LIBCXX STREQUAL "libstdc++")
229 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -library=stdcpp")
230 endif()
231 endif()
232 if(CONAN_LIBCXX STREQUAL "libstdc++11")
233 add_definitions(-D_GLIBCXX_USE_CXX11_ABI=1)
234 elseif(CONAN_LIBCXX STREQUAL "libstdc++")
235 add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0)
236 endif()
237 endif()
238 endmacro()
239
240 macro(conan_set_std)
241 # Do not warn "Manually-specified variables were not used by the project"
242 set(ignorevar "${CONAN_STD_CXX_FLAG}${CONAN_CMAKE_CXX_STANDARD}${CONAN_CMAKE_CXX_EXTENSIONS}")
243 if (CMAKE_VERSION VERSION_LESS "3.1" OR
244 (CMAKE_VERSION VERSION_LESS "3.12" AND ("${CONAN_CMAKE_CXX_STANDARD}" STREQUAL "20" OR "${CONAN_CMAKE_CXX_STANDARD}" STREQUAL "gnu20")))
245 if(CONAN_STD_CXX_FLAG)
246 message(STATUS "Conan setting CXX_FLAGS flags: ${CONAN_STD_CXX_FLAG}")
247 set(CMAKE_CXX_FLAGS "${CONAN_STD_CXX_FLAG} ${CMAKE_CXX_FLAGS}")
248 endif()
249 else()
250 if(CONAN_CMAKE_CXX_STANDARD)
251 message(STATUS "Conan setting CPP STANDARD: ${CONAN_CMAKE_CXX_STANDARD} WITH EXTENSIONS ${CONAN_CMAKE_CXX_EXTENSIONS}")
252 set(CMAKE_CXX_STANDARD ${CONAN_CMAKE_CXX_STANDARD})
253 set(CMAKE_CXX_EXTENSIONS ${CONAN_CMAKE_CXX_EXTENSIONS})
254 endif()
255 endif()
256 endmacro()
257
258 macro(conan_set_rpath)
259 if(APPLE)
260 # https://cmake.org/Wiki/CMake_RPATH_handling
261 # CONAN GUIDE: All generated libraries should have the id and dependencies to other
262 # dylibs without path, just the name, EX:
263 # libMyLib1.dylib:
264 # libMyLib1.dylib (compatibility version 0.0.0, current version 0.0.0)
265 # libMyLib0.dylib (compatibility version 0.0.0, current version 0.0.0)
266 # /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 120.0.0)
267 # /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)
268 set(CMAKE_SKIP_RPATH 1) # AVOID RPATH FOR *.dylib, ALL LIBS BETWEEN THEM AND THE EXE
269 # SHOULD BE ON THE LINKER RESOLVER PATH (./ IS ONE OF THEM)
270 # Policy CMP0068
271 # We want the old behavior, in CMake >= 3.9 CMAKE_SKIP_RPATH won't affect the install_name in OSX
272 set(CMAKE_INSTALL_NAME_DIR "")
273 endif()
274 endmacro()
275
276 macro(conan_set_fpic)
277 if(DEFINED CONAN_CMAKE_POSITION_INDEPENDENT_CODE)
278 message(STATUS "Conan: Adjusting fPIC flag (${CONAN_CMAKE_POSITION_INDEPENDENT_CODE})")
279 set(CMAKE_POSITION_INDEPENDENT_CODE ${CONAN_CMAKE_POSITION_INDEPENDENT_CODE})
280 endif()
281 endmacro()
282
283 macro(conan_output_dirs_setup)
284 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/bin)
285 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
286 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELWITHDEBINFO ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
287 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_MINSIZEREL ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
288 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
289
290 set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/lib)
291 set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY})
292 set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELWITHDEBINFO ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY})
293 set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_MINSIZEREL ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY})
294 set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_DEBUG ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY})
295
296 set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/lib)
297 set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})
298 set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})
299 set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_MINSIZEREL ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})
300 set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_DEBUG ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})
301 endmacro()
302
303 macro(conan_split_version VERSION_STRING MAJOR MINOR)
304 #make a list from the version string
305 string(REPLACE "." ";" VERSION_LIST "${VERSION_STRING}")
306
307 #write output values
308 list(LENGTH VERSION_LIST _version_len)
309 list(GET VERSION_LIST 0 ${MAJOR})
310 if(${_version_len} GREATER 1)
311 list(GET VERSION_LIST 1 ${MINOR})
312 endif()
313 endmacro()
314
315 macro(conan_error_compiler_version)
316 message(FATAL_ERROR "Incorrect '${CONAN_COMPILER}' version 'compiler.version=${CONAN_COMPILER_VERSION}'"
317 " is not the one detected by CMake: '${CMAKE_CXX_COMPILER_ID}=" ${VERSION_MAJOR}.${VERSION_MINOR}')
318 endmacro()
319
320 set(_CONAN_CURRENT_DIR ${CMAKE_CURRENT_LIST_DIR})
321 function(conan_get_compiler CONAN_INFO_COMPILER CONAN_INFO_COMPILER_VERSION)
322 MESSAGE(STATUS "Current conanbuildinfo.cmake directory: " ${_CONAN_CURRENT_DIR})
323 if(NOT EXISTS ${_CONAN_CURRENT_DIR}/conaninfo.txt)
324 message(STATUS "WARN: conaninfo.txt not found")
325 return()
326 endif()
327
328 file (READ "${_CONAN_CURRENT_DIR}/conaninfo.txt" CONANINFO)
329
330 string(REGEX MATCH "compiler=([-A-Za-z0-9_ ]+)" _MATCHED ${CONANINFO})
331 if(DEFINED CMAKE_MATCH_1)
332 string(STRIP "${CMAKE_MATCH_1}" _CONAN_INFO_COMPILER)
333 set(${CONAN_INFO_COMPILER} ${_CONAN_INFO_COMPILER} PARENT_SCOPE)
334 endif()
335
336 string(REGEX MATCH "compiler.version=([-A-Za-z0-9_.]+)" _MATCHED ${CONANINFO})
337 if(DEFINED CMAKE_MATCH_1)
338 string(STRIP "${CMAKE_MATCH_1}" _CONAN_INFO_COMPILER_VERSION)
339 set(${CONAN_INFO_COMPILER_VERSION} ${_CONAN_INFO_COMPILER_VERSION} PARENT_SCOPE)
340 endif()
341 endfunction()
342
343 function(check_compiler_version)
344 conan_split_version(${CMAKE_CXX_COMPILER_VERSION} VERSION_MAJOR VERSION_MINOR)
345 if(CMAKE_CXX_COMPILER_ID MATCHES MSVC)
346 # https://cmake.org/cmake/help/v3.2/variable/MSVC_VERSION.html
347 if( (CONAN_COMPILER_VERSION STREQUAL "14" AND NOT VERSION_MAJOR STREQUAL "19") OR
348 (CONAN_COMPILER_VERSION STREQUAL "12" AND NOT VERSION_MAJOR STREQUAL "18") OR
349 (CONAN_COMPILER_VERSION STREQUAL "11" AND NOT VERSION_MAJOR STREQUAL "17") OR
350 (CONAN_COMPILER_VERSION STREQUAL "10" AND NOT VERSION_MAJOR STREQUAL "16") OR
351 (CONAN_COMPILER_VERSION STREQUAL "9" AND NOT VERSION_MAJOR STREQUAL "15") OR
352 (CONAN_COMPILER_VERSION STREQUAL "8" AND NOT VERSION_MAJOR STREQUAL "14") OR
353 (CONAN_COMPILER_VERSION STREQUAL "7" AND NOT VERSION_MAJOR STREQUAL "13") OR
354 (CONAN_COMPILER_VERSION STREQUAL "6" AND NOT VERSION_MAJOR STREQUAL "12") )
355 conan_error_compiler_version()
356 endif()
357 elseif(CONAN_COMPILER STREQUAL "gcc")
358 set(_CHECK_VERSION ${VERSION_MAJOR}.${VERSION_MINOR})
359 if(NOT ${CONAN_COMPILER_VERSION} VERSION_LESS 5.0)
360 message(STATUS "Conan: Compiler GCC>=5, checking major version ${CONAN_COMPILER_VERSION}")
361 conan_split_version(${CONAN_COMPILER_VERSION} CONAN_COMPILER_MAJOR CONAN_COMPILER_MINOR)
362 if("${CONAN_COMPILER_MINOR}" STREQUAL "")
363 set(_CHECK_VERSION ${VERSION_MAJOR})
364 endif()
365 endif()
366 message(STATUS "Conan: Checking correct version: ${_CHECK_VERSION}")
367 if(NOT ${_CHECK_VERSION} VERSION_EQUAL CONAN_COMPILER_VERSION)
368 conan_error_compiler_version()
369 endif()
370 elseif(CONAN_COMPILER STREQUAL "clang")
371 set(_CHECK_VERSION ${VERSION_MAJOR}.${VERSION_MINOR})
372 if(NOT ${CONAN_COMPILER_VERSION} VERSION_LESS 8.0)
373 message(STATUS "Conan: Compiler Clang>=8, checking major version ${CONAN_COMPILER_VERSION}")
374 conan_split_version(${CONAN_COMPILER_VERSION} CONAN_COMPILER_MAJOR CONAN_COMPILER_MINOR)
375 if("${CONAN_COMPILER_MINOR}" STREQUAL "")
376 set(_CHECK_VERSION ${VERSION_MAJOR})
377 endif()
378 endif()
379 message(STATUS "Conan: Checking correct version: ${_CHECK_VERSION}")
380 if(NOT ${_CHECK_VERSION} VERSION_EQUAL CONAN_COMPILER_VERSION)
381 conan_error_compiler_version()
382 endif()
383 elseif(CONAN_COMPILER STREQUAL "apple-clang" OR CONAN_COMPILER STREQUAL "sun-cc")
384 conan_split_version(${CONAN_COMPILER_VERSION} CONAN_COMPILER_MAJOR CONAN_COMPILER_MINOR)
385 if(NOT ${VERSION_MAJOR}.${VERSION_MINOR} VERSION_EQUAL ${CONAN_COMPILER_MAJOR}.${CONAN_COMPILER_MINOR})
386 conan_error_compiler_version()
387 endif()
388 else()
389 message(STATUS "WARN: Unknown compiler '${CONAN_COMPILER}', skipping the version check...")
390 endif()
391 endfunction()
392
393 function(conan_check_compiler)
394 if(NOT DEFINED CMAKE_CXX_COMPILER_ID)
395 if(DEFINED CMAKE_C_COMPILER_ID)
396 message(STATUS "This project seems to be plain C, using '${CMAKE_C_COMPILER_ID}' compiler")
397 set(CMAKE_CXX_COMPILER_ID ${CMAKE_C_COMPILER_ID})
398 set(CMAKE_CXX_COMPILER_VERSION ${CMAKE_C_COMPILER_VERSION})
399 else()
400 message(FATAL_ERROR "This project seems to be plain C, but no compiler defined")
401 endif()
402 endif()
403 if(CONAN_DISABLE_CHECK_COMPILER)
404 message(STATUS "WARN: Disabled conan compiler checks")
405 return()
406 endif()
407 if(NOT CMAKE_CXX_COMPILER_ID AND NOT CMAKE_C_COMPILER_ID)
408 # This use case happens when compiler is not identified by CMake, but the compilers are there and work
409 message(STATUS "*** WARN: CMake was not able to identify a C or C++ compiler ***")
410 message(STATUS "*** WARN: Disabling compiler checks. Please make sure your settings match your environment ***")
411 return()
412 endif()
413 if(NOT DEFINED CONAN_COMPILER)
414 conan_get_compiler(CONAN_COMPILER CONAN_COMPILER_VERSION)
415 if(NOT DEFINED CONAN_COMPILER)
416 message(STATUS "WARN: CONAN_COMPILER variable not set, please make sure yourself that "
417 "your compiler and version matches your declared settings")
418 return()
419 endif()
420 endif()
421
422 if(NOT CMAKE_HOST_SYSTEM_NAME STREQUAL ${CMAKE_SYSTEM_NAME})
423 set(CROSS_BUILDING 1)
424 endif()
425
426 # If using VS, verify toolset
427 if (CONAN_COMPILER STREQUAL "Visual Studio")
428 if (CONAN_SETTINGS_COMPILER_TOOLSET MATCHES "LLVM" OR
429 CONAN_SETTINGS_COMPILER_TOOLSET MATCHES "clang")
430 set(EXPECTED_CMAKE_CXX_COMPILER_ID "Clang")
431 elseif (CONAN_SETTINGS_COMPILER_TOOLSET MATCHES "Intel")
432 set(EXPECTED_CMAKE_CXX_COMPILER_ID "Intel")
433 else()
434 set(EXPECTED_CMAKE_CXX_COMPILER_ID "MSVC")
435 endif()
436
437 if (NOT CMAKE_CXX_COMPILER_ID MATCHES ${EXPECTED_CMAKE_CXX_COMPILER_ID})
438 message(FATAL_ERROR "Incorrect '${CONAN_COMPILER}'. Toolset specifies compiler as '${EXPECTED_CMAKE_CXX_COMPILER_ID}' "
439 "but CMake detected '${CMAKE_CXX_COMPILER_ID}'")
440 endif()
441
442     # Avoid checks when cross compiling, apple-clang crashes because it's APPLE but not apple-clang
443     # Actually CMake detects "clang" when you are using apple-clang; only if CMP0025 is set to NEW will it detect apple-clang
444 elseif((CONAN_COMPILER STREQUAL "gcc" AND NOT CMAKE_CXX_COMPILER_ID MATCHES "GNU") OR
445 (CONAN_COMPILER STREQUAL "apple-clang" AND NOT CROSS_BUILDING AND (NOT APPLE OR NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang")) OR
446 (CONAN_COMPILER STREQUAL "clang" AND NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") OR
447 (CONAN_COMPILER STREQUAL "sun-cc" AND NOT CMAKE_CXX_COMPILER_ID MATCHES "SunPro") )
448 message(FATAL_ERROR "Incorrect '${CONAN_COMPILER}', is not the one detected by CMake: '${CMAKE_CXX_COMPILER_ID}'")
449 endif()
450
451
452 if(NOT DEFINED CONAN_COMPILER_VERSION)
453 message(STATUS "WARN: CONAN_COMPILER_VERSION variable not set, please make sure yourself "
454 "that your compiler version matches your declared settings")
455 return()
456 endif()
457 check_compiler_version()
458 endfunction()
459
460 macro(conan_set_flags build_type)
461 set(CMAKE_CXX_FLAGS${build_type} "${CMAKE_CXX_FLAGS${build_type}} ${CONAN_CXX_FLAGS${build_type}}")
462 set(CMAKE_C_FLAGS${build_type} "${CMAKE_C_FLAGS${build_type}} ${CONAN_C_FLAGS${build_type}}")
463 set(CMAKE_SHARED_LINKER_FLAGS${build_type} "${CMAKE_SHARED_LINKER_FLAGS${build_type}} ${CONAN_SHARED_LINKER_FLAGS${build_type}}")
464 set(CMAKE_EXE_LINKER_FLAGS${build_type} "${CMAKE_EXE_LINKER_FLAGS${build_type}} ${CONAN_EXE_LINKER_FLAGS${build_type}}")
465 endmacro()
466
467 macro(conan_global_flags)
468 if(CONAN_SYSTEM_INCLUDES)
469 include_directories(SYSTEM ${CONAN_INCLUDE_DIRS}
470 "$<$<CONFIG:Release>:${CONAN_INCLUDE_DIRS_RELEASE}>"
471 "$<$<CONFIG:RelWithDebInfo>:${CONAN_INCLUDE_DIRS_RELEASE}>"
472 "$<$<CONFIG:MinSizeRel>:${CONAN_INCLUDE_DIRS_RELEASE}>"
473 "$<$<CONFIG:Debug>:${CONAN_INCLUDE_DIRS_DEBUG}>")
474 else()
475 include_directories(${CONAN_INCLUDE_DIRS}
476 "$<$<CONFIG:Release>:${CONAN_INCLUDE_DIRS_RELEASE}>"
477 "$<$<CONFIG:RelWithDebInfo>:${CONAN_INCLUDE_DIRS_RELEASE}>"
478 "$<$<CONFIG:MinSizeRel>:${CONAN_INCLUDE_DIRS_RELEASE}>"
479 "$<$<CONFIG:Debug>:${CONAN_INCLUDE_DIRS_DEBUG}>")
480 endif()
481
482 link_directories(${CONAN_LIB_DIRS})
483
484 conan_find_libraries_abs_path("${CONAN_LIBS_DEBUG}" "${CONAN_LIB_DIRS_DEBUG}"
485 CONAN_LIBS_DEBUG)
486 conan_find_libraries_abs_path("${CONAN_LIBS_RELEASE}" "${CONAN_LIB_DIRS_RELEASE}"
487 CONAN_LIBS_RELEASE)
488
489 add_compile_options(${CONAN_DEFINES}
490 "$<$<CONFIG:Debug>:${CONAN_DEFINES_DEBUG}>"
491 "$<$<CONFIG:Release>:${CONAN_DEFINES_RELEASE}>"
492 "$<$<CONFIG:RelWithDebInfo>:${CONAN_DEFINES_RELEASE}>"
493 "$<$<CONFIG:MinSizeRel>:${CONAN_DEFINES_RELEASE}>")
494
495 conan_set_flags("")
496 conan_set_flags("_RELEASE")
497 conan_set_flags("_DEBUG")
498
499 endmacro()
500
501 macro(conan_target_link_libraries target)
502 if(CONAN_TARGETS)
503 target_link_libraries(${target} ${CONAN_TARGETS})
504 else()
505 target_link_libraries(${target} ${CONAN_LIBS})
506 foreach(_LIB ${CONAN_LIBS_RELEASE})
507 target_link_libraries(${target} optimized ${_LIB})
508 endforeach()
509 foreach(_LIB ${CONAN_LIBS_DEBUG})
510 target_link_libraries(${target} debug ${_LIB})
511 endforeach()
512 endif()
513 endmacro()
514 """
515
516 cmake_macros = """
517 macro(conan_basic_setup)
518 set(options TARGETS NO_OUTPUT_DIRS SKIP_RPATH KEEP_RPATHS SKIP_STD SKIP_FPIC)
519 cmake_parse_arguments(ARGUMENTS "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN} )
520 if(CONAN_EXPORTED)
521 message(STATUS "Conan: called by CMake conan helper")
522 endif()
523 if(CONAN_IN_LOCAL_CACHE)
524 message(STATUS "Conan: called inside local cache")
525 endif()
526 conan_check_compiler()
527 if(NOT ARGUMENTS_NO_OUTPUT_DIRS)
528 conan_output_dirs_setup()
529 endif()
530 conan_set_find_library_paths()
531 if(NOT ARGUMENTS_TARGETS)
532 message(STATUS "Conan: Using cmake global configuration")
533 conan_global_flags()
534 else()
535 message(STATUS "Conan: Using cmake targets configuration")
536 conan_define_targets()
537 endif()
538 if(ARGUMENTS_SKIP_RPATH)
539         # Change to "DEPRECATION" or "SEND_ERROR" when we are ready
540 message(WARNING "Conan: SKIP_RPATH is deprecated, it has been renamed to KEEP_RPATHS")
541 endif()
542 if(NOT ARGUMENTS_SKIP_RPATH AND NOT ARGUMENTS_KEEP_RPATHS)
543         # Parameter has been renamed, but we keep compatibility with the old SKIP_RPATH
544 message(STATUS "Conan: Adjusting default RPATHs Conan policies")
545 conan_set_rpath()
546 endif()
547 if(NOT ARGUMENTS_SKIP_STD)
548 message(STATUS "Conan: Adjusting language standard")
549 conan_set_std()
550 endif()
551 if(NOT ARGUMENTS_SKIP_FPIC)
552 conan_set_fpic()
553 endif()
554 conan_set_vs_runtime()
555 conan_set_libcxx()
556 conan_set_find_paths()
557 endmacro()
558
559 macro(conan_set_find_paths)
560 # CMAKE_MODULE_PATH does not have Debug/Release config, but there are variables
561 # CONAN_CMAKE_MODULE_PATH_DEBUG to be used by the consumer
562 # CMake can find findXXX.cmake files in the root of packages
563 set(CMAKE_MODULE_PATH ${CONAN_CMAKE_MODULE_PATH} ${CMAKE_MODULE_PATH})
564
565     # Make find_package() work
566 set(CMAKE_PREFIX_PATH ${CONAN_CMAKE_MODULE_PATH} ${CMAKE_PREFIX_PATH})
567
568 # Set the find root path (cross build)
569 set(CMAKE_FIND_ROOT_PATH ${CONAN_CMAKE_FIND_ROOT_PATH} ${CMAKE_FIND_ROOT_PATH})
570 if(CONAN_CMAKE_FIND_ROOT_PATH_MODE_PROGRAM)
571 set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM ${CONAN_CMAKE_FIND_ROOT_PATH_MODE_PROGRAM})
572 endif()
573 if(CONAN_CMAKE_FIND_ROOT_PATH_MODE_LIBRARY)
574 set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ${CONAN_CMAKE_FIND_ROOT_PATH_MODE_LIBRARY})
575 endif()
576 if(CONAN_CMAKE_FIND_ROOT_PATH_MODE_INCLUDE)
577 set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ${CONAN_CMAKE_FIND_ROOT_PATH_MODE_INCLUDE})
578 endif()
579 endmacro()
580
581 macro(conan_set_find_library_paths)
582 # CMAKE_INCLUDE_PATH, CMAKE_LIBRARY_PATH does not have Debug/Release config, but there are variables
583 # CONAN_INCLUDE_DIRS_DEBUG/RELEASE CONAN_LIB_DIRS_DEBUG/RELEASE to be used by the consumer
584 # For find_library
585 set(CMAKE_INCLUDE_PATH ${CONAN_INCLUDE_DIRS} ${CMAKE_INCLUDE_PATH})
586 set(CMAKE_LIBRARY_PATH ${CONAN_LIB_DIRS} ${CMAKE_LIBRARY_PATH})
587 endmacro()
588
589 macro(conan_set_vs_runtime)
590 if(CONAN_LINK_RUNTIME)
591 foreach(flag CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE
592 CMAKE_C_FLAGS_RELWITHDEBINFO CMAKE_CXX_FLAGS_RELWITHDEBINFO
593 CMAKE_C_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_MINSIZEREL)
594 if(DEFINED ${flag})
595 string(REPLACE "/MD" ${CONAN_LINK_RUNTIME} ${flag} "${${flag}}")
596 endif()
597 endforeach()
598 foreach(flag CMAKE_C_FLAGS_DEBUG CMAKE_CXX_FLAGS_DEBUG)
599 if(DEFINED ${flag})
600 string(REPLACE "/MDd" ${CONAN_LINK_RUNTIME} ${flag} "${${flag}}")
601 endif()
602 endforeach()
603 endif()
604 endmacro()
605
606 macro(conan_flags_setup)
607 # Macro maintained for backwards compatibility
608 conan_set_find_library_paths()
609 conan_global_flags()
610 conan_set_rpath()
611 conan_set_vs_runtime()
612 conan_set_libcxx()
613 endmacro()
614
615 """ + _cmake_common_macros
616
617
618 cmake_macros_multi = """
619 if(EXISTS ${CMAKE_CURRENT_LIST_DIR}/conanbuildinfo_release.cmake)
620 include(${CMAKE_CURRENT_LIST_DIR}/conanbuildinfo_release.cmake)
621 else()
622 message(FATAL_ERROR "No conanbuildinfo_release.cmake, please install the Release conf first")
623 endif()
624 if(EXISTS ${CMAKE_CURRENT_LIST_DIR}/conanbuildinfo_debug.cmake)
625 include(${CMAKE_CURRENT_LIST_DIR}/conanbuildinfo_debug.cmake)
626 else()
627 message(FATAL_ERROR "No conanbuildinfo_debug.cmake, please install the Debug conf first")
628 endif()
629
630 macro(conan_basic_setup)
631 set(options TARGETS)
632 cmake_parse_arguments(ARGUMENTS "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN} )
633 if(CONAN_EXPORTED)
634 message(STATUS "Conan: called by CMake conan helper")
635 endif()
636 if(CONAN_IN_LOCAL_CACHE)
637 message(STATUS "Conan: called inside local cache")
638 endif()
639 conan_check_compiler()
640 # conan_output_dirs_setup()
641 if(NOT ARGUMENTS_TARGETS)
642 message(STATUS "Conan: Using cmake global configuration")
643 conan_global_flags()
644 else()
645 message(STATUS "Conan: Using cmake targets configuration")
646 conan_define_targets()
647 endif()
648 conan_set_rpath()
649 conan_set_vs_runtime()
650 conan_set_libcxx()
651 conan_set_find_paths()
652 conan_set_fpic()
653 endmacro()
654
655 macro(conan_set_vs_runtime)
656     # This conan_set_vs_runtime is MORE opinionated than the regular one. It will
657     # leave the defaults MD (MDd) or replace them with MT (MTd), taking the build type into
658     # account and forcing the debug variant (MTd) for debug builds. It will generate MSVCRT
659     # warnings if the dependencies are installed with "conan install" and the wrong build type.
660 if(CONAN_LINK_RUNTIME MATCHES "MT")
661 foreach(flag CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE
662 CMAKE_C_FLAGS_RELWITHDEBINFO CMAKE_CXX_FLAGS_RELWITHDEBINFO
663 CMAKE_C_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_MINSIZEREL)
664 if(DEFINED ${flag})
665 string(REPLACE "/MD" "/MT" ${flag} "${${flag}}")
666 endif()
667 endforeach()
668 foreach(flag CMAKE_C_FLAGS_DEBUG CMAKE_CXX_FLAGS_DEBUG)
669 if(DEFINED ${flag})
670 string(REPLACE "/MDd" "/MTd" ${flag} "${${flag}}")
671 endif()
672 endforeach()
673 endif()
674 endmacro()
675
676 macro(conan_set_find_paths)
677 if(CMAKE_BUILD_TYPE)
678 if(${CMAKE_BUILD_TYPE} MATCHES "Debug")
679 set(CMAKE_PREFIX_PATH ${CONAN_CMAKE_MODULE_PATH_DEBUG} ${CMAKE_PREFIX_PATH})
680 set(CMAKE_MODULE_PATH ${CONAN_CMAKE_MODULE_PATH_DEBUG} ${CMAKE_MODULE_PATH})
681 else()
682 set(CMAKE_PREFIX_PATH ${CONAN_CMAKE_MODULE_PATH_RELEASE} ${CMAKE_PREFIX_PATH})
683 set(CMAKE_MODULE_PATH ${CONAN_CMAKE_MODULE_PATH_RELEASE} ${CMAKE_MODULE_PATH})
684 endif()
685 endif()
686 endmacro()
687 """ + _cmake_common_macros
688
[end of conans/client/generators/cmake_common.py]
[start of conans/client/generators/premake.py]
1 from conans.model import Generator
2 from conans.paths import BUILD_INFO_PREMAKE
3
4
5 class PremakeDeps(object):
6 def __init__(self, deps_cpp_info):
7 self.include_paths = ",\n".join('"%s"' % p.replace("\\", "/")
8 for p in deps_cpp_info.include_paths)
9 self.lib_paths = ",\n".join('"%s"' % p.replace("\\", "/")
10 for p in deps_cpp_info.lib_paths)
11 self.bin_paths = ",\n".join('"%s"' % p.replace("\\", "/")
12 for p in deps_cpp_info.bin_paths)
13 self.libs = ", ".join('"%s"' % p for p in deps_cpp_info.libs)
14 self.defines = ", ".join('"%s"' % p for p in deps_cpp_info.defines)
15 self.cppflags = ", ".join('"%s"' % p for p in deps_cpp_info.cppflags)
16 self.cflags = ", ".join('"%s"' % p for p in deps_cpp_info.cflags)
17 self.sharedlinkflags = ", ".join('"%s"' % p for p in deps_cpp_info.sharedlinkflags)
18 self.exelinkflags = ", ".join('"%s"' % p for p in deps_cpp_info.exelinkflags)
19
20 self.rootpath = "%s" % deps_cpp_info.rootpath.replace("\\", "/")
21
22
23 class PremakeGenerator(Generator):
24 @property
25 def filename(self):
26 return BUILD_INFO_PREMAKE
27
28 @property
29 def content(self):
30 deps = PremakeDeps(self.deps_build_info)
31
32 template = ('conan_includedirs{dep} = {{{deps.include_paths}}}\n'
33 'conan_libdirs{dep} = {{{deps.lib_paths}}}\n'
34 'conan_bindirs{dep} = {{{deps.bin_paths}}}\n'
35 'conan_libs{dep} = {{{deps.libs}}}\n'
36 'conan_cppdefines{dep} = {{{deps.defines}}}\n'
37 'conan_cppflags{dep} = {{{deps.cppflags}}}\n'
38 'conan_cflags{dep} = {{{deps.cflags}}}\n'
39 'conan_sharedlinkflags{dep} = {{{deps.sharedlinkflags}}}\n'
40 'conan_exelinkflags{dep} = {{{deps.exelinkflags}}}\n')
41
42 sections = ["#!lua"]
43 all_flags = template.format(dep="", deps=deps)
44 sections.append(all_flags)
45 template_deps = template + 'conan_rootpath{dep} = "{deps.rootpath}"\n'
46
47 for dep_name, dep_cpp_info in self.deps_build_info.dependencies:
48 deps = PremakeDeps(dep_cpp_info)
49 dep_name = dep_name.replace("-", "_")
50 dep_flags = template_deps.format(dep="_" + dep_name, deps=deps)
51 sections.append(dep_flags)
52
53 return "\n".join(sections)
54
[end of conans/client/generators/premake.py]
[start of conans/client/generators/visualstudio_multi.py]
1 import os
2 from conans.model import Generator
3 from conans.client.generators import VisualStudioGenerator
4 from xml.dom import minidom
5 from conans.util.files import load
6 from conans.errors import ConanException
7
8
9 class _VSSettings(object):
10 def __init__(self, settings):
11 self._props = [("Configuration", settings.get_safe("build_type")),
12 ("Platform", {'x86': 'Win32', 'x86_64': 'x64'}.get(settings.get_safe("arch")))]
13
14 toolset = settings.get_safe("compiler.toolset")
15 if not toolset:
16 default_toolset = {"15": "v141",
17 "14": "v140",
18 "12": "v120",
19 "11": "v110",
20 "10": "v100",
21 "9": "v90",
22 "8": "v80"}
23 try:
24 vs_version = settings.get_safe("compiler.version")
25 toolset = default_toolset[vs_version]
26 except KeyError:
27 raise ConanException("Undefined Visual Studio version %s" % vs_version)
28 self._props.append(("PlatformToolset", toolset))
29
30 @property
31 def filename(self):
32 name = "conanbuildinfo%s.props" % "".join("_%s" % v for _, v in self._props if v)
33 return name.lower()
34
35 @property
36 def condition(self):
37 return " And ".join("'$(%s)' == '%s'" % (k, v) for k, v in self._props if v)
38
39
40 class VisualStudioMultiGenerator(Generator):
41
42 multi_content_template = """<?xml version="1.0" encoding="utf-8"?>
43 <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
44 <ImportGroup Label="PropertySheets" >
45 </ImportGroup>
46 <PropertyGroup Label="UserMacros" />
47 <PropertyGroup />
48 <ItemDefinitionGroup />
49 <ItemGroup />
50 </Project>
51 """
52
53 @property
54 def filename(self):
55 pass
56
57 @property
58 def content(self):
59 vs_settings = _VSSettings(self.conanfile.settings)
60 condition = vs_settings.condition
61 name_current = vs_settings.filename
62 name_multi = "conanbuildinfo_multi.props"
63
64         # read the existing multi file or use the template if it doesn't exist
65 multi_path = os.path.join(self.output_path, name_multi)
66 if os.path.isfile(multi_path):
67 content_multi = load(multi_path)
68 else:
69 content_multi = self.multi_content_template
70
71 # parse the multi_file and add a new import statement if needed
72 dom = minidom.parseString(content_multi)
73 import_group = dom.getElementsByTagName('ImportGroup')[0]
74 children = import_group.getElementsByTagName("Import")
75 for node in children:
76 if name_current == node.getAttribute("Project") and condition == node.getAttribute("Condition"):
77 # the import statement already exists
78 break
79 else:
80 # create a new import statement
81 import_node = dom.createElement('Import')
82 import_node.setAttribute('Condition', condition)
83 import_node.setAttribute('Project', name_current)
84 # add it to the import group
85 import_group.appendChild(import_node)
86 content_multi = dom.toprettyxml()
87 content_multi = "\n".join(line for line in content_multi.splitlines() if line.strip())
88
89 return {name_multi: content_multi,
90 vs_settings.filename: VisualStudioGenerator(self.conanfile).content}
91
[end of conans/client/generators/visualstudio_multi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| conan-io/conan | 4486c5d6ca77e979ac0a991b964a86cdf26e95d2 | GNU Make generator
https://github.com/solvingj/conan-make_generator/blob/master/conanfile.py by @solvingj is almost it.
I agree it could be built-in.
Can use a conditional:
```
ifneq ($(USE_CONAN),)
INC_PATHS += $(CONAN_INC_PATHS)
LD_PATHS += $(CONAN_LIB_PATHS)
LD_LIBS += $(CONAN_LIBS)
CXXFLAGS += $(CONAN_CPP_FLAGS)
CFLAGS += $(CONAN_CFLAGS)
DEFINES += $(CONAN_DEFINES)
LDFLAGS_SHARED += $(CONAN_SHAREDLINKFLAGS)
LDFLAGS_EXE += $(CONAN_EXELINKFLAGS)
C_SRCS += $(CONAN_C_SRCS)
CXX_SRCS += $(CONAN_CXX_SRCS)
endif
```
Labeled as high because the investment should be minimal. | 2018-11-26T17:02:07Z | <patch>
diff --git a/conans/client/generators/__init__.py b/conans/client/generators/__init__.py
--- a/conans/client/generators/__init__.py
+++ b/conans/client/generators/__init__.py
@@ -28,6 +28,7 @@
from conans.util.env_reader import get_env
from .b2 import B2Generator
from .premake import PremakeGenerator
+from .make import MakeGenerator
class _GeneratorManager(object):
@@ -74,6 +75,7 @@ def __getitem__(self, key):
registered_generators.add("json", JsonGenerator)
registered_generators.add("b2", B2Generator)
registered_generators.add("premake", PremakeGenerator)
+registered_generators.add("make", MakeGenerator)
def write_generators(conanfile, path, output):
diff --git a/conans/client/generators/make.py b/conans/client/generators/make.py
new file mode 100644
--- /dev/null
+++ b/conans/client/generators/make.py
@@ -0,0 +1,109 @@
+from conans.model import Generator
+from conans.paths import BUILD_INFO_MAKE
+
+
+class MakeGenerator(Generator):
+
+ def __init__(self, conanfile):
+ Generator.__init__(self, conanfile)
+ self.makefile_newline = "\n"
+ self.makefile_line_continuation = " \\\n"
+ self.assignment_if_absent = " ?= "
+ self.assignment_append = " += "
+
+ @property
+ def filename(self):
+ return BUILD_INFO_MAKE
+
+ @property
+ def content(self):
+
+ content = [
+ "#-------------------------------------------------------------------#",
+ "# Makefile variables from Conan Dependencies #",
+ "#-------------------------------------------------------------------#",
+ "",
+ ]
+
+ for line_as_list in self.create_deps_content():
+ content.append("".join(line_as_list))
+
+ content.append("#-------------------------------------------------------------------#")
+ content.append(self.makefile_newline)
+ return self.makefile_newline.join(content)
+
+ def create_deps_content(self):
+ deps_content = self.create_content_from_deps()
+ deps_content.extend(self.create_combined_content())
+ return deps_content
+
+ def create_content_from_deps(self):
+ content = []
+ for pkg_name, cpp_info in self.deps_build_info.dependencies:
+ content.extend(self.create_content_from_dep(pkg_name, cpp_info))
+ return content
+
+ def create_content_from_dep(self, pkg_name, cpp_info):
+
+ vars_info = [("ROOT", self.assignment_if_absent, [cpp_info.rootpath]),
+ ("SYSROOT", self.assignment_if_absent, [cpp_info.sysroot]),
+ ("INCLUDE_PATHS", self.assignment_append, cpp_info.include_paths),
+ ("LIB_PATHS", self.assignment_append, cpp_info.lib_paths),
+ ("BIN_PATHS", self.assignment_append, cpp_info.bin_paths),
+ ("BUILD_PATHS", self.assignment_append, cpp_info.build_paths),
+ ("RES_PATHS", self.assignment_append, cpp_info.res_paths),
+ ("LIBS", self.assignment_append, cpp_info.libs),
+ ("DEFINES", self.assignment_append, cpp_info.defines),
+ ("CFLAGS", self.assignment_append, cpp_info.cflags),
+ ("CPPFLAGS", self.assignment_append, cpp_info.cppflags),
+ ("SHAREDLINKFLAGS", self.assignment_append, cpp_info.sharedlinkflags),
+ ("EXELINKFLAGS", self.assignment_append, cpp_info.exelinkflags)]
+
+ return [self.create_makefile_var_pkg(var_name, pkg_name, operator, info)
+ for var_name, operator, info in vars_info]
+
+ def create_combined_content(self):
+ content = []
+ for var_name in self.all_dep_vars():
+ content.append(self.create_makefile_var_global(var_name, self.assignment_append,
+ self.create_combined_var_list(var_name)))
+ return content
+
+ def create_combined_var_list(self, var_name):
+ make_vars = []
+ for pkg_name, _ in self.deps_build_info.dependencies:
+ pkg_var = self.create_makefile_var_name_pkg(var_name, pkg_name)
+ make_vars.append("$({pkg_var})".format(pkg_var=pkg_var))
+ return make_vars
+
+ def create_makefile_var_global(self, var_name, operator, values):
+ make_var = [self.create_makefile_var_name_global(var_name)]
+ make_var.extend(self.create_makefile_var_common(operator, values))
+ return make_var
+
+ def create_makefile_var_pkg(self, var_name, pkg_name, operator, values):
+ make_var = [self.create_makefile_var_name_pkg(var_name, pkg_name)]
+ make_var.extend(self.create_makefile_var_common(operator, values))
+ return make_var
+
+ def create_makefile_var_common(self, operator, values):
+ return [operator, self.makefile_line_continuation, self.create_makefile_var_value(values),
+ self.makefile_newline]
+
+ @staticmethod
+ def create_makefile_var_name_global(var_name):
+ return "CONAN_{var}".format(var=var_name).upper()
+
+ @staticmethod
+ def create_makefile_var_name_pkg(var_name, pkg_name):
+ return "CONAN_{var}_{lib}".format(var=var_name, lib=pkg_name).upper()
+
+ def create_makefile_var_value(self, values):
+ formatted_values = [value.replace("\\", "/") for value in values]
+ return self.makefile_line_continuation.join(formatted_values)
+
+ @staticmethod
+ def all_dep_vars():
+ return ["rootpath", "sysroot", "include_paths", "lib_paths", "bin_paths", "build_paths",
+ "res_paths", "libs", "defines", "cflags", "cppflags", "sharedlinkflags",
+ "exelinkflags"]
diff --git a/conans/client/generators/premake.py b/conans/client/generators/premake.py
--- a/conans/client/generators/premake.py
+++ b/conans/client/generators/premake.py
@@ -3,6 +3,7 @@
class PremakeDeps(object):
+
def __init__(self, deps_cpp_info):
self.include_paths = ",\n".join('"%s"' % p.replace("\\", "/")
for p in deps_cpp_info.include_paths)
diff --git a/conans/paths.py b/conans/paths.py
--- a/conans/paths.py
+++ b/conans/paths.py
@@ -35,6 +35,7 @@ def path_shortener(x, _):
BUILD_INFO_VISUAL_STUDIO = 'conanbuildinfo.props'
BUILD_INFO_XCODE = 'conanbuildinfo.xcconfig'
BUILD_INFO_PREMAKE = 'conanbuildinfo.lua'
+BUILD_INFO_MAKE = 'conanbuildinfo.mak'
CONANINFO = "conaninfo.txt"
CONANENV = "conanenv.txt"
SYSTEM_REQS = "system_reqs.txt"
</patch> | [] | [] | |||
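For reference, a minimal sketch (editorial, not part of the patch above) of the `CONAN_<VAR>` / `CONAN_<VAR>_<PKG>` naming scheme that `MakeGenerator` writes into `conanbuildinfo.mak`, which is what the conditional shown in the maintainer comment above would map onto; the package name `zlib` is purely illustrative:
```
# Sketch of the variable-name scheme implemented by MakeGenerator above.
# "zlib" is a hypothetical package name used only for illustration.
def global_var(var_name):
    return "CONAN_{var}".format(var=var_name).upper()

def package_var(var_name, pkg_name):
    return "CONAN_{var}_{lib}".format(var=var_name, lib=pkg_name).upper()

print(global_var("include_paths"))           # CONAN_INCLUDE_PATHS (aggregated over all deps)
print(package_var("include_paths", "zlib"))  # CONAN_INCLUDE_PATHS_ZLIB (per-package value)
```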
pypa__pip-7289 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip 19.3 doesn't send client certificate
**Ubuntu 18.04 virtual environment**
* pip version: 19.3
* Python version: 3.6.8
* OS: Ubuntu 18.04.3 LTS
We have a private PyPI server hosted with [pypicloud](https://pypicloud.readthedocs.io/en/latest/index.html). We use client certificates to authenticate users for downloading/uploading packages.
**Description**
pip 19.3 doesn't seem to send our client certificates so authentication fails and packages cannot be installed:
`WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:852)'),)': /simple/<our package name>/
`
I captured some of the SSL traffic from pip install in Wireshark and the client certificate option is there in the SSL handshake, but the certificates length field is 0 with pip 19.3:
![image](https://user-images.githubusercontent.com/9781018/66789548-28f54080-eeba-11e9-8124-315e814564bc.png)
In 19.2.1, the length is non-zero and Wireshark shows the client certificate I expect.
**Expected behavior**
We should not get an SSL error if our client certificates and CA certificates are not expired. I have checked our server logs there don't appear to be any errors there with our certificates.
If I downgrade to pip 19.2.1 or 19.2.3 in my virtual environment, then the SSL error goes away.
I also checked with `openssl s_client` that a handshake succeeded with the same client certificate:
```
openssl s_client -connect <my server> -cert <cert> -key <key> -state
CONNECTED(00000005)
SSL_connect:before SSL initialization
SSL_connect:SSLv3/TLS write client hello
SSL_connect:SSLv3/TLS write client hello
SSL_connect:SSLv3/TLS read server hello
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = <my server>
verify return:1
SSL_connect:SSLv3/TLS read server certificate
SSL_connect:SSLv3/TLS read server key exchange
SSL_connect:SSLv3/TLS read server certificate request
SSL_connect:SSLv3/TLS read server done
SSL_connect:SSLv3/TLS write client certificate
...
SSL handshake has read 4268 bytes and written 1546 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID:
```
**How to Reproduce**
1. Set up pip.conf or command-line arguments to use a client certificate
2. pip install <package>
3. sslv3 alert handshake failure occurs
**Output**
```
pip install <my package>
Looking in indexes: https://pypi.org/simple/, https://<my server>/simple/
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:852)'),)': /simple/<my package>/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:852)'),)': /simple/<my package>/
```
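The same handshake can also be exercised from Python's standard `ssl` module; below is a minimal sketch (host name and file paths are placeholders, not values from this report):
```
# Minimal sketch: confirm the TLS handshake with a client certificate, independently of pip.
# The host and certificate paths are placeholders.
import socket
import ssl

host = "pypi.example.internal"
ctx = ssl.create_default_context()
ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # e.g. TLSv1.2 if the handshake succeeds
```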
</issue>
<code>
[start of README.rst]
1 pip - The Python Package Installer
2 ==================================
3
4 .. image:: https://img.shields.io/pypi/v/pip.svg
5 :target: https://pypi.org/project/pip/
6
7 .. image:: https://readthedocs.org/projects/pip/badge/?version=latest
8 :target: https://pip.pypa.io/en/latest
9
10 pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes.
11
12 Please take a look at our documentation for how to install and use pip:
13
14 * `Installation`_
15 * `Usage`_
16
17 Updates are released regularly, with a new version every 3 months. More details can be found in our documentation:
18
19 * `Release notes`_
20 * `Release process`_
21
22 If you find bugs, need help, or want to talk to the developers please use our mailing lists or chat rooms:
23
24 * `Issue tracking`_
25 * `Discourse channel`_
26 * `User IRC`_
27
28 If you want to get involved head over to GitHub to get the source code, look at our development documentation and feel free to jump on the developer mailing lists and chat rooms:
29
30 * `GitHub page`_
31 * `Dev documentation`_
32 * `Dev mailing list`_
33 * `Dev IRC`_
34
35 Code of Conduct
36 ---------------
37
38 Everyone interacting in the pip project's codebases, issue trackers, chat
39 rooms, and mailing lists is expected to follow the `PyPA Code of Conduct`_.
40
41 .. _package installer: https://packaging.python.org/en/latest/current/
42 .. _Python Package Index: https://pypi.org
43 .. _Installation: https://pip.pypa.io/en/stable/installing.html
44 .. _Usage: https://pip.pypa.io/en/stable/
45 .. _Release notes: https://pip.pypa.io/en/stable/news.html
46 .. _Release process: https://pip.pypa.io/en/latest/development/release-process/
47 .. _GitHub page: https://github.com/pypa/pip
48 .. _Dev documentation: https://pip.pypa.io/en/latest/development
49 .. _Issue tracking: https://github.com/pypa/pip/issues
50 .. _Discourse channel: https://discuss.python.org/c/packaging
51 .. _Dev mailing list: https://groups.google.com/forum/#!forum/pypa-dev
52 .. _User IRC: https://webchat.freenode.net/?channels=%23pypa
53 .. _Dev IRC: https://webchat.freenode.net/?channels=%23pypa-dev
54 .. _PyPA Code of Conduct: https://www.pypa.io/en/latest/code-of-conduct/
55
[end of README.rst]
[start of src/pip/_vendor/requests/adapters.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 requests.adapters
5 ~~~~~~~~~~~~~~~~~
6
7 This module contains the transport adapters that Requests uses to define
8 and maintain connections.
9 """
10
11 import os.path
12 import socket
13
14 from pip._vendor.urllib3.poolmanager import PoolManager, proxy_from_url
15 from pip._vendor.urllib3.response import HTTPResponse
16 from pip._vendor.urllib3.util import parse_url
17 from pip._vendor.urllib3.util import Timeout as TimeoutSauce
18 from pip._vendor.urllib3.util.retry import Retry
19 from pip._vendor.urllib3.exceptions import ClosedPoolError
20 from pip._vendor.urllib3.exceptions import ConnectTimeoutError
21 from pip._vendor.urllib3.exceptions import HTTPError as _HTTPError
22 from pip._vendor.urllib3.exceptions import MaxRetryError
23 from pip._vendor.urllib3.exceptions import NewConnectionError
24 from pip._vendor.urllib3.exceptions import ProxyError as _ProxyError
25 from pip._vendor.urllib3.exceptions import ProtocolError
26 from pip._vendor.urllib3.exceptions import ReadTimeoutError
27 from pip._vendor.urllib3.exceptions import SSLError as _SSLError
28 from pip._vendor.urllib3.exceptions import ResponseError
29 from pip._vendor.urllib3.exceptions import LocationValueError
30
31 from .models import Response
32 from .compat import urlparse, basestring
33 from .utils import (DEFAULT_CA_BUNDLE_PATH, extract_zipped_paths,
34 get_encoding_from_headers, prepend_scheme_if_needed,
35 get_auth_from_url, urldefragauth, select_proxy)
36 from .structures import CaseInsensitiveDict
37 from .cookies import extract_cookies_to_jar
38 from .exceptions import (ConnectionError, ConnectTimeout, ReadTimeout, SSLError,
39 ProxyError, RetryError, InvalidSchema, InvalidProxyURL,
40 InvalidURL)
41 from .auth import _basic_auth_str
42
43 try:
44 from pip._vendor.urllib3.contrib.socks import SOCKSProxyManager
45 except ImportError:
46 def SOCKSProxyManager(*args, **kwargs):
47 raise InvalidSchema("Missing dependencies for SOCKS support.")
48
49 DEFAULT_POOLBLOCK = False
50 DEFAULT_POOLSIZE = 10
51 DEFAULT_RETRIES = 0
52 DEFAULT_POOL_TIMEOUT = None
53
54
55 class BaseAdapter(object):
56 """The Base Transport Adapter"""
57
58 def __init__(self):
59 super(BaseAdapter, self).__init__()
60
61 def send(self, request, stream=False, timeout=None, verify=True,
62 cert=None, proxies=None):
63 """Sends PreparedRequest object. Returns Response object.
64
65 :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
66 :param stream: (optional) Whether to stream the request content.
67 :param timeout: (optional) How long to wait for the server to send
68 data before giving up, as a float, or a :ref:`(connect timeout,
69 read timeout) <timeouts>` tuple.
70 :type timeout: float or tuple
71 :param verify: (optional) Either a boolean, in which case it controls whether we verify
72 the server's TLS certificate, or a string, in which case it must be a path
73 to a CA bundle to use
74 :param cert: (optional) Any user-provided SSL certificate to be trusted.
75 :param proxies: (optional) The proxies dictionary to apply to the request.
76 """
77 raise NotImplementedError
78
79 def close(self):
80 """Cleans up adapter specific items."""
81 raise NotImplementedError
82
83
84 class HTTPAdapter(BaseAdapter):
85 """The built-in HTTP Adapter for urllib3.
86
87 Provides a general-case interface for Requests sessions to contact HTTP and
88 HTTPS urls by implementing the Transport Adapter interface. This class will
89 usually be created by the :class:`Session <Session>` class under the
90 covers.
91
92 :param pool_connections: The number of urllib3 connection pools to cache.
93 :param pool_maxsize: The maximum number of connections to save in the pool.
94 :param max_retries: The maximum number of retries each connection
95 should attempt. Note, this applies only to failed DNS lookups, socket
96 connections and connection timeouts, never to requests where data has
97 made it to the server. By default, Requests does not retry failed
98 connections. If you need granular control over the conditions under
99 which we retry a request, import urllib3's ``Retry`` class and pass
100 that instead.
101 :param pool_block: Whether the connection pool should block for connections.
102
103 Usage::
104
105 >>> import requests
106 >>> s = requests.Session()
107 >>> a = requests.adapters.HTTPAdapter(max_retries=3)
108 >>> s.mount('http://', a)
109 """
110 __attrs__ = ['max_retries', 'config', '_pool_connections', '_pool_maxsize',
111 '_pool_block']
112
113 def __init__(self, pool_connections=DEFAULT_POOLSIZE,
114 pool_maxsize=DEFAULT_POOLSIZE, max_retries=DEFAULT_RETRIES,
115 pool_block=DEFAULT_POOLBLOCK):
116 if max_retries == DEFAULT_RETRIES:
117 self.max_retries = Retry(0, read=False)
118 else:
119 self.max_retries = Retry.from_int(max_retries)
120 self.config = {}
121 self.proxy_manager = {}
122
123 super(HTTPAdapter, self).__init__()
124
125 self._pool_connections = pool_connections
126 self._pool_maxsize = pool_maxsize
127 self._pool_block = pool_block
128
129 self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block)
130
131 def __getstate__(self):
132 return {attr: getattr(self, attr, None) for attr in self.__attrs__}
133
134 def __setstate__(self, state):
135 # Can't handle by adding 'proxy_manager' to self.__attrs__ because
136 # self.poolmanager uses a lambda function, which isn't pickleable.
137 self.proxy_manager = {}
138 self.config = {}
139
140 for attr, value in state.items():
141 setattr(self, attr, value)
142
143 self.init_poolmanager(self._pool_connections, self._pool_maxsize,
144 block=self._pool_block)
145
146 def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs):
147 """Initializes a urllib3 PoolManager.
148
149 This method should not be called from user code, and is only
150 exposed for use when subclassing the
151 :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
152
153 :param connections: The number of urllib3 connection pools to cache.
154 :param maxsize: The maximum number of connections to save in the pool.
155 :param block: Block when no free connections are available.
156 :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager.
157 """
158 # save these values for pickling
159 self._pool_connections = connections
160 self._pool_maxsize = maxsize
161 self._pool_block = block
162
163 self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
164 block=block, strict=True, **pool_kwargs)
165
166 def proxy_manager_for(self, proxy, **proxy_kwargs):
167 """Return urllib3 ProxyManager for the given proxy.
168
169 This method should not be called from user code, and is only
170 exposed for use when subclassing the
171 :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
172
173 :param proxy: The proxy to return a urllib3 ProxyManager for.
174 :param proxy_kwargs: Extra keyword arguments used to configure the Proxy Manager.
175 :returns: ProxyManager
176 :rtype: urllib3.ProxyManager
177 """
178 if proxy in self.proxy_manager:
179 manager = self.proxy_manager[proxy]
180 elif proxy.lower().startswith('socks'):
181 username, password = get_auth_from_url(proxy)
182 manager = self.proxy_manager[proxy] = SOCKSProxyManager(
183 proxy,
184 username=username,
185 password=password,
186 num_pools=self._pool_connections,
187 maxsize=self._pool_maxsize,
188 block=self._pool_block,
189 **proxy_kwargs
190 )
191 else:
192 proxy_headers = self.proxy_headers(proxy)
193 manager = self.proxy_manager[proxy] = proxy_from_url(
194 proxy,
195 proxy_headers=proxy_headers,
196 num_pools=self._pool_connections,
197 maxsize=self._pool_maxsize,
198 block=self._pool_block,
199 **proxy_kwargs)
200
201 return manager
202
203 def cert_verify(self, conn, url, verify, cert):
204 """Verify a SSL certificate. This method should not be called from user
205 code, and is only exposed for use when subclassing the
206 :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
207
208 :param conn: The urllib3 connection object associated with the cert.
209 :param url: The requested URL.
210 :param verify: Either a boolean, in which case it controls whether we verify
211 the server's TLS certificate, or a string, in which case it must be a path
212 to a CA bundle to use
213 :param cert: The SSL certificate to verify.
214 """
215 if url.lower().startswith('https') and verify:
216
217 cert_loc = None
218
219 # Allow self-specified cert location.
220 if verify is not True:
221 cert_loc = verify
222
223 if not cert_loc:
224 cert_loc = extract_zipped_paths(DEFAULT_CA_BUNDLE_PATH)
225
226 if not cert_loc or not os.path.exists(cert_loc):
227 raise IOError("Could not find a suitable TLS CA certificate bundle, "
228 "invalid path: {}".format(cert_loc))
229
230 conn.cert_reqs = 'CERT_REQUIRED'
231
232 if not os.path.isdir(cert_loc):
233 conn.ca_certs = cert_loc
234 else:
235 conn.ca_cert_dir = cert_loc
236 else:
237 conn.cert_reqs = 'CERT_NONE'
238 conn.ca_certs = None
239 conn.ca_cert_dir = None
240
241 if cert:
242 if not isinstance(cert, basestring):
243 conn.cert_file = cert[0]
244 conn.key_file = cert[1]
245 else:
246 conn.cert_file = cert
247 conn.key_file = None
248 if conn.cert_file and not os.path.exists(conn.cert_file):
249 raise IOError("Could not find the TLS certificate file, "
250 "invalid path: {}".format(conn.cert_file))
251 if conn.key_file and not os.path.exists(conn.key_file):
252 raise IOError("Could not find the TLS key file, "
253 "invalid path: {}".format(conn.key_file))
254
255 def build_response(self, req, resp):
256 """Builds a :class:`Response <requests.Response>` object from a urllib3
257 response. This should not be called from user code, and is only exposed
258 for use when subclassing the
259 :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`
260
261 :param req: The :class:`PreparedRequest <PreparedRequest>` used to generate the response.
262 :param resp: The urllib3 response object.
263 :rtype: requests.Response
264 """
265 response = Response()
266
267 # Fallback to None if there's no status_code, for whatever reason.
268 response.status_code = getattr(resp, 'status', None)
269
270 # Make headers case-insensitive.
271 response.headers = CaseInsensitiveDict(getattr(resp, 'headers', {}))
272
273 # Set encoding.
274 response.encoding = get_encoding_from_headers(response.headers)
275 response.raw = resp
276 response.reason = response.raw.reason
277
278 if isinstance(req.url, bytes):
279 response.url = req.url.decode('utf-8')
280 else:
281 response.url = req.url
282
283 # Add new cookies from the server.
284 extract_cookies_to_jar(response.cookies, req, resp)
285
286 # Give the Response some context.
287 response.request = req
288 response.connection = self
289
290 return response
291
292 def get_connection(self, url, proxies=None):
293 """Returns a urllib3 connection for the given URL. This should not be
294 called from user code, and is only exposed for use when subclassing the
295 :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
296
297 :param url: The URL to connect to.
298 :param proxies: (optional) A Requests-style dictionary of proxies used on this request.
299 :rtype: urllib3.ConnectionPool
300 """
301 proxy = select_proxy(url, proxies)
302
303 if proxy:
304 proxy = prepend_scheme_if_needed(proxy, 'http')
305 proxy_url = parse_url(proxy)
306 if not proxy_url.host:
307 raise InvalidProxyURL("Please check proxy URL. It is malformed"
308 " and could be missing the host.")
309 proxy_manager = self.proxy_manager_for(proxy)
310 conn = proxy_manager.connection_from_url(url)
311 else:
312 # Only scheme should be lower case
313 parsed = urlparse(url)
314 url = parsed.geturl()
315 conn = self.poolmanager.connection_from_url(url)
316
317 return conn
318
319 def close(self):
320 """Disposes of any internal state.
321
322 Currently, this closes the PoolManager and any active ProxyManager,
323 which closes any pooled connections.
324 """
325 self.poolmanager.clear()
326 for proxy in self.proxy_manager.values():
327 proxy.clear()
328
329 def request_url(self, request, proxies):
330 """Obtain the url to use when making the final request.
331
332 If the message is being sent through a HTTP proxy, the full URL has to
333 be used. Otherwise, we should only use the path portion of the URL.
334
335 This should not be called from user code, and is only exposed for use
336 when subclassing the
337 :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
338
339 :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
340 :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs.
341 :rtype: str
342 """
343 proxy = select_proxy(request.url, proxies)
344 scheme = urlparse(request.url).scheme
345
346 is_proxied_http_request = (proxy and scheme != 'https')
347 using_socks_proxy = False
348 if proxy:
349 proxy_scheme = urlparse(proxy).scheme.lower()
350 using_socks_proxy = proxy_scheme.startswith('socks')
351
352 url = request.path_url
353 if is_proxied_http_request and not using_socks_proxy:
354 url = urldefragauth(request.url)
355
356 return url
357
358 def add_headers(self, request, **kwargs):
359 """Add any headers needed by the connection. As of v2.0 this does
360 nothing by default, but is left for overriding by users that subclass
361 the :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
362
363 This should not be called from user code, and is only exposed for use
364 when subclassing the
365 :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
366
367 :param request: The :class:`PreparedRequest <PreparedRequest>` to add headers to.
368 :param kwargs: The keyword arguments from the call to send().
369 """
370 pass
371
372 def proxy_headers(self, proxy):
373 """Returns a dictionary of the headers to add to any request sent
374 through a proxy. This works with urllib3 magic to ensure that they are
375 correctly sent to the proxy, rather than in a tunnelled request if
376 CONNECT is being used.
377
378 This should not be called from user code, and is only exposed for use
379 when subclassing the
380 :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
381
382 :param proxy: The url of the proxy being used for this request.
383 :rtype: dict
384 """
385 headers = {}
386 username, password = get_auth_from_url(proxy)
387
388 if username:
389 headers['Proxy-Authorization'] = _basic_auth_str(username,
390 password)
391
392 return headers
393
394 def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
395 """Sends PreparedRequest object. Returns Response object.
396
397 :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
398 :param stream: (optional) Whether to stream the request content.
399 :param timeout: (optional) How long to wait for the server to send
400 data before giving up, as a float, or a :ref:`(connect timeout,
401 read timeout) <timeouts>` tuple.
402 :type timeout: float or tuple or urllib3 Timeout object
403 :param verify: (optional) Either a boolean, in which case it controls whether
404 we verify the server's TLS certificate, or a string, in which case it
405 must be a path to a CA bundle to use
406 :param cert: (optional) Any user-provided SSL certificate to be trusted.
407 :param proxies: (optional) The proxies dictionary to apply to the request.
408 :rtype: requests.Response
409 """
410
411 try:
412 conn = self.get_connection(request.url, proxies)
413 except LocationValueError as e:
414 raise InvalidURL(e, request=request)
415
416 self.cert_verify(conn, request.url, verify, cert)
417 url = self.request_url(request, proxies)
418 self.add_headers(request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
419
420 chunked = not (request.body is None or 'Content-Length' in request.headers)
421
422 if isinstance(timeout, tuple):
423 try:
424 connect, read = timeout
425 timeout = TimeoutSauce(connect=connect, read=read)
426 except ValueError as e:
427 # this may raise a string formatting error.
428 err = ("Invalid timeout {}. Pass a (connect, read) "
429 "timeout tuple, or a single float to set "
430 "both timeouts to the same value".format(timeout))
431 raise ValueError(err)
432 elif isinstance(timeout, TimeoutSauce):
433 pass
434 else:
435 timeout = TimeoutSauce(connect=timeout, read=timeout)
436
437 try:
438 if not chunked:
439 resp = conn.urlopen(
440 method=request.method,
441 url=url,
442 body=request.body,
443 headers=request.headers,
444 redirect=False,
445 assert_same_host=False,
446 preload_content=False,
447 decode_content=False,
448 retries=self.max_retries,
449 timeout=timeout
450 )
451
452 # Send the request.
453 else:
454 if hasattr(conn, 'proxy_pool'):
455 conn = conn.proxy_pool
456
457 low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
458
459 try:
460 low_conn.putrequest(request.method,
461 url,
462 skip_accept_encoding=True)
463
464 for header, value in request.headers.items():
465 low_conn.putheader(header, value)
466
467 low_conn.endheaders()
468
469 for i in request.body:
470 low_conn.send(hex(len(i))[2:].encode('utf-8'))
471 low_conn.send(b'\r\n')
472 low_conn.send(i)
473 low_conn.send(b'\r\n')
474 low_conn.send(b'0\r\n\r\n')
475
476 # Receive the response from the server
477 try:
478 # For Python 2.7, use buffering of HTTP responses
479 r = low_conn.getresponse(buffering=True)
480 except TypeError:
481 # For compatibility with Python 3.3+
482 r = low_conn.getresponse()
483
484 resp = HTTPResponse.from_httplib(
485 r,
486 pool=conn,
487 connection=low_conn,
488 preload_content=False,
489 decode_content=False
490 )
491 except:
492 # If we hit any problems here, clean up the connection.
493 # Then, reraise so that we can handle the actual exception.
494 low_conn.close()
495 raise
496
497 except (ProtocolError, socket.error) as err:
498 raise ConnectionError(err, request=request)
499
500 except MaxRetryError as e:
501 if isinstance(e.reason, ConnectTimeoutError):
502 # TODO: Remove this in 3.0.0: see #2811
503 if not isinstance(e.reason, NewConnectionError):
504 raise ConnectTimeout(e, request=request)
505
506 if isinstance(e.reason, ResponseError):
507 raise RetryError(e, request=request)
508
509 if isinstance(e.reason, _ProxyError):
510 raise ProxyError(e, request=request)
511
512 if isinstance(e.reason, _SSLError):
513 # This branch is for urllib3 v1.22 and later.
514 raise SSLError(e, request=request)
515
516 raise ConnectionError(e, request=request)
517
518 except ClosedPoolError as e:
519 raise ConnectionError(e, request=request)
520
521 except _ProxyError as e:
522 raise ProxyError(e)
523
524 except (_SSLError, _HTTPError) as e:
525 if isinstance(e, _SSLError):
526 # This branch is for urllib3 versions earlier than v1.22
527 raise SSLError(e, request=request)
528 elif isinstance(e, ReadTimeoutError):
529 raise ReadTimeout(e, request=request)
530 else:
531 raise
532
533 return self.build_response(request, resp)
534
[end of src/pip/_vendor/requests/adapters.py]
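The `cert` argument handled by `cert_verify()` above is what carries a client certificate down to the urllib3 connection (`conn.cert_file` / `conn.key_file`); a minimal sketch of supplying it through a session, with placeholder paths:
```
# Minimal sketch: the client certificate flows Session -> HTTPAdapter.send() -> cert_verify(),
# which sets cert_file/key_file on the urllib3 connection pool. Paths are placeholders.
from pip._vendor import requests

session = requests.Session()
session.cert = ("client.crt", "client.key")  # placeholder certificate and key
session.verify = "ca.pem"                    # placeholder CA bundle

response = session.get("https://pypi.example.internal/simple/")
print(response.status_code)
```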
[start of src/pip/_vendor/urllib3/contrib/pyopenssl.py]
1 """
2 SSL with SNI_-support for Python 2. Follow these instructions if you would
3 like to verify SSL certificates in Python 2. Note, the default libraries do
4 *not* do certificate checking; you need to do additional work to validate
5 certificates yourself.
6
7 This needs the following packages installed:
8
9 * pyOpenSSL (tested with 16.0.0)
10 * cryptography (minimum 1.3.4, from pyopenssl)
11 * idna (minimum 2.0, from cryptography)
12
13 However, pyopenssl depends on cryptography, which depends on idna, so while we
14 use all three directly here we end up having relatively few packages required.
15
16 You can install them with the following command:
17
18 pip install pyopenssl cryptography idna
19
20 To activate certificate checking, call
21 :func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code
22 before you begin making HTTP requests. This can be done in a ``sitecustomize``
23 module, or at any other time before your application begins using ``urllib3``,
24 like this::
25
26 try:
27 import urllib3.contrib.pyopenssl
28 urllib3.contrib.pyopenssl.inject_into_urllib3()
29 except ImportError:
30 pass
31
32 Now you can use :mod:`urllib3` as you normally would, and it will support SNI
33 when the required modules are installed.
34
35 Activating this module also has the positive side effect of disabling SSL/TLS
36 compression in Python 2 (see `CRIME attack`_).
37
38 If you want to configure the default list of supported cipher suites, you can
39 set the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.
40
41 .. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication
42 .. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)
43 """
44 from __future__ import absolute_import
45
46 import OpenSSL.SSL
47 from cryptography import x509
48 from cryptography.hazmat.backends.openssl import backend as openssl_backend
49 from cryptography.hazmat.backends.openssl.x509 import _Certificate
50
51 try:
52 from cryptography.x509 import UnsupportedExtension
53 except ImportError:
54 # UnsupportedExtension is gone in cryptography >= 2.1.0
55 class UnsupportedExtension(Exception):
56 pass
57
58
59 from socket import timeout, error as SocketError
60 from io import BytesIO
61
62 try: # Platform-specific: Python 2
63 from socket import _fileobject
64 except ImportError: # Platform-specific: Python 3
65 _fileobject = None
66 from ..packages.backports.makefile import backport_makefile
67
68 import logging
69 import ssl
70 from ..packages import six
71 import sys
72
73 from .. import util
74
75
76 __all__ = ["inject_into_urllib3", "extract_from_urllib3"]
77
78 # SNI always works.
79 HAS_SNI = True
80
81 # Map from urllib3 to PyOpenSSL compatible parameter-values.
82 _openssl_versions = {
83 util.PROTOCOL_TLS: OpenSSL.SSL.SSLv23_METHOD,
84 ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,
85 }
86
87 if hasattr(ssl, "PROTOCOL_SSLv3") and hasattr(OpenSSL.SSL, "SSLv3_METHOD"):
88 _openssl_versions[ssl.PROTOCOL_SSLv3] = OpenSSL.SSL.SSLv3_METHOD
89
90 if hasattr(ssl, "PROTOCOL_TLSv1_1") and hasattr(OpenSSL.SSL, "TLSv1_1_METHOD"):
91 _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD
92
93 if hasattr(ssl, "PROTOCOL_TLSv1_2") and hasattr(OpenSSL.SSL, "TLSv1_2_METHOD"):
94 _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD
95
96
97 _stdlib_to_openssl_verify = {
98 ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,
99 ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,
100 ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER
101 + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
102 }
103 _openssl_to_stdlib_verify = dict((v, k) for k, v in _stdlib_to_openssl_verify.items())
104
105 # OpenSSL will only write 16K at a time
106 SSL_WRITE_BLOCKSIZE = 16384
107
108 orig_util_HAS_SNI = util.HAS_SNI
109 orig_util_SSLContext = util.ssl_.SSLContext
110
111
112 log = logging.getLogger(__name__)
113
114
115 def inject_into_urllib3():
116 "Monkey-patch urllib3 with PyOpenSSL-backed SSL-support."
117
118 _validate_dependencies_met()
119
120 util.SSLContext = PyOpenSSLContext
121 util.ssl_.SSLContext = PyOpenSSLContext
122 util.HAS_SNI = HAS_SNI
123 util.ssl_.HAS_SNI = HAS_SNI
124 util.IS_PYOPENSSL = True
125 util.ssl_.IS_PYOPENSSL = True
126
127
128 def extract_from_urllib3():
129 "Undo monkey-patching by :func:`inject_into_urllib3`."
130
131 util.SSLContext = orig_util_SSLContext
132 util.ssl_.SSLContext = orig_util_SSLContext
133 util.HAS_SNI = orig_util_HAS_SNI
134 util.ssl_.HAS_SNI = orig_util_HAS_SNI
135 util.IS_PYOPENSSL = False
136 util.ssl_.IS_PYOPENSSL = False
137
138
139 def _validate_dependencies_met():
140 """
141 Verifies that PyOpenSSL's package-level dependencies have been met.
142 Throws `ImportError` if they are not met.
143 """
144 # Method added in `cryptography==1.1`; not available in older versions
145 from cryptography.x509.extensions import Extensions
146
147 if getattr(Extensions, "get_extension_for_class", None) is None:
148 raise ImportError(
149 "'cryptography' module missing required functionality. "
150 "Try upgrading to v1.3.4 or newer."
151 )
152
153 # pyOpenSSL 0.14 and above use cryptography for OpenSSL bindings. The _x509
154 # attribute is only present on those versions.
155 from OpenSSL.crypto import X509
156
157 x509 = X509()
158 if getattr(x509, "_x509", None) is None:
159 raise ImportError(
160 "'pyOpenSSL' module missing required functionality. "
161 "Try upgrading to v0.14 or newer."
162 )
163
164
165 def _dnsname_to_stdlib(name):
166 """
167 Converts a dNSName SubjectAlternativeName field to the form used by the
168 standard library on the given Python version.
169
170 Cryptography produces a dNSName as a unicode string that was idna-decoded
171 from ASCII bytes. We need to idna-encode that string to get it back, and
172 then on Python 3 we also need to convert to unicode via UTF-8 (the stdlib
173 uses PyUnicode_FromStringAndSize on it, which decodes via UTF-8).
174
175 If the name cannot be idna-encoded then we return None signalling that
176 the name given should be skipped.
177 """
178
179 def idna_encode(name):
180 """
181 Borrowed wholesale from the Python Cryptography Project. It turns out
182 that we can't just safely call `idna.encode`: it can explode for
183 wildcard names. This avoids that problem.
184 """
185 from pip._vendor import idna
186
187 try:
188 for prefix in [u"*.", u"."]:
189 if name.startswith(prefix):
190 name = name[len(prefix) :]
191 return prefix.encode("ascii") + idna.encode(name)
192 return idna.encode(name)
193 except idna.core.IDNAError:
194 return None
195
196 # Don't send IPv6 addresses through the IDNA encoder.
197 if ":" in name:
198 return name
199
200 name = idna_encode(name)
201 if name is None:
202 return None
203 elif sys.version_info >= (3, 0):
204 name = name.decode("utf-8")
205 return name
206
207
208 def get_subj_alt_name(peer_cert):
209 """
210 Given an PyOpenSSL certificate, provides all the subject alternative names.
211 """
212 # Pass the cert to cryptography, which has much better APIs for this.
213 if hasattr(peer_cert, "to_cryptography"):
214 cert = peer_cert.to_cryptography()
215 else:
216 # This is technically using private APIs, but should work across all
217 # relevant versions before PyOpenSSL got a proper API for this.
218 cert = _Certificate(openssl_backend, peer_cert._x509)
219
220 # We want to find the SAN extension. Ask Cryptography to locate it (it's
221 # faster than looping in Python)
222 try:
223 ext = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
224 except x509.ExtensionNotFound:
225 # No such extension, return the empty list.
226 return []
227 except (
228 x509.DuplicateExtension,
229 UnsupportedExtension,
230 x509.UnsupportedGeneralNameType,
231 UnicodeError,
232 ) as e:
233 # A problem has been found with the quality of the certificate. Assume
234 # no SAN field is present.
235 log.warning(
236 "A problem was encountered with the certificate that prevented "
237 "urllib3 from finding the SubjectAlternativeName field. This can "
238 "affect certificate validation. The error was %s",
239 e,
240 )
241 return []
242
243 # We want to return dNSName and iPAddress fields. We need to cast the IPs
244 # back to strings because the match_hostname function wants them as
245 # strings.
246 # Sadly the DNS names need to be idna encoded and then, on Python 3, UTF-8
247 # decoded. This is pretty frustrating, but that's what the standard library
248 # does with certificates, and so we need to attempt to do the same.
249 # We also want to skip over names which cannot be idna encoded.
250 names = [
251 ("DNS", name)
252 for name in map(_dnsname_to_stdlib, ext.get_values_for_type(x509.DNSName))
253 if name is not None
254 ]
255 names.extend(
256 ("IP Address", str(name)) for name in ext.get_values_for_type(x509.IPAddress)
257 )
258
259 return names
260
261
262 class WrappedSocket(object):
263 """API-compatibility wrapper for Python OpenSSL's Connection-class.
264
265 Note: _makefile_refs, _drop() and _reuse() are needed for the garbage
266 collector of pypy.
267 """
268
269 def __init__(self, connection, socket, suppress_ragged_eofs=True):
270 self.connection = connection
271 self.socket = socket
272 self.suppress_ragged_eofs = suppress_ragged_eofs
273 self._makefile_refs = 0
274 self._closed = False
275
276 def fileno(self):
277 return self.socket.fileno()
278
279 # Copy-pasted from Python 3.5 source code
280 def _decref_socketios(self):
281 if self._makefile_refs > 0:
282 self._makefile_refs -= 1
283 if self._closed:
284 self.close()
285
286 def recv(self, *args, **kwargs):
287 try:
288 data = self.connection.recv(*args, **kwargs)
289 except OpenSSL.SSL.SysCallError as e:
290 if self.suppress_ragged_eofs and e.args == (-1, "Unexpected EOF"):
291 return b""
292 else:
293 raise SocketError(str(e))
294 except OpenSSL.SSL.ZeroReturnError:
295 if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:
296 return b""
297 else:
298 raise
299 except OpenSSL.SSL.WantReadError:
300 if not util.wait_for_read(self.socket, self.socket.gettimeout()):
301 raise timeout("The read operation timed out")
302 else:
303 return self.recv(*args, **kwargs)
304
305 # TLS 1.3 post-handshake authentication
306 except OpenSSL.SSL.Error as e:
307 raise ssl.SSLError("read error: %r" % e)
308 else:
309 return data
310
311 def recv_into(self, *args, **kwargs):
312 try:
313 return self.connection.recv_into(*args, **kwargs)
314 except OpenSSL.SSL.SysCallError as e:
315 if self.suppress_ragged_eofs and e.args == (-1, "Unexpected EOF"):
316 return 0
317 else:
318 raise SocketError(str(e))
319 except OpenSSL.SSL.ZeroReturnError:
320 if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:
321 return 0
322 else:
323 raise
324 except OpenSSL.SSL.WantReadError:
325 if not util.wait_for_read(self.socket, self.socket.gettimeout()):
326 raise timeout("The read operation timed out")
327 else:
328 return self.recv_into(*args, **kwargs)
329
330 # TLS 1.3 post-handshake authentication
331 except OpenSSL.SSL.Error as e:
332 raise ssl.SSLError("read error: %r" % e)
333
334 def settimeout(self, timeout):
335 return self.socket.settimeout(timeout)
336
337 def _send_until_done(self, data):
338 while True:
339 try:
340 return self.connection.send(data)
341 except OpenSSL.SSL.WantWriteError:
342 if not util.wait_for_write(self.socket, self.socket.gettimeout()):
343 raise timeout()
344 continue
345 except OpenSSL.SSL.SysCallError as e:
346 raise SocketError(str(e))
347
348 def sendall(self, data):
349 total_sent = 0
350 while total_sent < len(data):
351 sent = self._send_until_done(
352 data[total_sent : total_sent + SSL_WRITE_BLOCKSIZE]
353 )
354 total_sent += sent
355
356 def shutdown(self):
357 # FIXME rethrow compatible exceptions should we ever use this
358 self.connection.shutdown()
359
360 def close(self):
361 if self._makefile_refs < 1:
362 try:
363 self._closed = True
364 return self.connection.close()
365 except OpenSSL.SSL.Error:
366 return
367 else:
368 self._makefile_refs -= 1
369
370 def getpeercert(self, binary_form=False):
371 x509 = self.connection.get_peer_certificate()
372
373 if not x509:
374 return x509
375
376 if binary_form:
377 return OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_ASN1, x509)
378
379 return {
380 "subject": ((("commonName", x509.get_subject().CN),),),
381 "subjectAltName": get_subj_alt_name(x509),
382 }
383
384 def version(self):
385 return self.connection.get_protocol_version_name()
386
387 def _reuse(self):
388 self._makefile_refs += 1
389
390 def _drop(self):
391 if self._makefile_refs < 1:
392 self.close()
393 else:
394 self._makefile_refs -= 1
395
396
397 if _fileobject: # Platform-specific: Python 2
398
399 def makefile(self, mode, bufsize=-1):
400 self._makefile_refs += 1
401 return _fileobject(self, mode, bufsize, close=True)
402
403
404 else: # Platform-specific: Python 3
405 makefile = backport_makefile
406
407 WrappedSocket.makefile = makefile
408
409
410 class PyOpenSSLContext(object):
411 """
412 I am a wrapper class for the PyOpenSSL ``Context`` object. I am responsible
413 for translating the interface of the standard library ``SSLContext`` object
414 to calls into PyOpenSSL.
415 """
416
417 def __init__(self, protocol):
418 self.protocol = _openssl_versions[protocol]
419 self._ctx = OpenSSL.SSL.Context(self.protocol)
420 self._options = 0
421 self.check_hostname = False
422
423 @property
424 def options(self):
425 return self._options
426
427 @options.setter
428 def options(self, value):
429 self._options = value
430 self._ctx.set_options(value)
431
432 @property
433 def verify_mode(self):
434 return _openssl_to_stdlib_verify[self._ctx.get_verify_mode()]
435
436 @verify_mode.setter
437 def verify_mode(self, value):
438 self._ctx.set_verify(_stdlib_to_openssl_verify[value], _verify_callback)
439
440 def set_default_verify_paths(self):
441 self._ctx.set_default_verify_paths()
442
443 def set_ciphers(self, ciphers):
444 if isinstance(ciphers, six.text_type):
445 ciphers = ciphers.encode("utf-8")
446 self._ctx.set_cipher_list(ciphers)
447
448 def load_verify_locations(self, cafile=None, capath=None, cadata=None):
449 if cafile is not None:
450 cafile = cafile.encode("utf-8")
451 if capath is not None:
452 capath = capath.encode("utf-8")
453 self._ctx.load_verify_locations(cafile, capath)
454 if cadata is not None:
455 self._ctx.load_verify_locations(BytesIO(cadata))
456
457 def load_cert_chain(self, certfile, keyfile=None, password=None):
458 self._ctx.use_certificate_chain_file(certfile)
459 if password is not None:
460 if not isinstance(password, six.binary_type):
461 password = password.encode("utf-8")
462 self._ctx.set_passwd_cb(lambda *_: password)
463 self._ctx.use_privatekey_file(keyfile or certfile)
464
465 def wrap_socket(
466 self,
467 sock,
468 server_side=False,
469 do_handshake_on_connect=True,
470 suppress_ragged_eofs=True,
471 server_hostname=None,
472 ):
473 cnx = OpenSSL.SSL.Connection(self._ctx, sock)
474
475 if isinstance(server_hostname, six.text_type): # Platform-specific: Python 3
476 server_hostname = server_hostname.encode("utf-8")
477
478 if server_hostname is not None:
479 cnx.set_tlsext_host_name(server_hostname)
480
481 cnx.set_connect_state()
482
483 while True:
484 try:
485 cnx.do_handshake()
486 except OpenSSL.SSL.WantReadError:
487 if not util.wait_for_read(sock, sock.gettimeout()):
488 raise timeout("select timed out")
489 continue
490 except OpenSSL.SSL.Error as e:
491 raise ssl.SSLError("bad handshake: %r" % e)
492 break
493
494 return WrappedSocket(cnx, sock)
495
496
497 def _verify_callback(cnx, x509, err_no, err_depth, return_code):
498 return err_no == 0
499
[end of src/pip/_vendor/urllib3/contrib/pyopenssl.py]
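`PyOpenSSLContext.load_cert_chain()` above is the hook through which a client certificate reaches PyOpenSSL once `inject_into_urllib3()` has been called; a minimal sketch at the urllib3 level, with placeholder paths:
```
# Minimal sketch: after injection, urllib3 builds its TLS contexts through PyOpenSSLContext,
# so cert_file/key_file end up in load_cert_chain() above. Paths are placeholders.
from pip._vendor import urllib3
from pip._vendor.urllib3.contrib import pyopenssl

pyopenssl.inject_into_urllib3()

http = urllib3.PoolManager(
    cert_reqs="CERT_REQUIRED",
    ca_certs="ca.pem",          # placeholder CA bundle
    cert_file="client.crt",     # placeholder client certificate
    key_file="client.key",      # placeholder private key
)
response = http.request("GET", "https://pypi.example.internal/simple/")
print(response.status)
```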
[start of src/pip/_vendor/urllib3/contrib/securetransport.py]
1 """
2 SecureTransport support for urllib3 via ctypes.
3
4 This makes platform-native TLS available to urllib3 users on macOS without the
5 use of a compiler. This is an important feature because the Python Package
6 Index is moving to become a TLSv1.2-or-higher server, and the default OpenSSL
7 that ships with macOS is not capable of doing TLSv1.2. The only way to resolve
8 this is to give macOS users an alternative solution to the problem, and that
9 solution is to use SecureTransport.
10
11 We use ctypes here because this solution must not require a compiler. That's
12 because pip is not allowed to require a compiler either.
13
14 This is not intended to be a seriously long-term solution to this problem.
15 The hope is that PEP 543 will eventually solve this issue for us, at which
16 point we can retire this contrib module. But in the short term, we need to
17 solve the impending tire fire that is Python on Mac without this kind of
18 contrib module. So...here we are.
19
20 To use this module, simply import and inject it::
21
22 import urllib3.contrib.securetransport
23 urllib3.contrib.securetransport.inject_into_urllib3()
24
25 Happy TLSing!
26
27 This code is a bastardised version of the code found in Will Bond's oscrypto
28 library. An enormous debt is owed to him for blazing this trail for us. For
29 that reason, this code should be considered to be covered both by urllib3's
30 license and by oscrypto's:
31
32 Copyright (c) 2015-2016 Will Bond <will@wbond.net>
33
34 Permission is hereby granted, free of charge, to any person obtaining a
35 copy of this software and associated documentation files (the "Software"),
36 to deal in the Software without restriction, including without limitation
37 the rights to use, copy, modify, merge, publish, distribute, sublicense,
38 and/or sell copies of the Software, and to permit persons to whom the
39 Software is furnished to do so, subject to the following conditions:
40
41 The above copyright notice and this permission notice shall be included in
42 all copies or substantial portions of the Software.
43
44 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
45 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
46 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
47 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
48 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
49 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
50 DEALINGS IN THE SOFTWARE.
51 """
52 from __future__ import absolute_import
53
54 import contextlib
55 import ctypes
56 import errno
57 import os.path
58 import shutil
59 import socket
60 import ssl
61 import threading
62 import weakref
63
64 from .. import util
65 from ._securetransport.bindings import Security, SecurityConst, CoreFoundation
66 from ._securetransport.low_level import (
67 _assert_no_error,
68 _cert_array_from_pem,
69 _temporary_keychain,
70 _load_client_cert_chain,
71 )
72
73 try: # Platform-specific: Python 2
74 from socket import _fileobject
75 except ImportError: # Platform-specific: Python 3
76 _fileobject = None
77 from ..packages.backports.makefile import backport_makefile
78
79 __all__ = ["inject_into_urllib3", "extract_from_urllib3"]
80
81 # SNI always works
82 HAS_SNI = True
83
84 orig_util_HAS_SNI = util.HAS_SNI
85 orig_util_SSLContext = util.ssl_.SSLContext
86
87 # This dictionary is used by the read callback to obtain a handle to the
88 # calling wrapped socket. This is a pretty silly approach, but for now it'll
89 # do. I feel like I should be able to smuggle a handle to the wrapped socket
90 # directly in the SSLConnectionRef, but for now this approach will work I
91 # guess.
92 #
93 # We need to lock around this structure for inserts, but we don't do it for
94 # reads/writes in the callbacks. The reasoning here goes as follows:
95 #
96 # 1. It is not possible to call into the callbacks before the dictionary is
97 # populated, so once in the callback the id must be in the dictionary.
98 # 2. The callbacks don't mutate the dictionary, they only read from it, and
99 # so cannot conflict with any of the insertions.
100 #
101 # This is good: if we had to lock in the callbacks we'd drastically slow down
102 # the performance of this code.
103 _connection_refs = weakref.WeakValueDictionary()
104 _connection_ref_lock = threading.Lock()
105
106 # Limit writes to 16kB. This is OpenSSL's limit, but we'll cargo-cult it over
107 # for no better reason than we need *a* limit, and this one is right there.
108 SSL_WRITE_BLOCKSIZE = 16384
109
110 # This is our equivalent of util.ssl_.DEFAULT_CIPHERS, but expanded out to
111 # individual cipher suites. We need to do this because this is how
112 # SecureTransport wants them.
113 CIPHER_SUITES = [
114 SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
115 SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
116 SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
117 SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
118 SecurityConst.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
119 SecurityConst.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
120 SecurityConst.TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,
121 SecurityConst.TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,
122 SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
123 SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
124 SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
125 SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
126 SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
127 SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
128 SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
129 SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
130 SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
131 SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA,
132 SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
133 SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA,
134 SecurityConst.TLS_AES_256_GCM_SHA384,
135 SecurityConst.TLS_AES_128_GCM_SHA256,
136 SecurityConst.TLS_RSA_WITH_AES_256_GCM_SHA384,
137 SecurityConst.TLS_RSA_WITH_AES_128_GCM_SHA256,
138 SecurityConst.TLS_AES_128_CCM_8_SHA256,
139 SecurityConst.TLS_AES_128_CCM_SHA256,
140 SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA256,
141 SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA256,
142 SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA,
143 SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA,
144 ]
145
146 # Basically this is simple: for PROTOCOL_SSLv23 we turn it into a low of
147 # TLSv1 and a high of TLSv1.3. For everything else, we pin to that version.
148 # TLSv1 to 1.2 are supported on macOS 10.8+ and TLSv1.3 is macOS 10.13+
149 _protocol_to_min_max = {
150 util.PROTOCOL_TLS: (
151 SecurityConst.kTLSProtocol1,
152 SecurityConst.kTLSProtocolMaxSupported,
153 )
154 }
155
156 if hasattr(ssl, "PROTOCOL_SSLv2"):
157 _protocol_to_min_max[ssl.PROTOCOL_SSLv2] = (
158 SecurityConst.kSSLProtocol2,
159 SecurityConst.kSSLProtocol2,
160 )
161 if hasattr(ssl, "PROTOCOL_SSLv3"):
162 _protocol_to_min_max[ssl.PROTOCOL_SSLv3] = (
163 SecurityConst.kSSLProtocol3,
164 SecurityConst.kSSLProtocol3,
165 )
166 if hasattr(ssl, "PROTOCOL_TLSv1"):
167 _protocol_to_min_max[ssl.PROTOCOL_TLSv1] = (
168 SecurityConst.kTLSProtocol1,
169 SecurityConst.kTLSProtocol1,
170 )
171 if hasattr(ssl, "PROTOCOL_TLSv1_1"):
172 _protocol_to_min_max[ssl.PROTOCOL_TLSv1_1] = (
173 SecurityConst.kTLSProtocol11,
174 SecurityConst.kTLSProtocol11,
175 )
176 if hasattr(ssl, "PROTOCOL_TLSv1_2"):
177 _protocol_to_min_max[ssl.PROTOCOL_TLSv1_2] = (
178 SecurityConst.kTLSProtocol12,
179 SecurityConst.kTLSProtocol12,
180 )
181
182
183 def inject_into_urllib3():
184 """
185 Monkey-patch urllib3 with SecureTransport-backed SSL-support.
186 """
187 util.SSLContext = SecureTransportContext
188 util.ssl_.SSLContext = SecureTransportContext
189 util.HAS_SNI = HAS_SNI
190 util.ssl_.HAS_SNI = HAS_SNI
191 util.IS_SECURETRANSPORT = True
192 util.ssl_.IS_SECURETRANSPORT = True
193
194
195 def extract_from_urllib3():
196 """
197 Undo monkey-patching by :func:`inject_into_urllib3`.
198 """
199 util.SSLContext = orig_util_SSLContext
200 util.ssl_.SSLContext = orig_util_SSLContext
201 util.HAS_SNI = orig_util_HAS_SNI
202 util.ssl_.HAS_SNI = orig_util_HAS_SNI
203 util.IS_SECURETRANSPORT = False
204 util.ssl_.IS_SECURETRANSPORT = False
205
206
207 def _read_callback(connection_id, data_buffer, data_length_pointer):
208 """
209 SecureTransport read callback. This is called by ST to request that data
210 be returned from the socket.
211 """
212 wrapped_socket = None
213 try:
214 wrapped_socket = _connection_refs.get(connection_id)
215 if wrapped_socket is None:
216 return SecurityConst.errSSLInternal
217 base_socket = wrapped_socket.socket
218
219 requested_length = data_length_pointer[0]
220
221 timeout = wrapped_socket.gettimeout()
222 error = None
223 read_count = 0
224
225 try:
226 while read_count < requested_length:
227 if timeout is None or timeout >= 0:
228 if not util.wait_for_read(base_socket, timeout):
229 raise socket.error(errno.EAGAIN, "timed out")
230
231 remaining = requested_length - read_count
232 buffer = (ctypes.c_char * remaining).from_address(
233 data_buffer + read_count
234 )
235 chunk_size = base_socket.recv_into(buffer, remaining)
236 read_count += chunk_size
237 if not chunk_size:
238 if not read_count:
239 return SecurityConst.errSSLClosedGraceful
240 break
241 except (socket.error) as e:
242 error = e.errno
243
244 if error is not None and error != errno.EAGAIN:
245 data_length_pointer[0] = read_count
246 if error == errno.ECONNRESET or error == errno.EPIPE:
247 return SecurityConst.errSSLClosedAbort
248 raise
249
250 data_length_pointer[0] = read_count
251
252 if read_count != requested_length:
253 return SecurityConst.errSSLWouldBlock
254
255 return 0
256 except Exception as e:
257 if wrapped_socket is not None:
258 wrapped_socket._exception = e
259 return SecurityConst.errSSLInternal
260
261
262 def _write_callback(connection_id, data_buffer, data_length_pointer):
263 """
264 SecureTransport write callback. This is called by ST to request that data
265 actually be sent on the network.
266 """
267 wrapped_socket = None
268 try:
269 wrapped_socket = _connection_refs.get(connection_id)
270 if wrapped_socket is None:
271 return SecurityConst.errSSLInternal
272 base_socket = wrapped_socket.socket
273
274 bytes_to_write = data_length_pointer[0]
275 data = ctypes.string_at(data_buffer, bytes_to_write)
276
277 timeout = wrapped_socket.gettimeout()
278 error = None
279 sent = 0
280
281 try:
282 while sent < bytes_to_write:
283 if timeout is None or timeout >= 0:
284 if not util.wait_for_write(base_socket, timeout):
285 raise socket.error(errno.EAGAIN, "timed out")
286 chunk_sent = base_socket.send(data)
287 sent += chunk_sent
288
289 # This has some needless copying here, but I'm not sure there's
290 # much value in optimising this data path.
291 data = data[chunk_sent:]
292 except (socket.error) as e:
293 error = e.errno
294
295 if error is not None and error != errno.EAGAIN:
296 data_length_pointer[0] = sent
297 if error == errno.ECONNRESET or error == errno.EPIPE:
298 return SecurityConst.errSSLClosedAbort
299 raise
300
301 data_length_pointer[0] = sent
302
303 if sent != bytes_to_write:
304 return SecurityConst.errSSLWouldBlock
305
306 return 0
307 except Exception as e:
308 if wrapped_socket is not None:
309 wrapped_socket._exception = e
310 return SecurityConst.errSSLInternal
311
312
313 # We need to keep these two objects references alive: if they get GC'd while
314 # in use then SecureTransport could attempt to call a function that is in freed
315 # memory. That would be...uh...bad. Yeah, that's the word. Bad.
316 _read_callback_pointer = Security.SSLReadFunc(_read_callback)
317 _write_callback_pointer = Security.SSLWriteFunc(_write_callback)
318
319
320 class WrappedSocket(object):
321 """
322 API-compatibility wrapper for Python's OpenSSL wrapped socket object.
323
324 Note: _makefile_refs, _drop(), and _reuse() are needed for the garbage
325 collector of PyPy.
326 """
327
328 def __init__(self, socket):
329 self.socket = socket
330 self.context = None
331 self._makefile_refs = 0
332 self._closed = False
333 self._exception = None
334 self._keychain = None
335 self._keychain_dir = None
336 self._client_cert_chain = None
337
338 # We save off the previously-configured timeout and then set it to
339 # zero. This is done because we use select and friends to handle the
340 # timeouts, but if we leave the timeout set on the lower socket then
341 # Python will "kindly" call select on that socket again for us. Avoid
342 # that by forcing the timeout to zero.
343 self._timeout = self.socket.gettimeout()
344 self.socket.settimeout(0)
345
346 @contextlib.contextmanager
347 def _raise_on_error(self):
348 """
349 A context manager that can be used to wrap calls that do I/O from
350 SecureTransport. If any of the I/O callbacks hit an exception, this
351 context manager will correctly propagate the exception after the fact.
352 This avoids silently swallowing those exceptions.
353
354 It also correctly forces the socket closed.
355 """
356 self._exception = None
357
358 # We explicitly don't catch around this yield because in the unlikely
359 # event that an exception was hit in the block we don't want to swallow
360 # it.
361 yield
362 if self._exception is not None:
363 exception, self._exception = self._exception, None
364 self.close()
365 raise exception
366
367 def _set_ciphers(self):
368 """
369 Sets up the allowed ciphers. By default this matches the set in
370 util.ssl_.DEFAULT_CIPHERS, at least as supported by macOS. This is done
371 custom and doesn't allow changing at this time, mostly because parsing
372 OpenSSL cipher strings is going to be a freaking nightmare.
373 """
374 ciphers = (Security.SSLCipherSuite * len(CIPHER_SUITES))(*CIPHER_SUITES)
375 result = Security.SSLSetEnabledCiphers(
376 self.context, ciphers, len(CIPHER_SUITES)
377 )
378 _assert_no_error(result)
379
380 def _custom_validate(self, verify, trust_bundle):
381 """
382 Called when we have set custom validation. We do this in two cases:
383 first, when cert validation is entirely disabled; and second, when
384 using a custom trust DB.
385 """
386 # If we disabled cert validation, just say: cool.
387 if not verify:
388 return
389
390 # We want data in memory, so load it up.
391 if os.path.isfile(trust_bundle):
392 with open(trust_bundle, "rb") as f:
393 trust_bundle = f.read()
394
395 cert_array = None
396 trust = Security.SecTrustRef()
397
398 try:
399 # Get a CFArray that contains the certs we want.
400 cert_array = _cert_array_from_pem(trust_bundle)
401
402 # Ok, now the hard part. We want to get the SecTrustRef that ST has
403 # created for this connection, shove our CAs into it, tell ST to
404 # ignore everything else it knows, and then ask if it can build a
405 # chain. This is a buuuunch of code.
406 result = Security.SSLCopyPeerTrust(self.context, ctypes.byref(trust))
407 _assert_no_error(result)
408 if not trust:
409 raise ssl.SSLError("Failed to copy trust reference")
410
411 result = Security.SecTrustSetAnchorCertificates(trust, cert_array)
412 _assert_no_error(result)
413
414 result = Security.SecTrustSetAnchorCertificatesOnly(trust, True)
415 _assert_no_error(result)
416
417 trust_result = Security.SecTrustResultType()
418 result = Security.SecTrustEvaluate(trust, ctypes.byref(trust_result))
419 _assert_no_error(result)
420 finally:
421 if trust:
422 CoreFoundation.CFRelease(trust)
423
424 if cert_array is not None:
425 CoreFoundation.CFRelease(cert_array)
426
427 # Ok, now we can look at what the result was.
428 successes = (
429 SecurityConst.kSecTrustResultUnspecified,
430 SecurityConst.kSecTrustResultProceed,
431 )
432 if trust_result.value not in successes:
433 raise ssl.SSLError(
434 "certificate verify failed, error code: %d" % trust_result.value
435 )
436
437 def handshake(
438 self,
439 server_hostname,
440 verify,
441 trust_bundle,
442 min_version,
443 max_version,
444 client_cert,
445 client_key,
446 client_key_passphrase,
447 ):
448 """
449 Actually performs the TLS handshake. This is run automatically by
450 wrapped socket, and shouldn't be needed in user code.
451 """
452 # First, we do the initial bits of connection setup. We need to create
453 # a context, set its I/O funcs, and set the connection reference.
454 self.context = Security.SSLCreateContext(
455 None, SecurityConst.kSSLClientSide, SecurityConst.kSSLStreamType
456 )
457 result = Security.SSLSetIOFuncs(
458 self.context, _read_callback_pointer, _write_callback_pointer
459 )
460 _assert_no_error(result)
461
462 # Here we need to compute the handle to use. We do this by taking the
463 # id of self modulo 2**31 - 1. If this is already in the dictionary, we
464 # just keep incrementing by one until we find a free space.
465 with _connection_ref_lock:
466 handle = id(self) % 2147483647
467 while handle in _connection_refs:
468 handle = (handle + 1) % 2147483647
469 _connection_refs[handle] = self
470
471 result = Security.SSLSetConnection(self.context, handle)
472 _assert_no_error(result)
473
474 # If we have a server hostname, we should set that too.
475 if server_hostname:
476 if not isinstance(server_hostname, bytes):
477 server_hostname = server_hostname.encode("utf-8")
478
479 result = Security.SSLSetPeerDomainName(
480 self.context, server_hostname, len(server_hostname)
481 )
482 _assert_no_error(result)
483
484 # Setup the ciphers.
485 self._set_ciphers()
486
487 # Set the minimum and maximum TLS versions.
488 result = Security.SSLSetProtocolVersionMin(self.context, min_version)
489 _assert_no_error(result)
490
491 # TLS 1.3 isn't necessarily enabled by the OS
492 # so we have to detect when we error out and try
493 # setting TLS 1.3 if it's allowed. kTLSProtocolMaxSupported
494 # was added in macOS 10.13 along with kTLSProtocol13.
495 result = Security.SSLSetProtocolVersionMax(self.context, max_version)
496 if result != 0 and max_version == SecurityConst.kTLSProtocolMaxSupported:
497 result = Security.SSLSetProtocolVersionMax(
498 self.context, SecurityConst.kTLSProtocol12
499 )
500 _assert_no_error(result)
501
502 # If there's a trust DB, we need to use it. We do that by telling
503 # SecureTransport to break on server auth. We also do that if we don't
504 # want to validate the certs at all: we just won't actually do any
505 # authing in that case.
506 if not verify or trust_bundle is not None:
507 result = Security.SSLSetSessionOption(
508 self.context, SecurityConst.kSSLSessionOptionBreakOnServerAuth, True
509 )
510 _assert_no_error(result)
511
512 # If there's a client cert, we need to use it.
513 if client_cert:
514 self._keychain, self._keychain_dir = _temporary_keychain()
515 self._client_cert_chain = _load_client_cert_chain(
516 self._keychain, client_cert, client_key
517 )
518 result = Security.SSLSetCertificate(self.context, self._client_cert_chain)
519 _assert_no_error(result)
520
521 while True:
522 with self._raise_on_error():
523 result = Security.SSLHandshake(self.context)
524
525 if result == SecurityConst.errSSLWouldBlock:
526 raise socket.timeout("handshake timed out")
527 elif result == SecurityConst.errSSLServerAuthCompleted:
528 self._custom_validate(verify, trust_bundle)
529 continue
530 else:
531 _assert_no_error(result)
532 break
533
534 def fileno(self):
535 return self.socket.fileno()
536
537 # Copy-pasted from Python 3.5 source code
538 def _decref_socketios(self):
539 if self._makefile_refs > 0:
540 self._makefile_refs -= 1
541 if self._closed:
542 self.close()
543
544 def recv(self, bufsiz):
545 buffer = ctypes.create_string_buffer(bufsiz)
546 bytes_read = self.recv_into(buffer, bufsiz)
547 data = buffer[:bytes_read]
548 return data
549
550 def recv_into(self, buffer, nbytes=None):
551 # Read short on EOF.
552 if self._closed:
553 return 0
554
555 if nbytes is None:
556 nbytes = len(buffer)
557
558 buffer = (ctypes.c_char * nbytes).from_buffer(buffer)
559 processed_bytes = ctypes.c_size_t(0)
560
561 with self._raise_on_error():
562 result = Security.SSLRead(
563 self.context, buffer, nbytes, ctypes.byref(processed_bytes)
564 )
565
566 # There are some result codes that we want to treat as "not always
567 # errors". Specifically, those are errSSLWouldBlock,
568 # errSSLClosedGraceful, and errSSLClosedNoNotify.
569 if result == SecurityConst.errSSLWouldBlock:
570 # If we didn't process any bytes, then this was just a time out.
571 # However, we can get errSSLWouldBlock in situations when we *did*
572 # read some data, and in those cases we should just read "short"
573 # and return.
574 if processed_bytes.value == 0:
575 # Timed out, no data read.
576 raise socket.timeout("recv timed out")
577 elif result in (
578 SecurityConst.errSSLClosedGraceful,
579 SecurityConst.errSSLClosedNoNotify,
580 ):
581 # The remote peer has closed this connection. We should do so as
582 # well. Note that we don't actually return here because in
583 # principle this could actually be fired along with return data.
584 # It's unlikely though.
585 self.close()
586 else:
587 _assert_no_error(result)
588
589 # Ok, we read and probably succeeded. We should return whatever data
590 # was actually read.
591 return processed_bytes.value
592
593 def settimeout(self, timeout):
594 self._timeout = timeout
595
596 def gettimeout(self):
597 return self._timeout
598
599 def send(self, data):
600 processed_bytes = ctypes.c_size_t(0)
601
602 with self._raise_on_error():
603 result = Security.SSLWrite(
604 self.context, data, len(data), ctypes.byref(processed_bytes)
605 )
606
607 if result == SecurityConst.errSSLWouldBlock and processed_bytes.value == 0:
608 # Timed out
609 raise socket.timeout("send timed out")
610 else:
611 _assert_no_error(result)
612
613 # We sent, and probably succeeded. Tell them how much we sent.
614 return processed_bytes.value
615
616 def sendall(self, data):
617 total_sent = 0
618 while total_sent < len(data):
619 sent = self.send(data[total_sent : total_sent + SSL_WRITE_BLOCKSIZE])
620 total_sent += sent
621
622 def shutdown(self):
623 with self._raise_on_error():
624 Security.SSLClose(self.context)
625
626 def close(self):
627 # TODO: should I do clean shutdown here? Do I have to?
628 if self._makefile_refs < 1:
629 self._closed = True
630 if self.context:
631 CoreFoundation.CFRelease(self.context)
632 self.context = None
633 if self._client_cert_chain:
634 CoreFoundation.CFRelease(self._client_cert_chain)
635 self._client_cert_chain = None
636 if self._keychain:
637 Security.SecKeychainDelete(self._keychain)
638 CoreFoundation.CFRelease(self._keychain)
639 shutil.rmtree(self._keychain_dir)
640 self._keychain = self._keychain_dir = None
641 return self.socket.close()
642 else:
643 self._makefile_refs -= 1
644
645 def getpeercert(self, binary_form=False):
646 # Urgh, annoying.
647 #
648 # Here's how we do this:
649 #
650 # 1. Call SSLCopyPeerTrust to get hold of the trust object for this
651 # connection.
652 # 2. Call SecTrustGetCertificateAtIndex for index 0 to get the leaf.
653 # 3. To get the CN, call SecCertificateCopyCommonName and process that
654 # string so that it's of the appropriate type.
655 # 4. To get the SAN, we need to do something a bit more complex:
656 # a. Call SecCertificateCopyValues to get the data, requesting
657 # kSecOIDSubjectAltName.
658 # b. Mess about with this dictionary to try to get the SANs out.
659 #
660 # This is gross. Really gross. It's going to be a few hundred LoC extra
661 # just to repeat something that SecureTransport can *already do*. So my
662 # operating assumption at this time is that what we want to do is
663 # instead to just flag to urllib3 that it shouldn't do its own hostname
664 # validation when using SecureTransport.
665 if not binary_form:
666 raise ValueError("SecureTransport only supports dumping binary certs")
667 trust = Security.SecTrustRef()
668 certdata = None
669 der_bytes = None
670
671 try:
672 # Grab the trust store.
673 result = Security.SSLCopyPeerTrust(self.context, ctypes.byref(trust))
674 _assert_no_error(result)
675 if not trust:
676 # Probably we haven't done the handshake yet. No biggie.
677 return None
678
679 cert_count = Security.SecTrustGetCertificateCount(trust)
680 if not cert_count:
681 # Also a case that might happen if we haven't handshaked.
682 # Handshook? Handshaken?
683 return None
684
685 leaf = Security.SecTrustGetCertificateAtIndex(trust, 0)
686 assert leaf
687
688 # Ok, now we want the DER bytes.
689 certdata = Security.SecCertificateCopyData(leaf)
690 assert certdata
691
692 data_length = CoreFoundation.CFDataGetLength(certdata)
693 data_buffer = CoreFoundation.CFDataGetBytePtr(certdata)
694 der_bytes = ctypes.string_at(data_buffer, data_length)
695 finally:
696 if certdata:
697 CoreFoundation.CFRelease(certdata)
698 if trust:
699 CoreFoundation.CFRelease(trust)
700
701 return der_bytes
702
703 def version(self):
704 protocol = Security.SSLProtocol()
705 result = Security.SSLGetNegotiatedProtocolVersion(
706 self.context, ctypes.byref(protocol)
707 )
708 _assert_no_error(result)
709 if protocol.value == SecurityConst.kTLSProtocol13:
710 return "TLSv1.3"
711 elif protocol.value == SecurityConst.kTLSProtocol12:
712 return "TLSv1.2"
713 elif protocol.value == SecurityConst.kTLSProtocol11:
714 return "TLSv1.1"
715 elif protocol.value == SecurityConst.kTLSProtocol1:
716 return "TLSv1"
717 elif protocol.value == SecurityConst.kSSLProtocol3:
718 return "SSLv3"
719 elif protocol.value == SecurityConst.kSSLProtocol2:
720 return "SSLv2"
721 else:
722 raise ssl.SSLError("Unknown TLS version: %r" % protocol)
723
724 def _reuse(self):
725 self._makefile_refs += 1
726
727 def _drop(self):
728 if self._makefile_refs < 1:
729 self.close()
730 else:
731 self._makefile_refs -= 1
732
733
734 if _fileobject: # Platform-specific: Python 2
735
736 def makefile(self, mode, bufsize=-1):
737 self._makefile_refs += 1
738 return _fileobject(self, mode, bufsize, close=True)
739
740
741 else: # Platform-specific: Python 3
742
743 def makefile(self, mode="r", buffering=None, *args, **kwargs):
744 # We disable buffering with SecureTransport because it conflicts with
745 # the buffering that ST does internally (see issue #1153 for more).
746 buffering = 0
747 return backport_makefile(self, mode, buffering, *args, **kwargs)
748
749
750 WrappedSocket.makefile = makefile
751
752
753 class SecureTransportContext(object):
754 """
755 I am a wrapper class for the SecureTransport library, to translate the
756 interface of the standard library ``SSLContext`` object to calls into
757 SecureTransport.
758 """
759
760 def __init__(self, protocol):
761 self._min_version, self._max_version = _protocol_to_min_max[protocol]
762 self._options = 0
763 self._verify = False
764 self._trust_bundle = None
765 self._client_cert = None
766 self._client_key = None
767 self._client_key_passphrase = None
768
769 @property
770 def check_hostname(self):
771 """
772 SecureTransport cannot have its hostname checking disabled. For more,
773 see the comment on getpeercert() in this file.
774 """
775 return True
776
777 @check_hostname.setter
778 def check_hostname(self, value):
779 """
780 SecureTransport cannot have its hostname checking disabled. For more,
781 see the comment on getpeercert() in this file.
782 """
783 pass
784
785 @property
786 def options(self):
787 # TODO: Well, crap.
788 #
789 # So this is the bit of the code that is the most likely to cause us
790 # trouble. Essentially we need to enumerate all of the SSL options that
791 # users might want to use and try to see if we can sensibly translate
792 # them, or whether we should just ignore them.
793 return self._options
794
795 @options.setter
796 def options(self, value):
797 # TODO: Update in line with above.
798 self._options = value
799
800 @property
801 def verify_mode(self):
802 return ssl.CERT_REQUIRED if self._verify else ssl.CERT_NONE
803
804 @verify_mode.setter
805 def verify_mode(self, value):
806 self._verify = True if value == ssl.CERT_REQUIRED else False
807
808 def set_default_verify_paths(self):
809 # So, this has to do something a bit weird. Specifically, what it does
810 # is nothing.
811 #
812 # This means that, if we had previously had load_verify_locations
813 # called, this does not undo that. We need to do that because it turns
814 # out that the rest of the urllib3 code will attempt to load the
815 # default verify paths if it hasn't been told about any paths, even if
816 # the context itself was sometime earlier. We resolve that by just
817 # ignoring it.
818 pass
819
820 def load_default_certs(self):
821 return self.set_default_verify_paths()
822
823 def set_ciphers(self, ciphers):
824 # For now, we just require the default cipher string.
825 if ciphers != util.ssl_.DEFAULT_CIPHERS:
826 raise ValueError("SecureTransport doesn't support custom cipher strings")
827
828 def load_verify_locations(self, cafile=None, capath=None, cadata=None):
829 # OK, we only really support cadata and cafile.
830 if capath is not None:
831 raise ValueError("SecureTransport does not support cert directories")
832
833 self._trust_bundle = cafile or cadata
834
835 def load_cert_chain(self, certfile, keyfile=None, password=None):
836 self._client_cert = certfile
837 self._client_key = keyfile
838 self._client_cert_passphrase = password
839
840 def wrap_socket(
841 self,
842 sock,
843 server_side=False,
844 do_handshake_on_connect=True,
845 suppress_ragged_eofs=True,
846 server_hostname=None,
847 ):
848 # So, what do we do here? Firstly, we assert some properties. This is a
849 # stripped down shim, so there is some functionality we don't support.
850 # See PEP 543 for the real deal.
851 assert not server_side
852 assert do_handshake_on_connect
853 assert suppress_ragged_eofs
854
855 # Ok, we're good to go. Now we want to create the wrapped socket object
856 # and store it in the appropriate place.
857 wrapped_socket = WrappedSocket(sock)
858
859 # Now we can handshake
860 wrapped_socket.handshake(
861 server_hostname,
862 self._verify,
863 self._trust_bundle,
864 self._min_version,
865 self._max_version,
866 self._client_cert,
867 self._client_key,
868 self._client_key_passphrase,
869 )
870 return wrapped_socket
871
[end of src/pip/_vendor/urllib3/contrib/securetransport.py]
[start of src/pip/_vendor/urllib3/exceptions.py]
1 from __future__ import absolute_import
2 from .packages.six.moves.http_client import IncompleteRead as httplib_IncompleteRead
3
4 # Base Exceptions
5
6
7 class HTTPError(Exception):
8 "Base exception used by this module."
9 pass
10
11
12 class HTTPWarning(Warning):
13 "Base warning used by this module."
14 pass
15
16
17 class PoolError(HTTPError):
18 "Base exception for errors caused within a pool."
19
20 def __init__(self, pool, message):
21 self.pool = pool
22 HTTPError.__init__(self, "%s: %s" % (pool, message))
23
24 def __reduce__(self):
25 # For pickling purposes.
26 return self.__class__, (None, None)
27
28
29 class RequestError(PoolError):
30 "Base exception for PoolErrors that have associated URLs."
31
32 def __init__(self, pool, url, message):
33 self.url = url
34 PoolError.__init__(self, pool, message)
35
36 def __reduce__(self):
37 # For pickling purposes.
38 return self.__class__, (None, self.url, None)
39
40
41 class SSLError(HTTPError):
42 "Raised when SSL certificate fails in an HTTPS connection."
43 pass
44
45
46 class ProxyError(HTTPError):
47 "Raised when the connection to a proxy fails."
48 pass
49
50
51 class DecodeError(HTTPError):
52 "Raised when automatic decoding based on Content-Type fails."
53 pass
54
55
56 class ProtocolError(HTTPError):
57 "Raised when something unexpected happens mid-request/response."
58 pass
59
60
61 #: Renamed to ProtocolError but aliased for backwards compatibility.
62 ConnectionError = ProtocolError
63
64
65 # Leaf Exceptions
66
67
68 class MaxRetryError(RequestError):
69 """Raised when the maximum number of retries is exceeded.
70
71 :param pool: The connection pool
72 :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool`
73 :param string url: The requested Url
74 :param exceptions.Exception reason: The underlying error
75
76 """
77
78 def __init__(self, pool, url, reason=None):
79 self.reason = reason
80
81 message = "Max retries exceeded with url: %s (Caused by %r)" % (url, reason)
82
83 RequestError.__init__(self, pool, url, message)
84
85
86 class HostChangedError(RequestError):
87 "Raised when an existing pool gets a request for a foreign host."
88
89 def __init__(self, pool, url, retries=3):
90 message = "Tried to open a foreign host with url: %s" % url
91 RequestError.__init__(self, pool, url, message)
92 self.retries = retries
93
94
95 class TimeoutStateError(HTTPError):
96 """ Raised when passing an invalid state to a timeout """
97
98 pass
99
100
101 class TimeoutError(HTTPError):
102 """ Raised when a socket timeout error occurs.
103
104 Catching this error will catch both :exc:`ReadTimeoutErrors
105 <ReadTimeoutError>` and :exc:`ConnectTimeoutErrors <ConnectTimeoutError>`.
106 """
107
108 pass
109
110
111 class ReadTimeoutError(TimeoutError, RequestError):
112 "Raised when a socket timeout occurs while receiving data from a server"
113 pass
114
115
116 # This timeout error does not have a URL attached and needs to inherit from the
117 # base HTTPError
118 class ConnectTimeoutError(TimeoutError):
119 "Raised when a socket timeout occurs while connecting to a server"
120 pass
121
122
123 class NewConnectionError(ConnectTimeoutError, PoolError):
124 "Raised when we fail to establish a new connection. Usually ECONNREFUSED."
125 pass
126
127
128 class EmptyPoolError(PoolError):
129 "Raised when a pool runs out of connections and no more are allowed."
130 pass
131
132
133 class ClosedPoolError(PoolError):
134 "Raised when a request enters a pool after the pool has been closed."
135 pass
136
137
138 class LocationValueError(ValueError, HTTPError):
139 "Raised when there is something wrong with a given URL input."
140 pass
141
142
143 class LocationParseError(LocationValueError):
144 "Raised when get_host or similar fails to parse the URL input."
145
146 def __init__(self, location):
147 message = "Failed to parse: %s" % location
148 HTTPError.__init__(self, message)
149
150 self.location = location
151
152
153 class ResponseError(HTTPError):
154 "Used as a container for an error reason supplied in a MaxRetryError."
155 GENERIC_ERROR = "too many error responses"
156 SPECIFIC_ERROR = "too many {status_code} error responses"
157
158
159 class SecurityWarning(HTTPWarning):
160 "Warned when performing security reducing actions"
161 pass
162
163
164 class SubjectAltNameWarning(SecurityWarning):
165 "Warned when connecting to a host with a certificate missing a SAN."
166 pass
167
168
169 class InsecureRequestWarning(SecurityWarning):
170 "Warned when making an unverified HTTPS request."
171 pass
172
173
174 class SystemTimeWarning(SecurityWarning):
175 "Warned when system time is suspected to be wrong"
176 pass
177
178
179 class InsecurePlatformWarning(SecurityWarning):
180 "Warned when certain SSL configuration is not available on a platform."
181 pass
182
183
184 class SNIMissingWarning(HTTPWarning):
185 "Warned when making a HTTPS request without SNI available."
186 pass
187
188
189 class DependencyWarning(HTTPWarning):
190 """
191 Warned when an attempt is made to import a module with missing optional
192 dependencies.
193 """
194
195 pass
196
197
198 class ResponseNotChunked(ProtocolError, ValueError):
199 "Response needs to be chunked in order to read it as chunks."
200 pass
201
202
203 class BodyNotHttplibCompatible(HTTPError):
204 """
205 Body should be httplib.HTTPResponse like (have an fp attribute which
206 returns raw chunks) for read_chunked().
207 """
208
209 pass
210
211
212 class IncompleteRead(HTTPError, httplib_IncompleteRead):
213 """
214 Response length doesn't match expected Content-Length
215
216 Subclass of http_client.IncompleteRead to allow int value
217 for `partial` to avoid creating large objects on streamed
218 reads.
219 """
220
221 def __init__(self, partial, expected):
222 super(IncompleteRead, self).__init__(partial, expected)
223
224 def __repr__(self):
225 return "IncompleteRead(%i bytes read, " "%i more expected)" % (
226 self.partial,
227 self.expected,
228 )
229
230
231 class InvalidHeader(HTTPError):
232 "The header provided was somehow invalid."
233 pass
234
235
236 class ProxySchemeUnknown(AssertionError, ValueError):
237 "ProxyManager does not support the supplied scheme"
238 # TODO(t-8ch): Stop inheriting from AssertionError in v2.0.
239
240 def __init__(self, scheme):
241 message = "Not supported proxy scheme %s" % scheme
242 super(ProxySchemeUnknown, self).__init__(message)
243
244
245 class HeaderParsingError(HTTPError):
246 "Raised by assert_header_parsing, but we convert it to a log.warning statement."
247
248 def __init__(self, defects, unparsed_data):
249 message = "%s, unparsed data: %r" % (defects or "Unknown", unparsed_data)
250 super(HeaderParsingError, self).__init__(message)
251
252
253 class UnrewindableBodyError(HTTPError):
254 "urllib3 encountered an error when trying to rewind a body"
255 pass
256
[end of src/pip/_vendor/urllib3/exceptions.py]
[start of src/pip/_vendor/urllib3/util/ssl_.py]
1 from __future__ import absolute_import
2 import errno
3 import warnings
4 import hmac
5 import sys
6
7 from binascii import hexlify, unhexlify
8 from hashlib import md5, sha1, sha256
9
10 from .url import IPV4_RE, BRACELESS_IPV6_ADDRZ_RE
11 from ..exceptions import SSLError, InsecurePlatformWarning, SNIMissingWarning
12 from ..packages import six
13
14
15 SSLContext = None
16 HAS_SNI = False
17 IS_PYOPENSSL = False
18 IS_SECURETRANSPORT = False
19
20 # Maps the length of a digest to a possible hash function producing this digest
21 HASHFUNC_MAP = {32: md5, 40: sha1, 64: sha256}
22
23
24 def _const_compare_digest_backport(a, b):
25 """
26 Compare two digests of equal length in constant time.
27
28 The digests must be of type str/bytes.
29 Returns True if the digests match, and False otherwise.
30 """
31 result = abs(len(a) - len(b))
32 for l, r in zip(bytearray(a), bytearray(b)):
33 result |= l ^ r
34 return result == 0
35
36
37 _const_compare_digest = getattr(hmac, "compare_digest", _const_compare_digest_backport)
38
39 try: # Test for SSL features
40 import ssl
41 from ssl import wrap_socket, CERT_REQUIRED
42 from ssl import HAS_SNI # Has SNI?
43 except ImportError:
44 pass
45
46 try: # Platform-specific: Python 3.6
47 from ssl import PROTOCOL_TLS
48
49 PROTOCOL_SSLv23 = PROTOCOL_TLS
50 except ImportError:
51 try:
52 from ssl import PROTOCOL_SSLv23 as PROTOCOL_TLS
53
54 PROTOCOL_SSLv23 = PROTOCOL_TLS
55 except ImportError:
56 PROTOCOL_SSLv23 = PROTOCOL_TLS = 2
57
58
59 try:
60 from ssl import OP_NO_SSLv2, OP_NO_SSLv3, OP_NO_COMPRESSION
61 except ImportError:
62 OP_NO_SSLv2, OP_NO_SSLv3 = 0x1000000, 0x2000000
63 OP_NO_COMPRESSION = 0x20000
64
65
66 # A secure default.
67 # Sources for more information on TLS ciphers:
68 #
69 # - https://wiki.mozilla.org/Security/Server_Side_TLS
70 # - https://www.ssllabs.com/projects/best-practices/index.html
71 # - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
72 #
73 # The general intent is:
74 # - prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
75 # - prefer ECDHE over DHE for better performance,
76 # - prefer any AES-GCM and ChaCha20 over any AES-CBC for better performance and
77 # security,
78 # - prefer AES-GCM over ChaCha20 because hardware-accelerated AES is common,
79 # - disable NULL authentication, MD5 MACs, DSS, and other
80 # insecure ciphers for security reasons.
81 # - NOTE: TLS 1.3 cipher suites are managed through a different interface
82 # not exposed by CPython (yet!) and are enabled by default if they're available.
83 DEFAULT_CIPHERS = ":".join(
84 [
85 "ECDHE+AESGCM",
86 "ECDHE+CHACHA20",
87 "DHE+AESGCM",
88 "DHE+CHACHA20",
89 "ECDH+AESGCM",
90 "DH+AESGCM",
91 "ECDH+AES",
92 "DH+AES",
93 "RSA+AESGCM",
94 "RSA+AES",
95 "!aNULL",
96 "!eNULL",
97 "!MD5",
98 "!DSS",
99 ]
100 )
101
102 try:
103 from ssl import SSLContext # Modern SSL?
104 except ImportError:
105
106 class SSLContext(object): # Platform-specific: Python 2
107 def __init__(self, protocol_version):
108 self.protocol = protocol_version
109 # Use default values from a real SSLContext
110 self.check_hostname = False
111 self.verify_mode = ssl.CERT_NONE
112 self.ca_certs = None
113 self.options = 0
114 self.certfile = None
115 self.keyfile = None
116 self.ciphers = None
117
118 def load_cert_chain(self, certfile, keyfile):
119 self.certfile = certfile
120 self.keyfile = keyfile
121
122 def load_verify_locations(self, cafile=None, capath=None):
123 self.ca_certs = cafile
124
125 if capath is not None:
126 raise SSLError("CA directories not supported in older Pythons")
127
128 def set_ciphers(self, cipher_suite):
129 self.ciphers = cipher_suite
130
131 def wrap_socket(self, socket, server_hostname=None, server_side=False):
132 warnings.warn(
133 "A true SSLContext object is not available. This prevents "
134 "urllib3 from configuring SSL appropriately and may cause "
135 "certain SSL connections to fail. You can upgrade to a newer "
136 "version of Python to solve this. For more information, see "
137 "https://urllib3.readthedocs.io/en/latest/advanced-usage.html"
138 "#ssl-warnings",
139 InsecurePlatformWarning,
140 )
141 kwargs = {
142 "keyfile": self.keyfile,
143 "certfile": self.certfile,
144 "ca_certs": self.ca_certs,
145 "cert_reqs": self.verify_mode,
146 "ssl_version": self.protocol,
147 "server_side": server_side,
148 }
149 return wrap_socket(socket, ciphers=self.ciphers, **kwargs)
150
151
152 def assert_fingerprint(cert, fingerprint):
153 """
154 Checks if given fingerprint matches the supplied certificate.
155
156 :param cert:
157 Certificate as bytes object.
158 :param fingerprint:
159 Fingerprint as string of hexdigits, can be interspersed by colons.
160 """
161
162 fingerprint = fingerprint.replace(":", "").lower()
163 digest_length = len(fingerprint)
164 hashfunc = HASHFUNC_MAP.get(digest_length)
165 if not hashfunc:
166 raise SSLError("Fingerprint of invalid length: {0}".format(fingerprint))
167
168 # We need encode() here for py32; works on py2 and p33.
169 fingerprint_bytes = unhexlify(fingerprint.encode())
170
171 cert_digest = hashfunc(cert).digest()
172
173 if not _const_compare_digest(cert_digest, fingerprint_bytes):
174 raise SSLError(
175 'Fingerprints did not match. Expected "{0}", got "{1}".'.format(
176 fingerprint, hexlify(cert_digest)
177 )
178 )
179
180
181 def resolve_cert_reqs(candidate):
182 """
183 Resolves the argument to a numeric constant, which can be passed to
184 the wrap_socket function/method from the ssl module.
185 Defaults to :data:`ssl.CERT_NONE`.
186 If given a string it is assumed to be the name of the constant in the
187 :mod:`ssl` module or its abbreviation.
188 (So you can specify `REQUIRED` instead of `CERT_REQUIRED`.
189 If it's neither `None` nor a string we assume it is already the numeric
190 constant which can directly be passed to wrap_socket.
191 """
192 if candidate is None:
193 return CERT_REQUIRED
194
195 if isinstance(candidate, str):
196 res = getattr(ssl, candidate, None)
197 if res is None:
198 res = getattr(ssl, "CERT_" + candidate)
199 return res
200
201 return candidate
202
203
204 def resolve_ssl_version(candidate):
205 """
206 like resolve_cert_reqs
207 """
208 if candidate is None:
209 return PROTOCOL_TLS
210
211 if isinstance(candidate, str):
212 res = getattr(ssl, candidate, None)
213 if res is None:
214 res = getattr(ssl, "PROTOCOL_" + candidate)
215 return res
216
217 return candidate
218
219
220 def create_urllib3_context(
221 ssl_version=None, cert_reqs=None, options=None, ciphers=None
222 ):
223 """All arguments have the same meaning as ``ssl_wrap_socket``.
224
225 By default, this function does a lot of the same work that
226 ``ssl.create_default_context`` does on Python 3.4+. It:
227
228 - Disables SSLv2, SSLv3, and compression
229 - Sets a restricted set of server ciphers
230
231 If you wish to enable SSLv3, you can do::
232
233 from pip._vendor.urllib3.util import ssl_
234 context = ssl_.create_urllib3_context()
235 context.options &= ~ssl_.OP_NO_SSLv3
236
237 You can do the same to enable compression (substituting ``COMPRESSION``
238 for ``SSLv3`` in the last line above).
239
240 :param ssl_version:
241 The desired protocol version to use. This will default to
242 PROTOCOL_SSLv23 which will negotiate the highest protocol that both
243 the server and your installation of OpenSSL support.
244 :param cert_reqs:
245 Whether to require the certificate verification. This defaults to
246 ``ssl.CERT_REQUIRED``.
247 :param options:
248 Specific OpenSSL options. These default to ``ssl.OP_NO_SSLv2``,
249 ``ssl.OP_NO_SSLv3``, ``ssl.OP_NO_COMPRESSION``.
250 :param ciphers:
251 Which cipher suites to allow the server to select.
252 :returns:
253 Constructed SSLContext object with specified options
254 :rtype: SSLContext
255 """
256 context = SSLContext(ssl_version or PROTOCOL_TLS)
257
258 context.set_ciphers(ciphers or DEFAULT_CIPHERS)
259
260 # Setting the default here, as we may have no ssl module on import
261 cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs
262
263 if options is None:
264 options = 0
265 # SSLv2 is easily broken and is considered harmful and dangerous
266 options |= OP_NO_SSLv2
267 # SSLv3 has several problems and is now dangerous
268 options |= OP_NO_SSLv3
269 # Disable compression to prevent CRIME attacks for OpenSSL 1.0+
270 # (issue #309)
271 options |= OP_NO_COMPRESSION
272
273 context.options |= options
274
275 # Enable post-handshake authentication for TLS 1.3, see GH #1634. PHA is
276 # necessary for conditional client cert authentication with TLS 1.3.
277 # The attribute is None for OpenSSL <= 1.1.0 or does not exist in older
278 # versions of Python. We only enable on Python 3.7.4+ or if certificate
279 # verification is enabled to work around Python issue #37428
280 # See: https://bugs.python.org/issue37428
281 if (cert_reqs == ssl.CERT_REQUIRED or sys.version_info >= (3, 7, 4)) and getattr(
282 context, "post_handshake_auth", None
283 ) is not None:
284 context.post_handshake_auth = True
285
286 context.verify_mode = cert_reqs
287 if (
288 getattr(context, "check_hostname", None) is not None
289 ): # Platform-specific: Python 3.2
290 # We do our own verification, including fingerprints and alternative
291 # hostnames. So disable it here
292 context.check_hostname = False
293 return context
294
295
296 def ssl_wrap_socket(
297 sock,
298 keyfile=None,
299 certfile=None,
300 cert_reqs=None,
301 ca_certs=None,
302 server_hostname=None,
303 ssl_version=None,
304 ciphers=None,
305 ssl_context=None,
306 ca_cert_dir=None,
307 key_password=None,
308 ):
309 """
310 All arguments except for server_hostname, ssl_context, and ca_cert_dir have
311 the same meaning as they do when using :func:`ssl.wrap_socket`.
312
313 :param server_hostname:
314 When SNI is supported, the expected hostname of the certificate
315 :param ssl_context:
316 A pre-made :class:`SSLContext` object. If none is provided, one will
317 be created using :func:`create_urllib3_context`.
318 :param ciphers:
319 A string of ciphers we wish the client to support.
320 :param ca_cert_dir:
321 A directory containing CA certificates in multiple separate files, as
322 supported by OpenSSL's -CApath flag or the capath argument to
323 SSLContext.load_verify_locations().
324 :param key_password:
325 Optional password if the keyfile is encrypted.
326 """
327 context = ssl_context
328 if context is None:
329 # Note: This branch of code and all the variables in it are no longer
330 # used by urllib3 itself. We should consider deprecating and removing
331 # this code.
332 context = create_urllib3_context(ssl_version, cert_reqs, ciphers=ciphers)
333
334 if ca_certs or ca_cert_dir:
335 try:
336 context.load_verify_locations(ca_certs, ca_cert_dir)
337 except IOError as e: # Platform-specific: Python 2.7
338 raise SSLError(e)
339 # Py33 raises FileNotFoundError which subclasses OSError
340 # These are not equivalent unless we check the errno attribute
341 except OSError as e: # Platform-specific: Python 3.3 and beyond
342 if e.errno == errno.ENOENT:
343 raise SSLError(e)
344 raise
345
346 elif ssl_context is None and hasattr(context, "load_default_certs"):
347 # try to load OS default certs; works well on Windows (require Python3.4+)
348 context.load_default_certs()
349
350 # Attempt to detect if we get the goofy behavior of the
351 # keyfile being encrypted and OpenSSL asking for the
352 # passphrase via the terminal and instead error out.
353 if keyfile and key_password is None and _is_key_file_encrypted(keyfile):
354 raise SSLError("Client private key is encrypted, password is required")
355
356 if certfile:
357 if key_password is None:
358 context.load_cert_chain(certfile, keyfile)
359 else:
360 context.load_cert_chain(certfile, keyfile, key_password)
361
362 # If we detect server_hostname is an IP address then the SNI
363 # extension should not be used according to RFC3546 Section 3.1
364 # We shouldn't warn the user if SNI isn't available but we would
365 # not be using SNI anyways due to IP address for server_hostname.
366 if (
367 server_hostname is not None and not is_ipaddress(server_hostname)
368 ) or IS_SECURETRANSPORT:
369 if HAS_SNI and server_hostname is not None:
370 return context.wrap_socket(sock, server_hostname=server_hostname)
371
372 warnings.warn(
373 "An HTTPS request has been made, but the SNI (Server Name "
374 "Indication) extension to TLS is not available on this platform. "
375 "This may cause the server to present an incorrect TLS "
376 "certificate, which can cause validation failures. You can upgrade to "
377 "a newer version of Python to solve this. For more information, see "
378 "https://urllib3.readthedocs.io/en/latest/advanced-usage.html"
379 "#ssl-warnings",
380 SNIMissingWarning,
381 )
382
383 return context.wrap_socket(sock)
384
385
386 def is_ipaddress(hostname):
387 """Detects whether the hostname given is an IPv4 or IPv6 address.
388 Also detects IPv6 addresses with Zone IDs.
389
390 :param str hostname: Hostname to examine.
391 :return: True if the hostname is an IP address, False otherwise.
392 """
393 if not six.PY2 and isinstance(hostname, bytes):
394 # IDN A-label bytes are ASCII compatible.
395 hostname = hostname.decode("ascii")
396 return bool(IPV4_RE.match(hostname) or BRACELESS_IPV6_ADDRZ_RE.match(hostname))
397
398
399 def _is_key_file_encrypted(key_file):
400 """Detects if a key file is encrypted or not."""
401 with open(key_file, "r") as f:
402 for line in f:
403 # Look for Proc-Type: 4,ENCRYPTED
404 if "ENCRYPTED" in line:
405 return True
406
407 return False
408
[end of src/pip/_vendor/urllib3/util/ssl_.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| pypa/pip | 44c8caccd4a39d6230666bca637157dfc78b95ea | pip 19.3 doesn't send client certificate
**Ubuntu 18.04 virtual environment**
* pip version: 19.3
* Python version: 3.6.8
* OS: Ubuntu 18.04.3 LTS
We have a private PyPI server hosted with [pypicloud](https://pypicloud.readthedocs.io/en/latest/index.html). We use client certificates to authenticate users for downloading/uploading packages.
**Description**
pip 19.3 doesn't seem to send our client certificates so authentication fails and packages cannot be installed:
`WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:852)'),)': /simple/<our package name>/`
I captured some of the SSL traffic from `pip install` in Wireshark, and the client certificate option is present in the SSL handshake, but the certificate length is 0 with pip 19.3:
![image](https://user-images.githubusercontent.com/9781018/66789548-28f54080-eeba-11e9-8124-315e814564bc.png)
In 19.2.1, the length is non-zero and Wireshark shows the client certificate I expect.
**Expected behavior**
We should not get an SSL error if our client certificates and CA certificates are not expired. I have checked our server logs there don't appear to be any errors there with our certificates.
If I downgrade to pip 19.2.1 or 19.2.3 in my virtual environment, then the SSL error goes away.
I also checked with `openssl s_client` that a handshake succeeded with the same client certificate:
```
openssl s_client -connect <my server> -cert <cert> -key <key> -state
CONNECTED(00000005)
SSL_connect:before SSL initialization
SSL_connect:SSLv3/TLS write client hello
SSL_connect:SSLv3/TLS write client hello
SSL_connect:SSLv3/TLS read server hello
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = <my server>
verify return:1
SSL_connect:SSLv3/TLS read server certificate
SSL_connect:SSLv3/TLS read server key exchange
SSL_connect:SSLv3/TLS read server certificate request
SSL_connect:SSLv3/TLS read server done
SSL_connect:SSLv3/TLS write client certificate
...
SSL handshake has read 4268 bytes and written 1546 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID:
```
**How to Reproduce**
1. Set up pip.conf or command-line arguments to use a client certificate (an example configuration is sketched after this list)
2. pip install <package>
3. sslv3 alert handshake failure occurs
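For reference, a pip.conf along these lines is enough to exercise the client-certificate path (the hostname and file path are placeholders for our real values; the equivalent command-line flag is `--client-cert`):
```
[global]
index-url = https://<my server>/simple/
client-cert = /path/to/client-cert-and-key.pem
```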
**Output**
```
pip install <my package>
Looking in indexes: https://pypi.org/simple/, https://<my server>/simple/
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:852)'),)': /simple/<my package>/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:852)'),)': /simple/<my package>/
```
| I cannot reproduce this (Ubuntu 18.04.2, Python 3.6.7) with
<details>
<summary><strong>repro.sh</strong></summary>
```
#!/bin/sh
trap "exit" INT TERM
trap "kill 0" EXIT
set -e
cd "$(mktemp -d)"
openssl req -new -x509 -nodes \
-out cert.pem -keyout cert.pem \
-addext 'subjectAltName = IP:127.0.0.1' \
-subj '/CN=127.0.0.1'
cat <<EOF > server.py
import socket
import ssl
import sys
from pathlib import Path
cert = sys.argv[1]
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(cert, cert)
context.load_verify_locations(cafile=cert)
context.verify_mode = ssl.CERT_REQUIRED
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) as sock:
sock.bind(('127.0.0.1', 0))
sock.listen(1)
_, port = sock.getsockname()
Path('port.txt').write_text(str(port), encoding='utf-8')
with context.wrap_socket(sock, server_side=True) as ssock:
while True:
conn, addr = ssock.accept()
cert = conn.getpeercert()
print(cert)
conn.write(b'HTTP/1.1 400 Bad Request\r\n\r\n')
conn.close()
EOF
PYTHON="${PYTHON:-python}"
"$PYTHON" -V
"$PYTHON" -m venv venv
venv/bin/python server.py cert.pem &
sleep 1
venv/bin/python -m pip install --upgrade pip==19.2.3
echo "- Old pip ------------------------------"
venv/bin/python -m pip -V
venv/bin/python -m pip install \
--ignore-installed \
--disable-pip-version-check \
--index-url https://127.0.0.1:$(cat port.txt) \
--cert cert.pem \
--client-cert cert.pem \
pip || true
venv/bin/python -m pip install --upgrade pip
echo "- New pip ------------------------------"
venv/bin/python -m pip -V
pip install \
--ignore-installed \
--disable-pip-version-check \
--index-url https://127.0.0.1:$(cat port.txt) \
--cert cert.pem \
--client-cert cert.pem \
pip
```
</details>
My output is
<details>
<summary><strong>Output</strong></summary>
```
$ PYTHON=~/.pyenv/versions/3.6.7/bin/python ./repro.sh
Generating a RSA private key
................................................................+++++
.......+++++
writing new private key to 'cert.pem'
-----
Python 3.6.7
Collecting pip==19.2.3
Using cached https://files.pythonhosted.org/packages/30/db/9e38760b32e3e7f40cce46dd5fb107b8c73840df38f0046d8e6514e675a1/pip-19.2.3-py2.py3-none-any.whl
Installing collected packages: pip
Found existing installation: pip 10.0.1
Uninstalling pip-10.0.1:
Successfully uninstalled pip-10.0.1
Successfully installed pip-19.2.3
You are using pip version 19.2.3, however version 19.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
- Old pip ------------------------------
pip 19.2.3 from /tmp/user/1000/tmp.ZqHiG62cpt/venv/lib/python3.6/site-packages/pip (python 3.6)
Looking in indexes: https://127.0.0.1:55649
Collecting pip
{'subject': ((('commonName', '127.0.0.1'),),), 'issuer': ((('commonName', '127.0.0.1'),),), 'version': 3, 'serialNumber': '5D7B2701E9D3E0E8A9E6CA66AEC3849D3BE826CD', 'notBefore': 'Oct 15 01:55:59 2019 GMT', 'notAfter': 'Nov 14 01:55:59 2019 GMT', 'subjectAltName': (('IP Address', '127.0.0.1'),)}
ERROR: Could not find a version that satisfies the requirement pip (from versions: none)
ERROR: No matching distribution found for pip
Collecting pip
Using cached https://files.pythonhosted.org/packages/4a/08/6ca123073af4ebc4c5488a5bc8a010ac57aa39ce4d3c8a931ad504de4185/pip-19.3-py2.py3-none-any.whl
Installing collected packages: pip
Found existing installation: pip 19.2.3
Uninstalling pip-19.2.3:
Successfully uninstalled pip-19.2.3
Successfully installed pip-19.3
- New pip ------------------------------
pip 19.3 from /tmp/user/1000/tmp.ZqHiG62cpt/venv/lib/python3.6/site-packages/pip (python 3.6)
Looking in indexes: https://127.0.0.1:55649
Collecting pip
{'subject': ((('commonName', '127.0.0.1'),),), 'issuer': ((('commonName', '127.0.0.1'),),), 'version': 3, 'serialNumber': '5D7B2701E9D3E0E8A9E6CA66AEC3849D3BE826CD', 'notBefore': 'Oct 15 01:55:59 2019 GMT', 'notAfter': 'Nov 14 01:55:59 2019 GMT', 'subjectAltName': (('IP Address', '127.0.0.1'),)}
ERROR: Could not find a version that satisfies the requirement pip (from versions: none)
ERROR: No matching distribution found for pip
```
</details>
Notice in the second instance (with pip 19.3) that the server is still tracing the peer (pip) certificate.
How are you configuring the client cert for pip? Command line, configuration file, or environment variable?
Can you try shaping `repro.sh` from above into something self-contained that demonstrates your issue?
We're using ~/.pip/pip.conf to specify the client certificates. I modified your `repro.sh` and was not able to reproduce the problem using our client + server certificates and a fake SSL server (instead of the Python one, since I wanted to disable TLS 1.3 so I could see the certificates being sent in Wireshark):
`openssl s_server -accept 8999 -www -cert server.pem -key server.key -CAfile ca-cert.pem -no_tls1_3 -Verify 1`
It's a bit hard to produce something self-contained since we've got a Letsencrypt certificate tied to our own domain and a private PKI infrastructure for the client certificates.
It's looking like it might be an issue when the client certificate bundle is specified in pip.conf; specifying it on the command line seemed to work fine in 19.3. I'll try and come up with a new repro script that simulates this.
You may also run in a container so as not to clobber any existing configuration.
Ok, I think I have a container + script that reproduces the issue. It sets up its own CA and server/client certificates so it should be self-contained. I ran tshark in the Docker container and verified that when pip 19.3 talks to a dummy openssl server acting as pypi.org on the loopback interface, it doesn't send the client cert.
It has something to do with the `trusted-host` parameter in /root/.pip/pip.conf. With that commented out, there's no error. In the output below, some of the output from the openssl s_server process is mixed in with the script output (showing no client certificate sent).
<details>
<summary>Dockerfile</summary>
```
FROM python:3.8.0-slim-buster
COPY repro.sh /root
COPY pip.conf /root/.pip/pip.conf
WORKDIR /root
```
</details>
<details>
<summary>pip.conf</summary>
```
[global]
index-url = https://127.0.0.1:8999
trusted-host = 127.0.0.1
client-cert = /root/pip.client.bundle.pem
```
</details>
<details>
<summary>repro.sh</summary>
```bash
#!/bin/sh
trap "exit" INT TERM
trap "kill 0" EXIT
set -e
# CA + server cert
openssl genrsa -des3 -out ca.key -passout pass:notsecure 2048
openssl req -x509 -new -nodes -key ca.key -sha256 -days 1825 -addext "keyUsage = cRLSign, digitalSignature, keyCertSign" -out ca.pem -subj "/CN=Fake Root CA" -passin pass:notsecure
openssl genrsa -out pip.local.key 2048
openssl req -new -key pip.local.key -out pip.local.csr -subj "/CN=127.0.0.1"
cat << EOF > pip.local.ext
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 127.0.0.1
EOF
openssl x509 -req -in pip.local.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
-out pip.local.pem -days 1825 -sha256 -extfile pip.local.ext -passin pass:notsecure
cat << EOF > pip.client.ext
keyUsage = digitalSignature
extendedKeyUsage = clientAuth
EOF
# client cert
openssl genrsa -out pip.client.key 2048
openssl req -new -key pip.client.key -out pip.client.csr -subj "/CN=pip install"
openssl x509 -req -in pip.client.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
-out pip.client.pem -days 1825 -sha256 -extfile pip.client.ext -passin pass:notsecure
# create key + cert bundle for pip install
cat pip.client.key pip.client.pem > pip.client.bundle.pem
PYTHON="${PYTHON:-python3}"
"$PYTHON" -V
"$PYTHON" -m venv venv
openssl s_server -accept 8999 -www -cert pip.local.pem -key pip.local.key -CAfile ca.pem -no_tls1_3 -Verify 1 &
sleep 1
venv/bin/python -m pip install --index-url https://pypi.org/simple/ --upgrade pip==19.2.3
echo "- Old pip ------------------------------"
venv/bin/python -m pip -V
venv/bin/python -m pip install \
--ignore-installed \
--disable-pip-version-check \
--cert /root/ca.pem \
pip || true
echo "Upgrading pip --------------------------"
venv/bin/python -m pip install --index-url https://pypi.org/simple/ --upgrade pip
echo "- New pip ------------------------------"
venv/bin/python -m pip -V
pip install \
--ignore-installed \
--disable-pip-version-check \
--cert ca.pem \
pip
```
</details>
<details>
<summary>Usage</summary>
```bash
docker build -t pip-debug -f Dockerfile .
docker run -it pip-debug bash
root@6d0a40c1179c:~# ./repro.sh
```
</details>
<details>
<summary>Output</summary>
```
root@0e1127dd4124:~# ./repro.sh
Generating RSA private key, 2048 bit long modulus (2 primes)
.......................+++++
..........+++++
e is 65537 (0x010001)
Generating RSA private key, 2048 bit long modulus (2 primes)
...................................+++++
......................................................................................................................+++++
e is 65537 (0x010001)
Signature ok
subject=CN = 127.0.0.1
Getting CA Private Key
Generating RSA private key, 2048 bit long modulus (2 primes)
........................................+++++
.......................+++++
e is 65537 (0x010001)
Signature ok
subject=CN = pip install
Getting CA Private Key
Python 3.8.0
verify depth is 1, must return a certificate
Using default temp DH parameters
ACCEPT
Looking in indexes: https://pypi.org/simple/
Requirement already up-to-date: pip==19.2.3 in ./venv/lib/python3.8/site-packages (19.2.3)
WARNING: You are using pip version 19.2.3, however version 19.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
- Old pip ------------------------------
pip 19.2.3 from /root/venv/lib/python3.8/site-packages/pip (python 3.8)
Looking in indexes: https://127.0.0.1:8999
Collecting pip
depth=1 CN = Fake Root CA
verify return:1
depth=0 CN = pip install
verify return:1
ERROR: Could not find a version that satisfies the requirement pip (from versions: none)
ERROR: No matching distribution found for pip
Upgrading pip --------------------------
Looking in indexes: https://pypi.org/simple/
Collecting pip
Downloading https://files.pythonhosted.org/packages/4a/08/6ca123073af4ebc4c5488a5bc8a010ac57aa39ce4d3c8a931ad504de4185/pip-19.3-py2.py3-none-any.whl (1.4MB)
|████████████████████████████████| 1.4MB 3.7MB/s
Installing collected packages: pip
Found existing installation: pip 19.2.3
Uninstalling pip-19.2.3:
Successfully uninstalled pip-19.2.3
Successfully installed pip-19.3
- New pip ------------------------------
pip 19.3 from /root/venv/lib/python3.8/site-packages/pip (python 3.8)
Looking in indexes: https://127.0.0.1:8999
140716939547776:error:1417C0C7:SSL routines:tls_process_client_certificate:peer did not return a certificate:../ssl/statem/statem_srvr.c:3672:
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1108)'))': /pip/
140716939547776:error:1417C0C7:SSL routines:tls_process_client_certificate:peer did not return a certificate:../ssl/statem/statem_srvr.c:3672:
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1108)'))': /pip/
140716939547776:error:1417C0C7:SSL routines:tls_process_client_certificate:peer did not return a certificate:../ssl/statem/statem_srvr.c:3672:
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1108)'))': /pip/
140716939547776:error:1417C0C7:SSL routines:tls_process_client_certificate:peer did not return a certificate:../ssl/statem/statem_srvr.c:3672:
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1108)'))': /pip/
140716939547776:error:1417C0C7:SSL routines:tls_process_client_certificate:peer did not return a certificate:../ssl/statem/statem_srvr.c:3672:
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1108)'))': /pip/
140716939547776:error:1417C0C7:SSL routines:tls_process_client_certificate:peer did not return a certificate:../ssl/statem/statem_srvr.c:3672:
Could not fetch URL https://127.0.0.1:8999/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='127.0.0.1', port=8999): Max retries exceeded with url: /pip/ (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1108)'))) - skipping
ERROR: Could not find a version that satisfies the requirement pip (from versions: none)
ERROR: No matching distribution found for pip
```
</details>
Nice, thanks.
I bisected and it looks like the issue was introduced in 3f9136f. Previously the "trusted host" parameter with https URLs was only being applied for index URLs that did not have a port specified. As of 19.3 we assume that an unspecified port means the port is a wildcard. That change in conjunction with your configuration may have uncovered a bug in our `InsecureHTTPAdapter` [here](https://github.com/pypa/pip/blob/8c50c8a9bc8579886fa787a631dc15d4b503a8ac/src/pip/_internal/network/session.py#L214-L216) - we aren't doing anything with the `cert` parameter.
If I'm not missing something, I think we should be doing something like
```python
super(InsecureHTTPAdapter, self).cert_verify(conn=conn, url=url, verify=False, cert=cert)
```
to get the correct behavior (from [here](https://github.com/psf/requests/blob/67a7b2e8336951d527e223429672354989384197/requests/adapters.py#L241-L253)).
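Spelled out, the adapter would then look roughly like this (a sketch only, not the actual patch; it assumes the `HTTPAdapter` base class that `session.py` already imports from the vendored requests):
```python
class InsecureHTTPAdapter(HTTPAdapter):

    def cert_verify(self, conn, url, verify, cert):
        # keep the client-certificate handling from requests,
        # but force verification off for this explicitly trusted host
        super(InsecureHTTPAdapter, self).cert_verify(
            conn=conn, url=url, verify=False, cert=cert
        )
```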
In your particular case is it possible to drop the trusted-host parameter since it wasn't being applied in previous versions?
Yeah, we can drop `trusted-host` for now. Most people have just reverted to pip 19.2.3
Thanks @surry for a well-designed reproducer and @chrahunt for figuring out a potential root cause! :)
diff --git a/src/pip/_internal/network/session.py b/src/pip/_internal/network/session.py
--- a/src/pip/_internal/network/session.py
+++ b/src/pip/_internal/network/session.py
@@ -212,8 +212,9 @@ def close(self):
class InsecureHTTPAdapter(HTTPAdapter):
def cert_verify(self, conn, url, verify, cert):
- conn.cert_reqs = 'CERT_NONE'
- conn.ca_certs = None
+ super(InsecureHTTPAdapter, self).cert_verify(
+ conn=conn, url=url, verify=False, cert=cert
+ )
class PipSession(requests.Session):
</patch> | [] | [] | |||
Lightning-AI__lightning-941 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support stepping options for lr scheduler
Currently schedulers get called every epoch. Sometimes though, we want them to be called every step.
Proposal 1:
Allow configure_optimizers to return this:
```python
return Adam, {'scheduler': LRScheduler, 'interval': 'batch|epoch'}
```
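For concreteness, a sketch of what the user side could look like under this proposal (hypothetical API, nothing here is implemented yet; `OneCycleLR` is just an example of a scheduler that wants per-step calls):
```python
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=1e-3, total_steps=1000
    )
    # 'interval': 'batch' would ask the trainer to call scheduler.step()
    # after every optimizer step instead of once per epoch
    return optimizer, {'scheduler': scheduler, 'interval': 'batch'}
```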
@ethanwharris @Borda thoughts? Any simpler, more general way of doing this? I think this dict can eventually have more options if we need to.
@srush
</issue>
<code>
[start of README.md]
1 <div align="center">
2
3 ![Logo](docs/source/_static/images/lightning_logo.svg)
4
5 # PyTorch Lightning
6
7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.**
8
9
10 [![PyPI Status](https://badge.fury.io/py/pytorch-lightning.svg)](https://badge.fury.io/py/pytorch-lightning)
11 [![PyPI Status](https://pepy.tech/badge/pytorch-lightning)](https://pepy.tech/project/pytorch-lightning)
12 [![Coverage](docs/source/_static/images/coverage.svg)](https://github.com/PytorchLightning/pytorch-lightning/tree/master/tests#running-coverage)
13 [![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning/badge)](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning)
14
15 [![ReadTheDocs](https://readthedocs.org/projects/pytorch-lightning/badge/?version=latest)](https://pytorch-lightning.readthedocs.io/en/latest/)
16 [![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ)
17 [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)
18 [![Next Release](https://img.shields.io/badge/Next%20Release-Feb%2021-<COLOR>.svg)](https://shields.io/)
19
20 <!--
21 removed until codecov badge isn't empy. likely a config error showing nothing on master.
22 [![codecov](https://codecov.io/gh/Borda/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/Borda/pytorch-lightning)
23 -->
24 </div>
25
26 ---
27 ## Continuous Integration
28 <center>
29
30 | System / PyTorch Version | 1.1 | 1.2 | 1.3 | 1.4 |
31 | :---: | :---: | :---: | :---: | :---: |
32 | Linux py3.6 | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) |
33 | Linux py3.7 | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | <center>—</center> | <center>—</center> | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) |
34 | OSX py3.6 | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | <center>—</center> | <center>—</center> | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) |
35 | OSX py3.7 | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | <center>—</center> | <center>—</center> | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) |
36 | Windows py3.6 | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | <center>—</center> | <center>—</center> | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) |
37 | Windows py3.7 | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | <center>—</center> | <center>—</center> | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) |
38
39 </center>
40
41 Simple installation from PyPI
42 ```bash
43 pip install pytorch-lightning
44 ```
45
46 ## Docs
47 - [master](https://pytorch-lightning.readthedocs.io/en/latest)
48 - [0.6.0](https://pytorch-lightning.readthedocs.io/en/0.6.0/)
49 - [0.5.3.2](https://pytorch-lightning.readthedocs.io/en/0.5.3.2/)
50
51 ## Demo
52 [Copy and run this COLAB!](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg)
53
54 ## What is it?
55 Lightning is a way to organize your PyTorch code to decouple the science code from the engineering. It's more of a style-guide than a framework.
56
57 By refactoring your code, we can automate most of the non-research code. Lightning guarantees tested, correct, modern best practices for the automated parts.
58
59 Here's an example of how to organize PyTorch code into the LightningModule.
60
61 ![PT to PL](docs/source/_images/mnist_imgs/pt_to_pl.jpg)
62
63 - If you are a researcher, Lightning is infinitely flexible, you can modify everything down to the way .backward is called or distributed is set up.
64 - If you are a scientist or production team, lightning is very simple to use with best practice defaults.
65
66 ## What does lightning control for me?
67
68 Everything in Blue!
69 This is how lightning separates the science (red) from the engineering (blue).
70
71 ![Overview](docs/source/_static/images/pl_overview.gif)
72
73 ## How much effort is it to convert?
74 You're probably tired of switching frameworks at this point. But it is a very quick process to refactor into the Lightning format (ie: hours). [Check out this tutorial](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09).
75
76 ## What are the differences with PyTorch?
77 If you're wondering what you gain out of refactoring your PyTorch code, [read this comparison!](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09)
78
79 ## Starting a new project?
80 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed)
81
82 ## Why do I want to use lightning?
83 Every research project starts the same, a model, a training loop, validation loop, etc. As your research advances, you're likely to need distributed training, 16-bit precision, checkpointing, gradient accumulation, etc.
84
85 Lightning sets up all the boilerplate state-of-the-art training for you so you can focus on the research.
86
87 ---
88
89 ## README Table of Contents
90 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it)
91 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me)
92 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard)
93 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable)
94 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples)
95 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials)
96 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help)
97 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md)
98 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge)
99 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles)
100 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team)
101 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq)
102
103 ---
104
105 ## How do I do use it?
106 Think about Lightning as refactoring your research code instead of using a new framework. The research code goes into a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html) which you fit using a Trainer.
107
108 The LightningModule defines a *system* such as seq-2-seq, GAN, etc... It can ALSO define a simple classifier such as the example below.
109
110 To use lightning do 2 things:
111 1. [Define a LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html)
112 **WARNING:** This syntax is for version 0.5.0+ where abbreviations were removed.
113 ```python
114 import os
115
116 import torch
117 from torch.nn import functional as F
118 from torch.utils.data import DataLoader
119 from torchvision.datasets import MNIST
120 from torchvision import transforms
121
122 import pytorch_lightning as pl
123
124 class CoolSystem(pl.LightningModule):
125
126 def __init__(self):
127 super(CoolSystem, self).__init__()
128 # not the best model...
129 self.l1 = torch.nn.Linear(28 * 28, 10)
130
131 def forward(self, x):
132 return torch.relu(self.l1(x.view(x.size(0), -1)))
133
134 def training_step(self, batch, batch_idx):
135 # REQUIRED
136 x, y = batch
137 y_hat = self.forward(x)
138 loss = F.cross_entropy(y_hat, y)
139 tensorboard_logs = {'train_loss': loss}
140 return {'loss': loss, 'log': tensorboard_logs}
141
142 def validation_step(self, batch, batch_idx):
143 # OPTIONAL
144 x, y = batch
145 y_hat = self.forward(x)
146 return {'val_loss': F.cross_entropy(y_hat, y)}
147
148 def validation_end(self, outputs):
149 # OPTIONAL
150 avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
151 tensorboard_logs = {'val_loss': avg_loss}
152 return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
153
154 def test_step(self, batch, batch_idx):
155 # OPTIONAL
156 x, y = batch
157 y_hat = self.forward(x)
158 return {'test_loss': F.cross_entropy(y_hat, y)}
159
160 def test_end(self, outputs):
161 # OPTIONAL
162 avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
163 tensorboard_logs = {'test_loss': avg_loss}
164 return {'avg_test_loss': avg_loss, 'log': tensorboard_logs}
165
166 def configure_optimizers(self):
167 # REQUIRED
168 # can return multiple optimizers and learning_rate schedulers
169 # (LBFGS it is automatically supported, no need for closure function)
170 return torch.optim.Adam(self.parameters(), lr=0.02)
171
172 @pl.data_loader
173 def train_dataloader(self):
174 # REQUIRED
175 return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
176
177 @pl.data_loader
178 def val_dataloader(self):
179 # OPTIONAL
180 return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
181
182 @pl.data_loader
183 def test_dataloader(self):
184 # OPTIONAL
185 return DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), batch_size=32)
186 ```
187 2. Fit with a [trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html)
188 ```python
189 from pytorch_lightning import Trainer
190
191 model = CoolSystem()
192
193 # most basic trainer, uses good defaults
194 trainer = Trainer()
195 trainer.fit(model)
196 ```
197
198 Trainer sets up a tensorboard logger, early stopping and checkpointing by default (you can modify all of them or
199 use something other than tensorboard).
200
201 Here are more advanced examples
202 ```python
203 # train on cpu using only 10% of the data (for demo purposes)
204 trainer = Trainer(max_epochs=1, train_percent_check=0.1)
205
206 # train on 4 gpus (lightning chooses GPUs for you)
207 # trainer = Trainer(max_epochs=1, gpus=4, distributed_backend='ddp')
208
209 # train on 4 gpus (you choose GPUs)
210 # trainer = Trainer(max_epochs=1, gpus=[0, 1, 3, 7], distributed_backend='ddp')
211
212 # train on 32 gpus across 4 nodes (make sure to submit appropriate SLURM job)
213 # trainer = Trainer(max_epochs=1, gpus=8, num_gpu_nodes=4, distributed_backend='ddp')
214
215 # train (1 epoch only here for demo)
216 trainer.fit(model)
217
218 # view tensorboard logs
219 logging.info(f'View tensorboard logs by running\ntensorboard --logdir {os.getcwd()}')
220 logging.info('and going to http://localhost:6006 on your browser')
221 ```
222
223 When you're all done you can even run the test set separately.
224 ```python
225 trainer.test()
226 ```
227
228 **Could be as complex as seq-2-seq + attention**
229
230 ```python
231 # define what happens for training here
232 def training_step(self, batch, batch_idx):
233 x, y = batch
234
235 # define your own forward and loss calculation
236 hidden_states = self.encoder(x)
237
238 # even as complex as a seq-2-seq + attn model
239 # (this is just a toy, non-working example to illustrate)
240 start_token = '<SOS>'
241 last_hidden = torch.zeros(...)
242 loss = 0
243 for step in range(max_seq_len):
244 attn_context = self.attention_nn(hidden_states, start_token)
245 pred = self.decoder(start_token, attn_context, last_hidden)
246 last_hidden = pred
247 pred = self.predict_nn(pred)
248 loss += self.loss(last_hidden, y[step])
249
250 #toy example as well
251 loss = loss / max_seq_len
252 return {'loss': loss}
253 ```
254
255 **Or as basic as CNN image classification**
256
257 ```python
258 # define what happens for validation here
259 def validation_step(self, batch, batch_idx):
260 x, y = batch
261
262 # or as basic as a CNN classification
263 out = self.forward(x)
264 loss = my_loss(out, y)
265 return {'loss': loss}
266 ```
267
268 **And you also decide how to collate the output of all validation steps**
269
270 ```python
271 def validation_end(self, outputs):
272 """
273 Called at the end of validation to aggregate outputs
274 :param outputs: list of individual outputs of each validation step
275 :return:
276 """
277 val_loss_mean = 0
278 val_acc_mean = 0
279 for output in outputs:
280 val_loss_mean += output['val_loss']
281 val_acc_mean += output['val_acc']
282
283 val_loss_mean /= len(outputs)
284 val_acc_mean /= len(outputs)
285 logs = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
286 result = {'log': logs}
287 return result
288 ```
289
290 ## Tensorboard
291 Lightning is fully integrated with TensorBoard and MLflow, and supports any logging module.
292
293 ![tensorboard-support](docs/source/_static/images/tf_loss.png)
294
295 Lightning also adds a text column with all the hyperparameters for this experiment.
296
297 ![tensorboard-support](docs/source/_static/images/tf_tags.png)
298
299 ## Lightning automates all of the following ([each is also configurable](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html)):
300
301
302 - [Running grid search on a cluster](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.distrib_data_parallel.html)
303 - [Fast dev run](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.utilities.debugging.html)
304 - [Logging](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.loggers.html)
305 - [Implement Your Own Distributed (DDP) training](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_ddp)
306 - [Multi-GPU & Multi-node](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.distrib_parts.html)
307 - [Training loop](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.training_loop.html)
308 - [Hooks](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.hooks.html)
309 - [Configure optimizers](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_optimizers)
310 - [Validations](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.evaluation_loop.html)
311 - [Model saving & Restoring training session](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.training_io.html)
312
313
314 ## Examples
315 - [GAN](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/domain_templates/gan.py)
316 - [MNIST](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/basic_examples)
317 - [Other projects using Lightning](https://github.com/PytorchLightning/pytorch-lightning/network/dependents?package_id=UGFja2FnZS0zNzE3NDU4OTM%3D)
318 - [Multi-node](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/multi_node_examples)
319
320 ## Tutorials
321 - [Basic Lightning use](https://towardsdatascience.com/supercharge-your-ai-research-with-pytorch-lightning-337948a99eec)
322 - [9 key speed features in Pytorch-Lightning](https://towardsdatascience.com/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565)
323 - [SLURM, multi-node training with Lightning](https://towardsdatascience.com/trivial-multi-node-training-with-pytorch-lightning-ff75dfb809bd)
324
325 ---
326
327 ## Asking for help
328 Welcome to the Lightning community!
329
330 If you have any questions, feel free to:
331 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/).
332 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question).
333 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning.
334
335 If no one replies to you quickly enough, feel free to post the stackoverflow link to our Gitter chat!
336
337 To chat with the rest of us visit our [gitter channel](https://gitter.im/PyTorch-Lightning/community)!
338
339 ---
340 ## FAQ
341 **How do I use Lightning for rapid research?**
342 [Here's a walk-through](https://pytorch-lightning.rtfd.io/en/latest/)
343
344 **Why was Lightning created?**
345 Lightning has 3 goals in mind:
346 1. Maximal flexibility while abstracting out the common boilerplate across research projects.
347 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format.
348 3. Democratizing PyTorch power user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning.
349
350 **How does Lightning compare with Ignite and fast.ai?**
351 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a).
352
353 **Is this another library I have to learn?**
354 Nope! We use pure PyTorch everywhere and don't add unnecessary abstractions!
355
356 **Are there plans to support Python 2?**
357 Nope.
358
359 **Are there plans to support virtualenv?**
360 Nope. Please use anaconda or miniconda.
361
362 **Which PyTorch versions do you support?**
363 - **PyTorch 1.1.0**
364 ```bash
365 # install pytorch 1.1.0 using the official instructions
366
367 # install test-tube 0.6.7.6 which supports 1.1.0
368 pip install test-tube==0.6.7.6
369
370 # install latest Lightning version without upgrading deps
371 pip install -U --no-deps pytorch-lightning
372 ```
373 - **PyTorch 1.2.0, 1.3.0,**
374 Install via pip as normal
375
376 ## Custom installation
377
378 ### Bleeding edge
379
380 If you can't wait for the next release, install the most up to date code with:
381 * using GIT (locally clone whole repo with full history)
382 ```bash
383 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
384 ```
385 * using instant zip (last state of the repo without git history)
386 ```bash
387 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade
388 ```
389
390 ### Any release installation
391
392 You can also install any past release `0.X.Y` from this repository:
393 ```bash
394 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade
395 ```
396
397 ### Lightning team
398
399 #### Leads
400 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder)
401 - Jirka Borovec [(Borda)](https://github.com/Borda) (-_-)
402 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder)
403 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder)
404
405 #### Core Maintainers
406
407 - Nick Eggert [(neggert)](https://github.com/neggert)
408 - Jeremy Jordan [(jeremyjordan)](https://github.com/jeremyjordan)
409 - Jeff Ling [(jeffling)](https://github.com/jeffling)
410 - Tullie Murrell [(tullie)](https://github.com/tullie)
411
412 ## Bibtex
413 If you want to cite the framework feel free to use this (but only if you loved it 😊):
414 ```
415 @misc{Falcon2019,
416 author = {Falcon, W.A. et al.},
417 title = {PyTorch Lightning},
418 year = {2019},
419 publisher = {GitHub},
420 journal = {GitHub repository},
421 howpublished = {\url{https://github.com/PytorchLightning/pytorch-lightning}}
422 }
423 ```
424
[end of README.md]
[start of pl_examples/basic_examples/lightning_module_template.py]
1 """
2 Example template for defining a system
3 """
4 import logging as log
5 import os
6 from argparse import ArgumentParser
7 from collections import OrderedDict
8
9 import torch
10 import torch.nn as nn
11 import torch.nn.functional as F
12 import torchvision.transforms as transforms
13 from torch import optim
14 from torch.utils.data import DataLoader
15 from torch.utils.data.distributed import DistributedSampler
16 from torchvision.datasets import MNIST
17
18 from pytorch_lightning.core import LightningModule
19 from pytorch_lightning.core import data_loader
20
21
22 class LightningTemplateModel(LightningModule):
23 """
24 Sample model to show how to define a template
25 """
26
27 def __init__(self, hparams):
28 """
29 Pass in parsed HyperOptArgumentParser to the model
30 :param hparams:
31 """
32 # init superclass
33 super(LightningTemplateModel, self).__init__()
34 self.hparams = hparams
35
36 self.batch_size = hparams.batch_size
37
38 # if you specify an example input, the summary will show input/output for each layer
39 self.example_input_array = torch.rand(5, 28 * 28)
40
41 # build model
42 self.__build_model()
43
44 # ---------------------
45 # MODEL SETUP
46 # ---------------------
47 def __build_model(self):
48 """
49 Layout model
50 :return:
51 """
52 self.c_d1 = nn.Linear(in_features=self.hparams.in_features,
53 out_features=self.hparams.hidden_dim)
54 self.c_d1_bn = nn.BatchNorm1d(self.hparams.hidden_dim)
55 self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)
56
57 self.c_d2 = nn.Linear(in_features=self.hparams.hidden_dim,
58 out_features=self.hparams.out_features)
59
60 # ---------------------
61 # TRAINING
62 # ---------------------
63 def forward(self, x):
64 """
65 No special modification required for lightning, define as you normally would
66 :param x:
67 :return:
68 """
69
70 x = self.c_d1(x)
71 x = torch.tanh(x)
72 x = self.c_d1_bn(x)
73 x = self.c_d1_drop(x)
74
75 x = self.c_d2(x)
76 logits = F.log_softmax(x, dim=1)
77
78 return logits
79
80 def loss(self, labels, logits):
81 nll = F.nll_loss(logits, labels)
82 return nll
83
84 def training_step(self, batch, batch_idx):
85 """
86 Lightning calls this inside the training loop
87 :param batch:
88 :return:
89 """
90 # forward pass
91 x, y = batch
92 x = x.view(x.size(0), -1)
93
94 y_hat = self.forward(x)
95
96 # calculate loss
97 loss_val = self.loss(y, y_hat)
98
99 # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
100 if self.trainer.use_dp or self.trainer.use_ddp2:
101 loss_val = loss_val.unsqueeze(0)
102
103 tqdm_dict = {'train_loss': loss_val}
104 output = OrderedDict({
105 'loss': loss_val,
106 'progress_bar': tqdm_dict,
107 'log': tqdm_dict
108 })
109
110 # can also return just a scalar instead of a dict (return loss_val)
111 return output
112
113 def validation_step(self, batch, batch_idx):
114 """
115 Lightning calls this inside the validation loop
116 :param batch:
117 :return:
118 """
119 x, y = batch
120 x = x.view(x.size(0), -1)
121 y_hat = self.forward(x)
122
123 loss_val = self.loss(y, y_hat)
124
125 # acc
126 labels_hat = torch.argmax(y_hat, dim=1)
127 val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)
128 val_acc = torch.tensor(val_acc)
129
130 if self.on_gpu:
131 val_acc = val_acc.cuda(loss_val.device.index)
132
133 # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
134 if self.trainer.use_dp or self.trainer.use_ddp2:
135 loss_val = loss_val.unsqueeze(0)
136 val_acc = val_acc.unsqueeze(0)
137
138 output = OrderedDict({
139 'val_loss': loss_val,
140 'val_acc': val_acc,
141 })
142
143 # can also return just a scalar instead of a dict (return loss_val)
144 return output
145
146 def validation_end(self, outputs):
147 """
148 Called at the end of validation to aggregate outputs
149 :param outputs: list of individual outputs of each validation step
150 :return:
151 """
152 # if returned a scalar from validation_step, outputs is a list of tensor scalars
153 # we return just the average in this case (if we want)
154 # return torch.stack(outputs).mean()
155
156 val_loss_mean = 0
157 val_acc_mean = 0
158 for output in outputs:
159 val_loss = output['val_loss']
160
161 # reduce manually when using dp
162 if self.trainer.use_dp or self.trainer.use_ddp2:
163 val_loss = torch.mean(val_loss)
164 val_loss_mean += val_loss
165
166 # reduce manually when using dp
167 val_acc = output['val_acc']
168 if self.trainer.use_dp or self.trainer.use_ddp2:
169 val_acc = torch.mean(val_acc)
170
171 val_acc_mean += val_acc
172
173 val_loss_mean /= len(outputs)
174 val_acc_mean /= len(outputs)
175 tqdm_dict = {'val_loss': val_loss_mean, 'val_acc': val_acc_mean}
176 result = {'progress_bar': tqdm_dict, 'log': tqdm_dict, 'val_loss': val_loss_mean}
177 return result
178
179 # ---------------------
180 # TRAINING SETUP
181 # ---------------------
182 def configure_optimizers(self):
183 """
184 return whatever optimizers we want here
185 :return: list of optimizers
186 """
187 optimizer = optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
188 scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
189 return [optimizer], [scheduler]
190
191 def __dataloader(self, train):
192 # this is needed when you want some info about the dataset before binding to the trainer
193 self.prepare_data()
194 # init data generators
195 transform = transforms.Compose([transforms.ToTensor(),
196 transforms.Normalize((0.5,), (1.0,))])
197 dataset = MNIST(root=self.hparams.data_root, train=train,
198 transform=transform, download=False)
199
200 # when using multi-node (ddp) we need to add the datasampler
201 batch_size = self.hparams.batch_size
202
203 loader = DataLoader(
204 dataset=dataset,
205 batch_size=batch_size,
206 num_workers=0
207 )
208
209 return loader
210
211 def prepare_data(self):
212 transform = transforms.Compose([transforms.ToTensor(),
213 transforms.Normalize((0.5,), (1.0,))])
214 _ = MNIST(root=self.hparams.data_root, train=True,
215 transform=transform, download=True)
216
217 def train_dataloader(self):
218 log.info('Training data loader called.')
219 return self.__dataloader(train=True)
220
221 def val_dataloader(self):
222 log.info('Validation data loader called.')
223 return self.__dataloader(train=False)
224
225 def test_dataloader(self):
226 log.info('Test data loader called.')
227 return self.__dataloader(train=False)
228
229 @staticmethod
230 def add_model_specific_args(parent_parser, root_dir): # pragma: no cover
231 """
232 Parameters you define here will be available to your model through self.hparams
233 :param parent_parser:
234 :param root_dir:
235 :return:
236 """
237 parser = ArgumentParser(parents=[parent_parser])
238
239 # param overwrites
240 # parser.set_defaults(gradient_clip_val=5.0)
241
242 # network params
243 parser.add_argument('--in_features', default=28 * 28, type=int)
244 parser.add_argument('--out_features', default=10, type=int)
245 # use 500 for CPU, 50000 for GPU to see speed difference
246 parser.add_argument('--hidden_dim', default=50000, type=int)
247 parser.add_argument('--drop_prob', default=0.2, type=float)
248 parser.add_argument('--learning_rate', default=0.001, type=float)
249
250 # data
251 parser.add_argument('--data_root', default=os.path.join(root_dir, 'mnist'), type=str)
252
253 # training params (opt)
254 parser.add_argument('--epochs', default=20, type=int)
255 parser.add_argument('--optimizer_name', default='adam', type=str)
256 parser.add_argument('--batch_size', default=64, type=int)
257 return parser
258
[end of pl_examples/basic_examples/lightning_module_template.py]
[start of pytorch_lightning/core/lightning.py]
1 import collections
2 import inspect
3 import logging as log
4 import os
5 import warnings
6 from abc import ABC, abstractmethod
7 from argparse import Namespace
8 from typing import Any, Callable, Dict, Optional, Union
9
10 import torch
11 import torch.distributed as dist
12 from torch.optim import Adam
13
14 from pytorch_lightning.core.decorators import data_loader
15 from pytorch_lightning.core.grads import GradInformation
16 from pytorch_lightning.core.hooks import ModelHooks
17 from pytorch_lightning.core.saving import ModelIO, load_hparams_from_tags_csv
18 from pytorch_lightning.core.memory import ModelSummary
19 from pytorch_lightning.overrides.data_parallel import LightningDistributedDataParallel
20 from pytorch_lightning.utilities.debugging import MisconfigurationException
21
22 try:
23 import torch_xla.core.xla_model as xm
24 XLA_AVAILABLE = True
25
26 except ImportError:
27 XLA_AVAILABLE = False
28
29
30 class LightningModule(ABC, GradInformation, ModelIO, ModelHooks):
31
32 def __init__(self, *args, **kwargs):
33 super(LightningModule, self).__init__(*args, **kwargs)
34
35 #: Current dtype
36 self.dtype = torch.FloatTensor
37
38 self.exp_save_path = None
39
40 #: The current epoch
41 self.current_epoch = 0
42
43 #: Total training batches seen across all epochs
44 self.global_step = 0
45
46 self.loaded_optimizer_states_dict = {}
47
48 #: Pointer to the trainer object
49 self.trainer = None
50
51 #: Pointer to the logger object
52 self.logger = None
53 self.example_input_array = None
54
55 #: True if your model is currently running on GPUs.
56 #: Useful to set flags around the LightningModule for different CPU vs GPU behavior.
57 self.on_gpu = False
58
59 #: True if using dp
60 self.use_dp = False
61
62 #: True if using ddp
63 self.use_ddp = False
64
65 #: True if using ddp2
66 self.use_ddp2 = False
67
68 #: True if using amp
69 self.use_amp = False
70
71 self.hparams = None
72
73 def print(self, *args, **kwargs):
74 r"""
75 Prints only from process 0. Use this in any distributed mode to log only once
76
77 Args:
78 x (object): The thing to print
79
80 Example
81 -------
82
83 .. code-block:: python
84
85 # example if we were using this model as a feature extractor
86 def forward(self, x):
87 self.print(x, 'in loader')
88
89 """
90 if self.trainer.proc_rank == 0:
91 log.info(*args, **kwargs)
92
93 @abstractmethod
94 def forward(self, *args, **kwargs):
95 r"""
96 Same as torch.nn.Module.forward(), however in Lightning you want this to define
97 the operations you want to use for prediction (ie: on a server or as a feature extractor).
98
99 Normally you'd call self.forward() from your training_step() method. This makes it easy to write a complex
100 system for training with the outputs you'd want in a prediction setting.
101
102 Args:
103 x (tensor): Whatever you decide to define in the forward method
104
105 Return:
106 Predicted output
107
108 Example
109 -------
110
111 .. code-block:: python
112
113 # example if we were using this model as a feature extractor
114 def forward(self, x):
115 feature_maps = self.convnet(x)
116 return feature_maps
117
118 def training_step(self, batch, batch_idx):
119 x, y = batch
120 feature_maps = self.forward(x)
121 logits = self.classifier(feature_maps)
122
123 # ...
124 return loss
125
126 # splitting it this way allows model to be used a feature extractor
127 model = MyModelAbove()
128
129 inputs = server.get_request()
130 results = model(inputs)
131 server.write_results(results)
132
133 # -------------
134 # This is in stark contrast to torch.nn.Module where normally you would have this:
135 def forward(self, batch):
136 x, y = batch
137 feature_maps = self.convnet(x)
138 logits = self.classifier(feature_maps)
139 return logits
140
141 """
142
143 def training_step(self, *args, **kwargs):
144 r"""return loss, dict with metrics for tqdm
145
146 Args:
147 batch (torch.Tensor | (Tensor, Tensor) | [Tensor, Tensor]): The output of your dataloader.
148 A tensor, tuple or list
149 batch_idx (int): Integer displaying index of this batch
150 optimizer_idx (int): If using multiple optimizers, this argument will also be present.
151 hiddens(:`Tensor <https://pytorch.org/docs/stable/tensors.html>`_): Passed in if truncated_bptt_steps > 0.
152
153 :param
154
155 :return: dict with loss key and optional log, progress keys
156 if implementing training_step, return whatever you need in that step:
157
158 - loss -> tensor scalar [REQUIRED]
159 - progress_bar -> Dict for progress bar display. Must have only tensors
160 - log -> Dict of metrics to add to logger. Must have only tensors (no images, etc)
161
162 In this step you'd normally do the forward pass and calculate the loss for a batch.
163 You can also do fancier things like multiple forward passes or something specific to your model.
164
165 Example
166 -------
167
168 .. code-block:: python
169
170 def training_step(self, batch, batch_idx):
171 x, y, z = batch
172
173 # implement your own
174 out = self.forward(x)
175 loss = self.loss(out, x)
176
177 logger_logs = {'training_loss': loss} # optional (MUST ALL BE TENSORS)
178
179 # if using TestTubeLogger or TensorBoardLogger you can nest scalars
180 logger_logs = {'losses': logger_logs} # optional (MUST ALL BE TENSORS)
181
182 output = {
183 'loss': loss, # required
184 'progress_bar': {'training_loss': loss}, # optional (MUST ALL BE TENSORS)
185 'log': logger_logs
186 }
187
188 # return a dict
189 return output
190
191 If you define multiple optimizers, this step will also be called with an additional `optimizer_idx` param.
192
193 .. code-block:: python
194
195 # Multiple optimizers (ie: GANs)
196 def training_step(self, batch, batch_idx, optimizer_idx):
197 if optimizer_idx == 0:
198 # do training_step with encoder
199 if optimizer_idx == 1:
200 # do training_step with decoder
201
202
203 If you add truncated back propagation through time you will also get an additional
204 argument with the hidden states of the previous step.
205
206 .. code-block:: python
207
208 # Truncated back-propagation through time
209 def training_step(self, batch, batch_idx, hiddens):
210 # hiddens are the hiddens from the previous truncated backprop step
211 ...
212 out, hiddens = self.lstm(data, hiddens)
213 ...
214
215 return {
216 "loss": ...,
217 "hiddens": hiddens # remember to detach() this
218 }
219
220 You can also return a -1 instead of a dict to stop the current loop. This is useful
221 if you want to break out of the current training epoch early.
222 """
223
224 def training_end(self, *args, **kwargs):
225 """return loss, dict with metrics for tqdm
226
227 :param outputs: What you return in `training_step`.
228 :return dict: dictionary with loss key and optional log, progress keys:
229 - loss -> tensor scalar [REQUIRED]
230 - progress_bar -> Dict for progress bar display. Must have only tensors
231 - log -> Dict of metrics to add to logger. Must have only tensors (no images, etc)
232
233 In certain cases (dp, ddp2), you might want to use all outputs of every process to do something.
234 For instance, if using negative samples, you could run a batch via dp and use ALL the outputs
235 for a single softmax across the full batch (ie: the denominator would use the full batch).
236
237 In this case you should define training_end to perform those calculations.
238
239 Example
240 -------
241
242 .. code-block:: python
243
244 # WITHOUT training_end
245 # if used in DP or DDP2, this batch is 1/num_gpus large
246 def training_step(self, batch, batch_idx):
247 # batch is 1/num_gpus big
248 x, y = batch
249
250 out = self.forward(x)
251 loss = self.softmax(out)
252 loss = nce_loss(loss)
253 return {'loss': loss}
254
255 # --------------
256 # with training_end to do softmax over the full batch
257 def training_step(self, batch, batch_idx):
258 # batch is 1/num_gpus big
259 x, y = batch
260
261 out = self.forward(x)
262 return {'out': out}
263
264 def training_end(self, outputs):
265 # this out is now the full size of the batch
266 out = outputs['out']
267
268 # this softmax now uses the full batch size
269 loss = self.softmax(out)
270 loss = nce_loss(loss)
271 return {'loss': loss}
272
273 .. note:: see the `multi-gpu guide for more details <multi_gpu.rst#caveats>`_.
274
275 If you define multiple optimizers, this step will also be called with an additional `optimizer_idx` param.
276
277 .. code-block:: python
278
279 # Multiple optimizers (ie: GANs)
280 def training_step(self, batch, batch_idx, optimizer_idx):
281 if optimizer_idx == 0:
282 # do training_step with encoder
283 if optimizer_idx == 1:
284 # do training_step with decoder
285
286 If you add truncated back propagation through time you will also get an additional argument
287 with the hidden states of the previous step.
288
289 .. code-block:: python
290
291 # Truncated back-propagation through time
292 def training_step(self, batch, batch_idx, hiddens):
293 # hiddens are the hiddens from the previous truncated backprop step
294
295 You can also return a -1 instead of a dict to stop the current loop. This is useful if you want to
296 break out of the current training epoch early.
297 """
298
299 def validation_step(self, *args, **kwargs):
300 r"""
301
302 This is the validation loop. It is called for each batch of the validation set.
303 Whatever is returned from here will be passed in as a list on validation_end.
304 In this step you'd normally generate examples or calculate anything of interest such as accuracy.
305
306 Args:
307 batch (torch.Tensor | (Tensor, Tensor) | [Tensor, Tensor]): The output of your dataloader.
308 A tensor, tuple or list
309 batch_idx (int): The index of this batch
310 dataloader_idx (int): The index of the dataloader that produced this batch (only if multiple
311 val datasets used)
312
313 Return:
314 Dict or OrderedDict - passed to the validation_end step
315
316 .. code-block:: python
317
318 # if you have one val dataloader:
319 def validation_step(self, batch, batch_idx)
320
321 # if you have multiple val dataloaders:
322 def validation_step(self, batch, batch_idx, dataloader_idx)
323
324 Example
325 -------
326
327 .. code-block:: python
328
329 # CASE 1: A single validation dataset
330 def validation_step(self, batch, batch_idx):
331 x, y = batch
332
333 # implement your own
334 out = self.forward(x)
335 loss = self.loss(out, y)
336
337 # log 6 example images
338 # or generated text... or whatever
339 sample_imgs = x[:6]
340 grid = torchvision.utils.make_grid(sample_imgs)
341 self.logger.experiment.add_image('example_images', grid, 0)
342
343 # calculate acc
344 labels_hat = torch.argmax(out, dim=1)
345 val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)
346
347 # all optional...
348 # return whatever you need for the collation function validation_end
349 output = OrderedDict({
350 'val_loss': loss,
351 'val_acc': torch.tensor(val_acc), # everything must be a tensor
352 })
353
354 # return an optional dict
355 return output
356
357 If you pass in multiple validation datasets, validation_step will have an additional argument.
358
359 .. code-block:: python
360
361 # CASE 2: multiple validation datasets
362 def validation_step(self, batch, batch_idx, dataset_idx):
363 # dataset_idx tells you which dataset this is.
364
365 .. note:: If you don't need to validate you don't need to implement this method.
366
367 .. note:: When the validation_step is called, the model has been put in eval mode and PyTorch gradients
368 have been disabled. At the end of validation, model goes back to training mode and gradients are enabled.
369 """
370
371 def test_step(self, *args, **kwargs):
372 """return whatever outputs will need to be aggregated in test_end
373 :param batch: The output of your dataloader. A tensor, tuple or list
374 :param int batch_idx: Integer displaying which batch this is
375 :param int dataloader_idx: Integer displaying which dataloader this is (only if multiple test datasets used)
376 :return dict: Dict or OrderedDict with metrics to display in progress bar. All keys must be tensors.
377
378 .. code-block:: python
379
380 # if you have one test dataloader:
381 def test_step(self, batch, batch_idx)
382
383 # if you have multiple test dataloaders:
384 def test_step(self, batch, batch_idx, dataloader_idx)
385
386
387 **OPTIONAL**
388 If you don't need to test you don't need to implement this method.
389 In this step you'd normally generate examples or
390 calculate anything of interest such as accuracy.
391
392 When the validation_step is called, the model has been put in eval mode
393 and PyTorch gradients have been disabled.
394 At the end of validation, model goes back to training mode and gradients are enabled.
395
396 The dict you return here will be available in the `test_end` method.
397
398 This function is used when you execute `trainer.test()`.
399
400 Example
401 -------
402
403 .. code-block:: python
404
405 # CASE 1: A single test dataset
406 def test_step(self, batch, batch_idx):
407 x, y = batch
408
409 # implement your own
410 out = self.forward(x)
411 loss = self.loss(out, y)
412
413 # calculate acc
414 labels_hat = torch.argmax(out, dim=1)
415 test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)
416
417 # all optional...
418 # return whatever you need for the collation function test_end
419 output = OrderedDict({
420 'test_loss': loss,
421 'test_acc': torch.tensor(test_acc), # everything must be a tensor
422 })
423
424 # return an optional dict
425 return output
426
427
428 If you pass in multiple test datasets, `test_step` will have an additional argument.
429
430 .. code-block:: python
431
432 # CASE 2: multiple test datasets
433 def test_step(self, batch, batch_idx, dataset_idx):
434 # dataset_idx tells you which dataset this is.
435
436
437 The `dataset_idx` corresponds to the order of datasets returned in `test_dataloader`.
438 """
439
440 def validation_end(self, outputs):
441 """Outputs has the appended output after each validation step.
442
443 :param outputs: List of outputs you defined in validation_step, or if there are multiple dataloaders,
444 a list containing a list of outputs for each dataloader
445 :return dict: Dictionary or OrderedDict with optional:
446 progress_bar -> Dict for progress bar display. Must have only tensors
447 log -> Dict of metrics to add to logger. Must have only tensors (no images, etc)
448
449 If you didn't define a validation_step, this won't be called.
450 Called at the end of the validation loop with the outputs of validation_step.
451
452 The outputs here are strictly for the progress bar.
453 If you don't need to display anything, don't return anything.
454 Any keys present in 'log', 'progress_bar' or the rest of the dictionary
455 are available for callbacks to access. If you want to manually set current step, you can specify it with
456 'step' key in the 'log' Dict.
457
458 Example
459 -------
460
461 With a single dataloader
462
463 .. code-block:: python
464
465 def validation_end(self, outputs):
466 val_loss_mean = 0
467 val_acc_mean = 0
468 for output in outputs:
469 val_loss_mean += output['val_loss']
470 val_acc_mean += output['val_acc']
471
472 val_loss_mean /= len(outputs)
473 val_acc_mean /= len(outputs)
474 tqdm_dict = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
475
476 # show val_loss and val_acc in progress bar but only log val_loss
477 results = {
478 'progress_bar': tqdm_dict,
479 'log': {'val_loss': val_loss_mean.item()}
480 }
481 return results
482
483 With multiple dataloaders, `outputs` will be a list of lists. The outer list contains
484 one entry per dataloader, while the inner list contains the individual outputs of
485 each validation step for that dataloader.
486
487 .. code-block:: python
488
489 def validation_end(self, outputs):
490 val_loss_mean = 0
491 val_acc_mean = 0
492 i = 0
493 for dataloader_outputs in outputs:
494 for output in dataloader_outputs:
495 val_loss_mean += output['val_loss']
496 val_acc_mean += output['val_acc']
497 i += 1
498
499 val_loss_mean /= i
500 val_acc_mean /= i
501 tqdm_dict = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
502
503 # show val_loss and val_acc in progress bar but only log val_loss
504 results = {
505 'progress_bar': tqdm_dict,
506 'log': {'val_loss': val_loss_mean.item(), 'step': self.current_epoch}
507 }
508 return results
509
510 """
511
512 def test_end(self, outputs):
513 """Outputs has the appended output after each test step.
514
515 :param outputs: List of outputs you defined in test_step, or if there are multiple dataloaders,
516 a list containing a list of outputs for each dataloader
517 :return dict: Dict or OrderedDict with metrics to display in progress bar
518
519 If you didn't define a test_step, this won't be called.
520 Called at the end of the test step with the output of each test_step.
521 The outputs here are strictly for the progress bar.
522 If you don't need to display anything, don't return anything.
523
524 Example
525 -------
526
527 .. code-block:: python
528
529 def test_end(self, outputs):
530 test_loss_mean = 0
531 test_acc_mean = 0
532 for output in outputs:
533 test_loss_mean += output['test_loss']
534 test_acc_mean += output['test_acc']
535
536 test_loss_mean /= len(outputs)
537 test_acc_mean /= len(outputs)
538 tqdm_dict = {'test_loss': test_loss_mean.item(), 'test_acc': test_acc_mean.item()}
539
540 # show test_loss and test_acc in progress bar but only log test_loss
541 results = {
542 'progress_bar': tqdm_dict,
543 'log': {'test_loss': test_loss_mean.item()}
544 }
545 return results
546
547 With multiple dataloaders, `outputs` will be a list of lists. The outer list contains
548 one entry per dataloader, while the inner list contains the individual outputs of
549 each validation step for that dataloader.
550
551 .. code-block:: python
552
553 def test_end(self, outputs):
554 test_loss_mean = 0
555 test_acc_mean = 0
556 i = 0
557 for dataloader_outputs in outputs:
558 for output in dataloader_outputs:
559 test_loss_mean += output['test_loss']
560 test_acc_mean += output['test_acc']
561 i += 1
562
563 test_loss_mean /= i
564 test_acc_mean /= i
565 tqdm_dict = {'test_loss': test_loss_mean.item(), 'test_acc': test_acc_mean.item()}
566
567 # show test_loss and test_acc in progress bar but only log test_loss
568 results = {
569 'progress_bar': tqdm_dict,
570 'log': {'test_loss': test_loss_mean.item()}
571 }
572 return results
573
574 """
575
576 def configure_ddp(self, model, device_ids):
577 r"""
578
579 Override to init DDP in your own way or with your own wrapper.
580 The only requirements are that:
581
582 1. On a validation batch the call goes to model.validation_step.
583 2. On a training batch the call goes to model.training_step.
584 3. On a testing batch, the call goes to model.test_step
585
586 Args:
587 model (:class:`.LightningModule`): the LightningModule currently being optimized
588 device_ids (list): the list of GPU ids
589
590 Return:
591 DDP wrapped model
592
593 Example
594 -------
595 .. code-block:: python
596
597 # default implementation used in Trainer
598 def configure_ddp(self, model, device_ids):
599 # Lightning DDP simply routes to test_step, val_step, etc...
600 model = LightningDistributedDataParallel(
601 model,
602 device_ids=device_ids,
603 find_unused_parameters=True
604 )
605 return model
606
607
608 """
609 model = LightningDistributedDataParallel(
610 model,
611 device_ids=device_ids,
612 find_unused_parameters=True
613 )
614 return model
615
616 def init_ddp_connection(self, proc_rank, world_size):
617 r"""
618
619 Override to define your custom way of setting up a distributed environment.
620
621 Lightning's implementation uses env:// init by default and sets the first node as root.
622
623 Args:
624 proc_rank (int): The current process rank within the node.
625 world_size (int): Number of GPUs being use across all nodes. (num_nodes*nb_gpu_nodes).
626 Example
627 -------
628 .. code-block:: python
629
630 def init_ddp_connection(self):
631 # use slurm job id for the port number
632 # guarantees unique ports across jobs from same grid search
633 try:
634 # use the last 4 numbers in the job id as the id
635 default_port = os.environ['SLURM_JOB_ID']
636 default_port = default_port[-4:]
637
638 # all ports should be in the 10k+ range
639 default_port = int(default_port) + 15000
640
641 except Exception as e:
642 default_port = 12910
643
644 # if user gave a port number, use that one instead
645 try:
646 default_port = os.environ['MASTER_PORT']
647 except Exception:
648 os.environ['MASTER_PORT'] = str(default_port)
649
650 # figure out the root node addr
651 try:
652 root_node = os.environ['SLURM_NODELIST'].split(' ')[0]
653 except Exception:
654 root_node = '127.0.0.2'
655
656 root_node = self.trainer.resolve_root_node_address(root_node)
657 os.environ['MASTER_ADDR'] = root_node
658 dist.init_process_group(
659 'nccl',
660 rank=self.proc_rank,
661 world_size=self.world_size
662 )
663
664 """
665 # use slurm job id for the port number
666 # guarantees unique ports across jobs from same grid search
667 try:
668 # use the last 4 numbers in the job id as the id
669 default_port = os.environ['SLURM_JOB_ID']
670 default_port = default_port[-4:]
671
672 # all ports should be in the 10k+ range
673 default_port = int(default_port) + 15000
674
675 except Exception:
676 default_port = 12910
677
678 # if user gave a port number, use that one instead
679 try:
680 default_port = os.environ['MASTER_PORT']
681 except Exception:
682 os.environ['MASTER_PORT'] = str(default_port)
683
684 # figure out the root node addr
685 try:
686 root_node = os.environ['SLURM_NODELIST'].split(' ')[0]
687 except Exception:
688 root_node = '127.0.0.2'
689
690 root_node = self.trainer.resolve_root_node_address(root_node)
691 os.environ['MASTER_ADDR'] = root_node
692 dist.init_process_group('nccl', rank=proc_rank, world_size=world_size)
693
694 def configure_apex(self, amp, model, optimizers, amp_level):
695 r"""
696 Override to init AMP your own way
697 Must return a model and list of optimizers
698
699 Args:
700 amp (object): pointer to amp library object
701 model (:class:`.LightningModule`): pointer to current lightningModule
702 optimizers (list): list of optimizers passed in configure_optimizers()
703 amp_level (str): AMP mode chosen ('O1', 'O2', etc...)
704
705 Return:
706 Apex wrapped model and optimizers
707
708 Example
709 -------
710 .. code-block:: python
711
712 # Default implementation used by Trainer.
713 def configure_apex(self, amp, model, optimizers, amp_level):
714 model, optimizers = amp.initialize(
715 model, optimizers, opt_level=amp_level,
716 )
717
718 return model, optimizers
719 """
720 model, optimizers = amp.initialize(
721 model, optimizers, opt_level=amp_level,
722 )
723
724 return model, optimizers
725
726 def configure_optimizers(self):
727 r"""
728 This is where you choose what optimizers and learning-rate schedulers to use in your optimization.
729 Normally you'd need one. But in the case of GANs or something more esoteric you might have multiple.
730
731 If you don't define this method Lightning will automatically use Adam(lr=1e-3)
732
733 Return: any of these 3 options:
734 - Single optimizer
735 - List or Tuple - List of optimizers
736 - Two lists - The first list has multiple optimizers, the second a list of learning-rate schedulers
737
738 Example
739 -------
740
741 .. code-block:: python
742
743 # most cases (default if not defined)
744 def configure_optimizers(self):
745 opt = Adam(self.parameters(), lr=1e-3)
746 return opt
747
748 # multiple optimizer case (eg: GAN)
749 def configure_optimizers(self):
750 generator_opt = Adam(self.model_gen.parameters(), lr=0.01)
 751                     discriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)
 752                     return generator_opt, discriminator_opt
753
754 # example with learning_rate schedulers
755 def configure_optimizers(self):
756 generator_opt = Adam(self.model_gen.parameters(), lr=0.01)
757 disriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)
758 discriminator_sched = CosineAnnealing(discriminator_opt, T_max=10)
759 return [generator_opt, disriminator_opt], [discriminator_sched]
760
761 .. note:: Lightning calls .backward() and .step() on each optimizer and learning rate scheduler as needed.
762
763 .. note:: If you use 16-bit precision (use_amp=True), Lightning will automatically
764 handle the optimizers for you.
765
766 .. note:: If you use multiple optimizers, training_step will have an additional `optimizer_idx` parameter.
767
 768         .. note:: If you use LBFGS, Lightning handles the closure function automatically for you.
769
770 .. note:: If you use multiple optimizers, gradients will be calculated only
771 for the parameters of current optimizer at each training step.
772
773 .. note:: If you need to control how often those optimizers step or override the default .step() schedule,
774 override the `optimizer_step` hook.
775
776
777 """
778 return Adam(self.parameters(), lr=1e-3)
779
780 def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None):
781 r"""
782
783 Override this method to adjust the default way the Trainer calls each optimizer. By default, Lightning
784 calls .step() and zero_grad() as shown in the example once per optimizer.
785
786 Args:
787 epoch (int): Current epoch
788 batch_idx (int): Index of current batch
 789             optimizer (torch.optim.Optimizer): A PyTorch optimizer
790 optimizer_idx (int): If you used multiple optimizers this indexes into that list
 791             second_order_closure (callable): closure for second order methods
792
793 Example
794 -------
795 .. code-block:: python
796
797 # DEFAULT
798 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None):
799 optimizer.step()
800 optimizer.zero_grad()
801
802 # Alternating schedule for optimizer steps (ie: GANs)
803 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None):
804 # update generator opt every 2 steps
805 if optimizer_idx == 0:
806 if batch_idx % 2 == 0 :
807 optimizer.step()
808 optimizer.zero_grad()
809
810 # update discriminator opt every 4 steps
811 if optimizer_idx == 1:
812 if batch_idx % 4 == 0 :
813 optimizer.step()
814 optimizer.zero_grad()
815
816 # ...
817 # add as many optimizers as you want
818
819
820 Here's another example showing how to use this for more advanced things such as learning-rate warm-up:
821
822 .. code-block:: python
823
824 # learning rate warm-up
825 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None):
826 # warm up lr
827 if self.trainer.global_step < 500:
828 lr_scale = min(1., float(self.trainer.global_step + 1) / 500.)
829 for pg in optimizer.param_groups:
830 pg['lr'] = lr_scale * self.hparams.learning_rate
831
832 # update params
833 optimizer.step()
834 optimizer.zero_grad()
835
836 """
837 if self.trainer.use_tpu and XLA_AVAILABLE:
838 xm.optimizer_step(optimizer)
839 elif isinstance(optimizer, torch.optim.LBFGS):
840 optimizer.step(second_order_closure)
841 else:
842 optimizer.step()
843
844 # clear gradients
845 optimizer.zero_grad()
846
847 def tbptt_split_batch(self, batch, split_size):
848 r"""
849
850 When using truncated backpropagation through time, each batch must be split along the time dimension.
851 Lightning handles this by default, but for custom behavior override this function.
852
853 Args:
854 batch (torch.nn.Tensor): Current batch
855 split_size (int): How big the split is
856
857 Return:
858 list of batch splits. Each split will be passed to forward_step to enable truncated
859 back propagation through time. The default implementation splits root level Tensors and
860 Sequences at dim=1 (i.e. time dim). It assumes that each time dim is the same length.
861
862 Example
863 -------
864 .. code-block:: python
865
866 def tbptt_split_batch(self, batch, split_size):
867 splits = []
868 for t in range(0, time_dims[0], split_size):
869 batch_split = []
870 for i, x in enumerate(batch):
871 if isinstance(x, torch.Tensor):
872 split_x = x[:, t:t + split_size]
873 elif isinstance(x, collections.Sequence):
874 split_x = [None] * len(x)
875 for batch_idx in range(len(x)):
876 split_x[batch_idx] = x[batch_idx][t:t + split_size]
877
878 batch_split.append(split_x)
879
880 splits.append(batch_split)
881
882 return splits
883
884 .. note:: Called in the training loop after on_batch_start if `truncated_bptt_steps > 0`.
885 Each returned batch split is passed separately to training_step(...).
886
887 """
888 time_dims = [len(x[0]) for x in batch if isinstance(x, (torch.Tensor, collections.Sequence))]
889 assert len(time_dims) >= 1, "Unable to determine batch time dimension"
890 assert all(x == time_dims[0] for x in time_dims), "Batch time dimension length is ambiguous"
891
892 splits = []
893 for t in range(0, time_dims[0], split_size):
894 batch_split = []
895 for i, x in enumerate(batch):
896 if isinstance(x, torch.Tensor):
897 split_x = x[:, t:t + split_size]
898 elif isinstance(x, collections.Sequence):
899 split_x = [None] * len(x)
900 for batch_idx in range(len(x)):
901 split_x[batch_idx] = x[batch_idx][t:t + split_size]
902
903 batch_split.append(split_x)
904
905 splits.append(batch_split)
906
907 return splits
908
909 def prepare_data(self):
910 """Use this to download and prepare data.
911 In distributed (GPU, TPU), this will only be called once
912
913 :return: PyTorch DataLoader
914
915 This is called before requesting the dataloaders
916
917 .. code-block:: python
918
919 model.prepare_data()
920 model.train_dataloader()
921 model.val_dataloader()
922 model.test_dataloader()
923
924 Example
925 -------
926
927 .. code-block:: python
928
929 def prepare_data(self):
930 download_imagenet()
931 clean_imagenet()
932 cache_imagenet()
933 """
934 return None
935
936 def train_dataloader(self):
937 """Implement a PyTorch DataLoader
938
939 :return: PyTorch DataLoader
940
941 Return a dataloader. It will not be called every epoch unless you set
942 ```Trainer(reload_dataloaders_every_epoch=True)```.
943
944 It's recommended that all data downloads and preparation happen in prepare_data().
945
 946         .. note:: Lightning adds the correct sampler for distributed and arbitrary hardware. No need to set it yourself.
947
948 - .fit()
949 - ...
950 - prepare_data()
951 - train_dataloader
952
953 Example
954 -------
955
956 .. code-block:: python
957
958 def train_dataloader(self):
959 transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
960 dataset = MNIST(root='/path/to/mnist/', train=True, transform=transform, download=True)
961 loader = torch.utils.data.DataLoader(
962 dataset=dataset,
963 batch_size=self.hparams.batch_size,
964 shuffle=True
965 )
966 return loader
967
968 """
969 return None
970
971 @data_loader
972 def tng_dataloader(self): # todo: remove in v0.8.0
973 """Implement a PyTorch DataLoader.
974
975 .. warning:: Deprecated in v0.5.0. use train_dataloader instead.
976 """
977 output = self.train_dataloader()
978 warnings.warn("`tng_dataloader` has been renamed to `train_dataloader` since v0.5.0."
979 " and this method will be removed in v0.8.0", DeprecationWarning)
980 return output
981
982 def test_dataloader(self):
983 r"""
984
985 Return a dataloader. It will not be called every epoch unless you set
986 ```Trainer(reload_dataloaders_every_epoch=True)```.
987
988 It's recommended that all data downloads and preparation happen in prepare_data().
989
990 - .fit()
991 - ...
992 - prepare_data()
993 - train_dataloader
994 - val_dataloader
995 - test_dataloader
996
 997         .. note:: Lightning adds the correct sampler for distributed and arbitrary hardware. No need to set it yourself.
998
999 Return:
1000 PyTorch DataLoader
1001
1002 Example
1003 -------
1004
1005 .. code-block:: python
1006
1007 def test_dataloader(self):
1008 transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
1009 dataset = MNIST(root='/path/to/mnist/', train=False, transform=transform, download=True)
1010 loader = torch.utils.data.DataLoader(
1011 dataset=dataset,
1012 batch_size=self.hparams.batch_size,
1013 shuffle=True
1014 )
1015
1016 return loader
1017
1018 .. note:: If you don't need a test dataset and a test_step, you don't need to implement this method.
1019
1020 .. note:: If you want to change the data during every epoch DON'T use the data_loader decorator.
1021
1022 """
1023 return None
1024
1025 def val_dataloader(self):
1026 r"""
1027
1028 Return a dataloader. It will not be called every epoch unless you set
1029 ```Trainer(reload_dataloaders_every_epoch=True)```.
1030
1031 It's recommended that all data downloads and preparation happen in prepare_data().
1032
1033 - .fit()
1034 - ...
1035 - prepare_data()
1036 - train_dataloader
1037 - val_dataloader
1038
 1039         .. note:: Lightning adds the correct sampler for distributed and arbitrary hardware. No need to set it yourself.
1040
1041 Return:
1042 PyTorch DataLoader
1043
1044 Example
1045 -------
1046
1047 .. code-block:: python
1048
1049 def val_dataloader(self):
1050 transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
1051 dataset = MNIST(root='/path/to/mnist/', train=False, transform=transform, download=True)
1052 loader = torch.utils.data.DataLoader(
1053 dataset=dataset,
1054 batch_size=self.hparams.batch_size,
1055 shuffle=True
1056 )
1057
1058 return loader
1059
1060 # can also return multiple dataloaders
1061 def val_dataloader(self):
1062 return [loader_a, loader_b, ..., loader_n]
1063
1064 Example
1065 -------
1066
1067 .. code-block:: python
1068
1069 @pl.data_loader
1070 def val_dataloader(self):
1071 transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
1072 dataset = MNIST(root='/path/to/mnist/', train=False, transform=transform, download=True)
1073 loader = torch.utils.data.DataLoader(
1074 dataset=dataset,
1075 batch_size=self.hparams.batch_size,
1076 shuffle=True
1077 )
1078
1079 return loader
1080
1081 # can also return multiple dataloaders
1082 @pl.data_loader
1083 def val_dataloader(self):
1084 return [loader_a, loader_b, ..., loader_n]
1085
1086 .. note:: If you don't need a validation dataset and a validation_step, you don't need to implement this method.
1087
1088 .. note:: If you want to change the data during every epoch DON'T use the data_loader decorator.
1089
1090 .. note:: In the case where you return multiple `val_dataloaders`, the `validation_step`
1091 will have an argument `dataset_idx` which matches the order here.
1092 """
1093 return None
1094
1095 @classmethod
1096 def load_from_metrics(cls, weights_path, tags_csv, map_location=None):
1097 r"""
1098 Warning:
1099 Deprecated in version 0.7.0.
1100 You should use `load_from_checkpoint` instead.
1101 Will be removed in v0.9.0.
1102 """
1103 warnings.warn(
1104 "`load_from_metrics` method has been unified with `load_from_checkpoint` in v0.7.0."
1105 " The deprecated method will be removed in v0.9.0.", DeprecationWarning
1106 )
1107 return cls.load_from_checkpoint(weights_path, tags_csv=tags_csv, map_location=map_location)
1108
1109 @classmethod
1110 def load_from_checkpoint(
1111 cls,
1112 checkpoint_path: str,
1113 map_location: Optional[Union[Dict[str, str], str, torch.device, int, Callable]] = None,
1114 tags_csv: Optional[str] = None,
1115 ) -> 'LightningModule':
1116 r"""
1117
1118 Primary way of loading model from a checkpoint. When Lightning saves a checkpoint
1119 it stores the hyperparameters in the checkpoint if you initialized your LightningModule
1120 with an argument called `hparams` which is a Namespace (output of using argparse
1121 to parse command line arguments).
1122
1123 Example
1124 -------
1125 .. code-block:: python
1126
 1127             from argparse import Namespace
 1128             hparams = Namespace(**{'learning_rate': 0.1})
 1129 
 1130             class MyModel(LightningModule):
 1131                 def __init__(self, hparams):
 1132                     self.learning_rate = hparams.learning_rate
 1133 
 1134             model = MyModel(hparams)
1135
1136 Args:
1137 checkpoint_path: Path to checkpoint.
1138 map_location:
1139 If your checkpoint saved a GPU model and you now load on CPUs
1140 or a different number of GPUs, use this to map to the new setup.
1141 The behaviour is the same as in
1142 `torch.load <https://pytorch.org/docs/stable/torch.html#torch.load>`_.
1143 tags_csv: Optional path to a .csv file with two columns (key, value)
1144 as in this example::
1145
1146 key,value
1147 drop_prob,0.2
1148 batch_size,32
1149
1150 You most likely won't need this since Lightning will always save the hyperparameters
1151 to the checkpoint.
1152 However, if your checkpoint weights don't have the hyperparameters saved,
1153 use this method to pass in a .csv file with the hparams you'd like to use.
 1154                 These will be converted into an argparse.Namespace and passed into your
1155 LightningModule for use.
1156
1157 Return:
1158 LightningModule with loaded weights and hyperparameters (if available).
1159
1160 Example
1161 -------
1162 .. code-block:: python
1163
1164 # load weights without mapping ...
1165 MyLightningModule.load_from_checkpoint('path/to/checkpoint.ckpt')
1166
1167 # or load weights mapping all weights from GPU 1 to GPU 0 ...
1168 map_location = {'cuda:1':'cuda:0'}
1169 MyLightningModule.load_from_checkpoint(
1170 'path/to/checkpoint.ckpt',
1171 map_location=map_location
1172 )
1173
1174 # or load weights and hyperparameters from separate files.
1175 MyLightningModule.load_from_checkpoint(
1176 'path/to/checkpoint.ckpt',
1177 tags_csv='/path/to/hparams_file.csv'
1178 )
1179
1180 # predict
1181 pretrained_model.eval()
1182 pretrained_model.freeze()
1183 y_hat = pretrained_model(x)
1184 """
1185 if map_location is not None:
1186 checkpoint = torch.load(checkpoint_path, map_location=map_location)
1187 else:
1188 checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
1189
1190 if tags_csv is not None:
1191 # add the hparams from csv file to checkpoint
1192 hparams = load_hparams_from_tags_csv(tags_csv)
1193 hparams.__setattr__('on_gpu', False)
1194 checkpoint['hparams'] = vars(hparams)
1195
1196 model = cls._load_model_state(checkpoint)
1197 return model
1198
1199 @classmethod
1200 def _load_model_state(cls, checkpoint):
1201 cls_takes_hparams = 'hparams' in inspect.signature(cls.__init__).parameters
1202 ckpt_hparams = checkpoint.get('hparams')
1203
1204 if cls_takes_hparams:
1205 if ckpt_hparams is not None:
1206 is_namespace = checkpoint.get('hparams_type') == 'namespace'
1207 hparams = Namespace(**ckpt_hparams) if is_namespace else ckpt_hparams
1208 else:
1209 warnings.warn(
1210 f"Checkpoint does not contain hyperparameters but {cls.__name__}'s __init__ contains"
1211 " argument 'hparams'. Will pass in an empty Namespace instead."
1212 " Did you forget to store your model hyperparameters in self.hparams?"
1213 )
1214 hparams = Namespace()
1215 else: # The user's LightningModule does not define a hparams argument
1216 if ckpt_hparams is None:
1217 hparams = None
1218 else:
1219 raise MisconfigurationException(
1220 f"Checkpoint contains hyperparameters but {cls.__name__}'s __init__ is missing the"
1221 " argument 'hparams'. Are you loading the correct checkpoint?"
1222 )
1223
1224 # load the state_dict on the model automatically
1225 model_args = [hparams] if hparams else []
1226 model = cls(*model_args)
1227 model.load_state_dict(checkpoint['state_dict'])
1228
1229 # give model a chance to load something
1230 model.on_load_checkpoint(checkpoint)
1231
1232 return model
1233
1234 def summarize(self, mode):
1235 model_summary = ModelSummary(self, mode=mode)
1236 log.info('\n' + model_summary.__str__())
1237
1238 def freeze(self):
1239 r"""
1240 Freeze all params for inference
1241
1242 Example
1243 -------
1244 .. code-block:: python
1245
1246 model = MyLightningModule(...)
1247 model.freeze()
1248
1249 """
1250 for param in self.parameters():
1251 param.requires_grad = False
1252
1253 self.eval()
1254
1255 def unfreeze(self):
1256 """Unfreeze all params for training.
1257
1258 .. code-block:: python
1259
1260 model = MyLightningModule(...)
1261 model.unfreeze()
1262
1263 """
1264 for param in self.parameters():
1265 param.requires_grad = True
1266
1267 self.train()
1268
1269 def on_load_checkpoint(self, checkpoint):
1270 r"""
1271 Called by lightning to restore your model.
1272 If you saved something with **on_save_checkpoint** this is your chance to restore this.
1273
1274 Args:
1275 checkpoint (dict): Loaded checkpoint
1276
1277
1278 Example
1279 -------
1280
1281 .. code-block:: python
1282
1283 def on_load_checkpoint(self, checkpoint):
1284 # 99% of the time you don't need to implement this method
1285 self.something_cool_i_want_to_save = checkpoint['something_cool_i_want_to_save']
1286
 1287         .. note:: Lightning auto-restores global step, epoch, and all training state including amp scaling.
1288 No need for you to restore anything regarding training.
1289 """
1290
1291 def on_save_checkpoint(self, checkpoint):
1292 r"""
1293
1294 Called by lightning when saving a checkpoint to give you a chance to store anything else you
1295 might want to save
1296
1297 Args:
 1298             checkpoint (dict): Checkpoint to be saved
1299
1300 Example
1301 -------
1302
1303 .. code-block:: python
1304
1305 def on_save_checkpoint(self, checkpoint):
1306 # 99% of use cases you don't need to implement this method
1307 checkpoint['something_cool_i_want_to_save'] = my_cool_pickable_object
1308
 1309         .. note:: Lightning saves all aspects of training (epoch, global step, etc...) including amp scaling. No need
1310 for you to store anything about training.
1311
1312 """
1313
1314 def get_tqdm_dict(self):
1315 r"""
1316 Additional items to be displayed in the progress bar.
1317
1318 Return:
1319 Dictionary with the items to be displayed in the progress bar.
1320 """
1321 tqdm_dict = {
1322 'loss': '{:.3f}'.format(self.trainer.avg_loss)
1323 }
1324
1325 if self.trainer.truncated_bptt_steps is not None:
1326 tqdm_dict['split_idx'] = self.trainer.split_idx
1327
1328 if self.trainer.logger is not None and self.trainer.logger.version is not None:
1329 tqdm_dict['v_num'] = self.trainer.logger.version
1330
1331 return tqdm_dict
1332
[end of pytorch_lightning/core/lightning.py]
[start of pytorch_lightning/trainer/distrib_parts.py]
1 """
2 Lightning makes multi-gpu training and 16 bit training trivial.
3
 4 .. note:: None of the flags below require changing anything about your LightningModule definition.
5
6 Choosing a backend
7 ==================
8
 9 Lightning supports two backends: DataParallel and DistributedDataParallel.
10 Both can be used for single-node multi-GPU training.
11 For multi-node training you must use DistributedDataParallel.
12
13 DataParallel (dp)
14 -----------------
15
16 Splits a batch across multiple GPUs on the same node. Cannot be used for multi-node training.
17
18 DistributedDataParallel (ddp)
19 -----------------------------
20
21 Trains a copy of the model on each GPU and only syncs gradients. If used with DistributedSampler, each GPU trains
22 on a subset of the full dataset.
23
24 DistributedDataParallel-2 (ddp2)
25 --------------------------------
26
27 Works like DDP, except each node trains a single copy of the model using ALL GPUs on that node.
28 Very useful when dealing with negative samples, etc...
29
30 You can toggle between each mode by setting this flag.
31
32 .. code-block:: python
33
34 # DEFAULT (when using single GPU or no GPUs)
35 trainer = Trainer(distributed_backend=None)
36
37 # Change to DataParallel (gpus > 1)
38 trainer = Trainer(distributed_backend='dp')
39
40 # change to distributed data parallel (gpus > 1)
41 trainer = Trainer(distributed_backend='ddp')
42
43 # change to distributed data parallel (gpus > 1)
44 trainer = Trainer(distributed_backend='ddp2')
45
46 If you request multiple nodes, the back-end will auto-switch to ddp.
 47 We recommend you use DistributedDataParallel even for single-node multi-GPU training.
48 It is MUCH faster than DP but *may* have configuration issues depending on your cluster.
49
50 For a deeper understanding of what lightning is doing, feel free to read this
51 `guide <https://medium.com/@_willfalcon/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565>`_.
52
53 Distributed and 16-bit precision
54 --------------------------------
55
56 Due to an issue with apex and DistributedDataParallel (PyTorch and NVIDIA issue), Lightning does
57 not allow 16-bit and DP training. We tried to get this to work, but it's an issue on their end.
58
59 Below are the possible configurations we support.
60
61 +-------+---------+----+-----+---------+------------------------------------------------------------+
62 | 1 GPU | 1+ GPUs | DP | DDP | 16-bit | command |
63 +=======+=========+====+=====+=========+============================================================+
64 | Y | | | | | `Trainer(gpus=1)` |
65 +-------+---------+----+-----+---------+------------------------------------------------------------+
66 | Y | | | | Y | `Trainer(gpus=1, use_amp=True)` |
67 +-------+---------+----+-----+---------+------------------------------------------------------------+
68 | | Y | Y | | | `Trainer(gpus=k, distributed_backend='dp')` |
69 +-------+---------+----+-----+---------+------------------------------------------------------------+
70 | | Y | | Y | | `Trainer(gpus=k, distributed_backend='ddp')` |
71 +-------+---------+----+-----+---------+------------------------------------------------------------+
72 | | Y | | Y | Y | `Trainer(gpus=k, distributed_backend='ddp', use_amp=True)` |
73 +-------+---------+----+-----+---------+------------------------------------------------------------+
74
75 You also have the option of specifying which GPUs to use by passing a list:
76
77 .. code-block:: python
78
79 # DEFAULT (int) specifies how many GPUs to use.
80 Trainer(gpus=k)
81
82 # Above is equivalent to
83 Trainer(gpus=list(range(k)))
84
85 # You specify which GPUs (don't use if running on cluster)
86 Trainer(gpus=[0, 1])
87
88 # can also be a string
89 Trainer(gpus='0, 1')
90
91 # can also be -1 or '-1', this uses all available GPUs
 92     # this is equivalent to list(range(torch.cuda.device_count()))
93 Trainer(gpus=-1)
94
95
96 CUDA flags
97 ----------
98
99 CUDA flags make certain GPUs visible to your script.
 100 Lightning sets these for you automatically; there's NO NEED to do this yourself.
101
102 .. code-block:: python
103
104 # lightning will set according to what you give the trainer
105 os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
106 os.environ["CUDA_VISIBLE_DEVICES"] = "0"
107
108
109 However, when using a cluster, Lightning will NOT set these flags (and you should not either).
110 SLURM will set these for you.
111
112 16-bit mixed precision
113 ----------------------
114
115 16 bit precision can cut your memory footprint by half. If using volta architecture GPUs
116 it can give a dramatic training speed-up as well.
 117 First, install apex (if the install fails, look `here <https://github.com/NVIDIA/apex>`_)::
118
119 $ git clone https://github.com/NVIDIA/apex
120 $ cd apex
121
122 # ------------------------
123 # OPTIONAL: on your cluster you might need to load cuda 10 or 9
124 # depending on how you installed PyTorch
125
126 # see available modules
127 module avail
128
129 # load correct cuda before install
130 module load cuda-10.0
131 # ------------------------
132
 133     # make sure you've loaded a gcc version > 4.0 and < 7.0
134 module load gcc-6.1.0
135
136 $ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
137
138
 139 then set the Trainer flag use_amp to True::
140
141 # DEFAULT
142 trainer = Trainer(amp_level='O2', use_amp=False)
143
144
145 Single-gpu
146 ----------
147
 148 Make sure you're on a GPU machine::
149
150 # DEFAULT
151 trainer = Trainer(gpus=1)
152
153 Multi-gpu
154 ---------
155
156 Make sure you're on a GPU machine. You can set as many GPUs as you want.
157 In this setting, the model will run on all 8 GPUs at once using DataParallel under the hood.
158
159 .. code-block:: python
160
161 # to use DataParallel
162 trainer = Trainer(gpus=8, distributed_backend='dp')
163
164 # RECOMMENDED use DistributedDataParallel
165 trainer = Trainer(gpus=8, distributed_backend='ddp')
166
167 Custom device selection
168 -----------------------
169
170 The number of GPUs can also be selected with a list of indices or a string containing
171 a comma separated list of GPU ids.
172 The table below lists examples of possible input formats and how they are interpreted by Lightning.
173 Note in particular the difference between `gpus=0`, `gpus=[0]` and `gpus="0"`.
174
175 +---------------+-----------+---------------------+---------------------------------+
176 | `gpus` | Type | Parsed | Meaning |
177 +===============+===========+=====================+=================================+
178 | None | NoneType | None | CPU |
179 +---------------+-----------+---------------------+---------------------------------+
180 | 0 | int | None | CPU |
181 +---------------+-----------+---------------------+---------------------------------+
182 | 3 | int | [0, 1, 2] | first 3 GPUs |
183 +---------------+-----------+---------------------+---------------------------------+
184 | -1 | int | [0, 1, 2, ...] | all available GPUs |
185 +---------------+-----------+---------------------+---------------------------------+
186 | [0] | list | [0] | GPU 0 |
187 +---------------+-----------+---------------------+---------------------------------+
188 | [1, 3] | list | [1, 3] | GPUs 1 and 3 |
189 +---------------+-----------+---------------------+---------------------------------+
190 | "0" | str | [0] | GPU 0 |
191 +---------------+-----------+---------------------+---------------------------------+
192 | "3" | str | [3] | GPU 3 |
193 +---------------+-----------+---------------------+---------------------------------+
194 | "1, 3" | str | [1, 3] | GPUs 1 and 3 |
195 +---------------+-----------+---------------------+---------------------------------+
196 | "-1" | str | [0, 1, 2, ...] | all available GPUs |
197 +---------------+-----------+---------------------+---------------------------------+
198
199
200 Multi-node
201 ----------
202
203 Multi-node training is easily done by specifying these flags.
204
205 .. code-block:: python
206
207 # train on 12*8 GPUs
208 trainer = Trainer(gpus=8, num_nodes=12, distributed_backend='ddp')
209
210
211 You must configure your job submission script correctly for the trainer to work.
212 Here is an example script for the above trainer configuration.
213
214 .. code-block:: bash
215
216 #!/bin/bash -l
217
218 # SLURM SUBMIT SCRIPT
219 #SBATCH --nodes=12
220 #SBATCH --gres=gpu:8
221 #SBATCH --ntasks-per-node=8
222 #SBATCH --mem=0
223 #SBATCH --time=0-02:00:00
224
225 # activate conda env
226 conda activate my_env
227
228 # -------------------------
229 # OPTIONAL
230 # -------------------------
231 # debugging flags (optional)
232 # export NCCL_DEBUG=INFO
233 # export PYTHONFAULTHANDLER=1
234
235 # PyTorch comes with prebuilt NCCL support... but if you have issues with it
236 # you might need to load the latest version from your modules
237 # module load NCCL/2.4.7-1-cuda.10.0
238
239 # on your cluster you might need these:
240 # set the network interface
241 # export NCCL_SOCKET_IFNAME=^docker0,lo
242 # -------------------------
243
244 # random port between 12k and 20k
245 export MASTER_PORT=$((12000 + RANDOM % 20000))
246
247 # run script from above
248 python my_main_file.py
249
250 .. note:: When running in DDP mode, any errors in your code will show up as an NCCL issue.
251 Set the `NCCL_DEBUG=INFO` flag to see the ACTUAL error.
252
253 Finally, make sure to add a distributed sampler to your dataset. The distributed sampler copies a
254 portion of your dataset onto each GPU. (World_size = gpus_per_node * nb_nodes).
255
256 .. code-block:: python
257
258 # ie: this:
259 dataset = myDataset()
 260     dataloader = DataLoader(dataset)
261
262 # becomes:
263 dataset = myDataset()
264 dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
 265     dataloader = DataLoader(dataset, sampler=dist_sampler)
266
267
268 Auto-slurm-job-submission
269 -------------------------
270
271 Instead of manually building SLURM scripts, you can use the
272 `SlurmCluster object <https://williamfalcon.github.io/test-tube/hpc/SlurmCluster>`_
273 to do this for you. The SlurmCluster can also run a grid search if you pass
274 in a `HyperOptArgumentParser
275 <https://williamfalcon.github.io/test-tube/hyperparameter_optimization/HyperOptArgumentParser>`_.
276
277 Here is an example where you run a grid search of 9 combinations of hyperparams.
278 The full examples are
279 `here <https://git.io/Jv87p>`_.
280
281 .. code-block:: python
282
283 # grid search 3 values of learning rate and 3 values of number of layers for your net
284 # this generates 9 experiments (lr=1e-3, layers=16), (lr=1e-3, layers=32),
285 # (lr=1e-3, layers=64), ... (lr=1e-1, layers=64)
286 parser = HyperOptArgumentParser(strategy='grid_search', add_help=False)
287 parser.opt_list('--learning_rate', default=0.001, type=float,
288 options=[1e-3, 1e-2, 1e-1], tunable=True)
289 parser.opt_list('--layers', default=1, type=float, options=[16, 32, 64], tunable=True)
290 hyperparams = parser.parse_args()
291
292 # Slurm cluster submits 9 jobs, each with a set of hyperparams
293 cluster = SlurmCluster(
294 hyperparam_optimizer=hyperparams,
295 log_path='/some/path/to/save',
296 )
297
298 # OPTIONAL FLAGS WHICH MAY BE CLUSTER DEPENDENT
299 # which interface your nodes use for communication
300 cluster.add_command('export NCCL_SOCKET_IFNAME=^docker0,lo')
301
302 # see output of the NCCL connection process
303 # NCCL is how the nodes talk to each other
304 cluster.add_command('export NCCL_DEBUG=INFO')
305
306 # setting a master port here is a good idea.
307 cluster.add_command('export MASTER_PORT=%r' % PORT)
308
309 # ************** DON'T FORGET THIS ***************
310 # MUST load the latest NCCL version
311 cluster.load_modules(['NCCL/2.4.7-1-cuda.10.0'])
312
313 # configure cluster
314 cluster.per_experiment_nb_nodes = 12
315 cluster.per_experiment_nb_gpus = 8
316
317 cluster.add_slurm_cmd(cmd='ntasks-per-node', value=8, comment='1 task per gpu')
318
319 # submit a script with 9 combinations of hyper params
320 # (lr=1e-3, layers=16), (lr=1e-3, layers=32), (lr=1e-3, layers=64), ... (lr=1e-1, layers=64)
321 cluster.optimize_parallel_cluster_gpu(
322 main,
323 nb_trials=9, # how many permutations of the grid search to run
324 job_name='name_for_squeue'
325 )
326
327
328 The other option is that you generate scripts on your own via a bash command or use another library...
329
330 Self-balancing architecture
331 ---------------------------
332
333 Here lightning distributes parts of your module across available GPUs to optimize for speed and memory.
334
335 """
336
337 from abc import ABC, abstractmethod
338 import logging as log
339 import os
340 import signal
341
342 import torch
343
344 from pytorch_lightning.overrides.data_parallel import (
345 LightningDistributedDataParallel,
346 LightningDataParallel,
347 )
348 from pytorch_lightning.utilities.debugging import MisconfigurationException
349
350 try:
351 from apex import amp
352 except ImportError:
353 APEX_AVAILABLE = False
354 else:
355 APEX_AVAILABLE = True
356
357 try:
358 import torch_xla.core.xla_model as xm
359 except ImportError:
360 XLA_AVAILABLE = False
361 else:
362 XLA_AVAILABLE = True
363
364
365 class TrainerDPMixin(ABC):
366
367 # this is just a summary on variables used in this abstract class,
368 # the proper values/initialisation should be done in child class
369 on_gpu: bool
370 use_dp: bool
371 use_ddp2: bool
372 use_ddp: bool
373 use_amp: bool
374 testing: bool
375 single_gpu: bool
376 root_gpu: ...
377 amp_level: str
378 precision: ...
379 current_tpu_idx: ...
380 proc_rank: int
381 tpu_local_core_rank: int
382 tpu_global_core_rank: int
383 use_tpu: bool
384 data_parallel_device_ids: ...
385
386 @abstractmethod
387 def run_pretrain_routine(self, *args):
388 """Warning: this is just empty shell for code implemented in other class."""
389
390 @abstractmethod
391 def init_optimizers(self, *args):
392 """Warning: this is just empty shell for code implemented in other class."""
393
394 def copy_trainer_model_properties(self, model):
395 if isinstance(model, LightningDataParallel):
396 ref_model = model.module
397 elif isinstance(model, LightningDistributedDataParallel):
398 ref_model = model.module
399 else:
400 ref_model = model
401
402 for m in [model, ref_model]:
403 m.trainer = self
404 m.on_gpu = self.on_gpu
405 m.use_dp = self.use_dp
406 m.use_ddp2 = self.use_ddp2
407 m.use_ddp = self.use_ddp
408 m.use_amp = self.use_amp
409 m.testing = self.testing
410 m.single_gpu = self.single_gpu
411 m.use_tpu = self.use_tpu
412 m.tpu_local_core_rank = self.tpu_local_core_rank
413 m.tpu_global_core_rank = self.tpu_global_core_rank
414
415 def transfer_batch_to_tpu(self, batch):
416 return self.__transfer_data_to_device(batch, device='tpu')
417
418 def transfer_batch_to_gpu(self, batch, gpu_id):
419 return self.__transfer_data_to_device(batch, device='gpu', gpu_id=gpu_id)
420
421 def __transfer_data_to_device(self, batch, device, gpu_id=None):
422 if device == 'tpu' and XLA_AVAILABLE:
423 # base case: object can be directly moved using `to`
424 if callable(getattr(batch, 'to', None)):
425 return batch.to(xm.xla_device())
426
427 if device == 'gpu':
428 # base case: object can be directly moved using `cuda` or `to`
429 if callable(getattr(batch, 'cuda', None)):
430 return batch.cuda(gpu_id)
431
432 if callable(getattr(batch, 'to', None)):
433 return batch.to(torch.device('cuda', gpu_id))
434
435 # when list
436 if isinstance(batch, list):
437 for i, x in enumerate(batch):
438 batch[i] = self.__transfer_data_to_device(x, device, gpu_id)
439 return batch
440
441 # when tuple
442 if isinstance(batch, tuple):
443 batch = list(batch)
444 for i, x in enumerate(batch):
445 batch[i] = self.__transfer_data_to_device(x, device, gpu_id)
446 return tuple(batch)
447
448 # when dict
449 if isinstance(batch, dict):
450 for k, v in batch.items():
451 batch[k] = self.__transfer_data_to_device(v, device, gpu_id)
452
453 return batch
454
455 # nothing matches, return the value as is without transform
456 return batch
457
458 def single_gpu_train(self, model):
459 model.cuda(self.root_gpu)
460
461 # CHOOSE OPTIMIZER
462 # allow for lr schedulers as well
463 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
464
465 if self.use_amp:
466 # An example
467 model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level)
468 self.optimizers = optimizers
469
470 self.run_pretrain_routine(model)
471
472 def tpu_train(self, tpu_core_idx, model):
473 # put model on tpu
474 model.to(xm.xla_device())
475
476 # get the appropriate tpu ranks
477 self.tpu_local_core_rank = xm.get_local_ordinal()
478 self.tpu_global_core_rank = xm.get_ordinal()
479
480 # avoid duplicating progress bar
481 self.show_progress_bar = self.show_progress_bar and self.tpu_global_core_rank == 0
482
483 # track current tpu
484 self.current_tpu_idx = tpu_core_idx
485 self.proc_rank = self.tpu_local_core_rank
486
487 # CHOOSE OPTIMIZER
488 # allow for lr schedulers as well
489 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
490
491 # init 16 bit for TPU
492 if self.precision == 16:
493 os.environ['XLA_USE_BF16'] = str(1)
494
495 m = f'INIT TPU local core: {self.tpu_local_core_rank}, ' \
496 f'global rank: {self.tpu_global_core_rank}'
497 log.info(m)
498
499 # continue training routine
500 self.run_pretrain_routine(model)
501
502 self.save_spawn_weights(model)
503
504 def dp_train(self, model):
505
506 # CHOOSE OPTIMIZER
507 # allow for lr schedulers as well
508 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
509
510 model.cuda(self.root_gpu)
511
 512         # check for this bug (amp + dp with amp_level 'O2' doesn't work)
513 # https://github.com/NVIDIA/apex/issues/227
514 if self.use_dp and self.use_amp:
515 if self.amp_level == 'O2':
516 m = f"""
517 Amp level {self.amp_level} with DataParallel is not supported.
518 See this note from NVIDIA for more info: https://github.com/NVIDIA/apex/issues/227.
519 We recommend you switch to ddp if you want to use amp
520 """
521 raise MisconfigurationException(m)
522 else:
523 model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level)
524
525 # create list of device ids
526 device_ids = self.data_parallel_device_ids
527 if isinstance(device_ids, int):
528 device_ids = list(range(device_ids))
529
530 model = LightningDataParallel(model, device_ids=device_ids)
531
532 self.run_pretrain_routine(model)
533
534
535 def normalize_parse_gpu_string_input(s):
536 if isinstance(s, str):
537 if s == '-1':
538 return -1
539 else:
540 return [int(x.strip()) for x in s.split(',')]
541 else:
542 return s
543
544
545 def get_all_available_gpus():
546 """
547 :return: a list of all available gpus
548 """
549 return list(range(torch.cuda.device_count()))
550
551
552 def check_gpus_data_type(gpus):
553 """
554 :param gpus: gpus parameter as passed to the Trainer
555 Function checks that it is one of: None, Int, String or List
556 Throws otherwise
557 :return: return unmodified gpus variable
558 """
559
560 if gpus is not None and type(gpus) not in (int, str, list):
561 raise MisconfigurationException("GPUs must be int, string or list of ints or None.")
562
563
564 def normalize_parse_gpu_input_to_list(gpus):
565 assert gpus is not None
566 if isinstance(gpus, list):
567 return gpus
568
569 # must be an int
570 if not gpus: # gpus==0
571 return None
572 if gpus == -1:
573 return get_all_available_gpus()
574
575 return list(range(gpus))
576
577
578 def sanitize_gpu_ids(gpus):
579 """
580 :param gpus: list of ints corresponding to GPU indices
581 Checks that each of the GPUs in the list is actually available.
582 Throws if any of the GPUs is not available.
583 :return: unmodified gpus variable
584 """
585 all_available_gpus = get_all_available_gpus()
586 for gpu in gpus:
587 if gpu not in all_available_gpus:
588 message = f"""
589 You requested GPUs: {gpus}
590 But your machine only has: {all_available_gpus}
591 """
592 raise MisconfigurationException(message)
593 return gpus
594
595
596 def parse_gpu_ids(gpus):
597 """
598 :param gpus: Int, string or list
599 An int -1 or string '-1' indicate that all available GPUs should be used.
600 A list of ints or a string containing list of comma separated integers
601 indicates specific GPUs to use
602 An int 0 means that no GPUs should be used
603 Any int N > 0 indicates that GPUs [0..N) should be used.
604 :return: List of gpus to be used
605
606 If no GPUs are available but the value of gpus variable indicates request for GPUs
607 then a misconfiguration exception is raised.
608 """
609
610 # Check that gpus param is None, Int, String or List
611 check_gpus_data_type(gpus)
612
613 # Handle the case when no gpus are requested
614 if gpus is None or isinstance(gpus, int) and gpus == 0:
615 return None
616
617 # We know user requested GPUs therefore if some of the
618 # requested GPUs are not available an exception is thrown.
619
620 gpus = normalize_parse_gpu_string_input(gpus)
621 gpus = normalize_parse_gpu_input_to_list(gpus)
622 gpus = sanitize_gpu_ids(gpus)
623
624 if not gpus:
625 raise MisconfigurationException("GPUs requested but none are available.")
626 return gpus
627
628
629 def determine_root_gpu_device(gpus):
630 """
631 :param gpus: non empty list of ints representing which gpus to use
632 :return: designated root GPU device
633 """
634 if gpus is None:
635 return None
636
637 assert isinstance(gpus, list), "gpus should be a list"
638 assert len(gpus) > 0, "gpus should be a non empty list"
639
640 # set root gpu
641 root_gpu = gpus[0]
642
643 return root_gpu
644
[end of pytorch_lightning/trainer/distrib_parts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| Lightning-AI/lightning | bcb45d906d5f378a30461d513728cad34fc647ce | Support stepping options for lr scheduler
Currently schedulers get called every epoch. Sometimes though, we want them to be called every step.
Proposal 1:
Allow configure_optimizers to return this:
```python
return Adam, {'scheduler': LRScheduler, 'interval': 'batch|epoch'}
```
@ethanwharris @Borda thoughts? any simpler more general way of doing this? i think this dict can eventually have more options if we need to.
@srush
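
For concreteness, here is a minimal sketch of how a user could opt into per-step scheduling under the dict-based interface that the patch below settles on; the particular optimizer and scheduler are illustrative, not taken from the discussion, and the method is meant to live inside a LightningModule:

```python
from torch.optim import Adam
from torch.optim.lr_scheduler import ExponentialLR

def configure_optimizers(self):
    opt = Adam(self.parameters(), lr=1e-3)
    sched = {'scheduler': ExponentialLR(opt, gamma=0.99),  # any torch lr scheduler
             'interval': 'step',  # step after every training batch instead of every epoch
             'frequency': 1}      # only act on every Nth matching interval (1 = every time)
    return [opt], [sched]
```

Leaving `'interval'` out falls back to the existing once-per-epoch behaviour, so current `configure_optimizers` implementations keep working unchanged.
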
| 2020-02-25T15:48:00Z | <patch>
diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py
--- a/pytorch_lightning/core/lightning.py
+++ b/pytorch_lightning/core/lightning.py
@@ -758,6 +758,15 @@ def configure_optimizers(self):
discriminator_sched = CosineAnnealing(discriminator_opt, T_max=10)
return [generator_opt, disriminator_opt], [discriminator_sched]
+ # example with step-based learning_rate schedulers
+ def configure_optimizers(self):
+ gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
+ dis_opt = Adam(self.model_disc.parameters(), lr=0.02)
+ gen_sched = {'scheduler': ExponentialLR(gen_opt, 0.99),
+ 'interval': 'step'} # called after each training step
+ dis_sched = CosineAnnealing(discriminator_opt, T_max=10) # called after each epoch
+ return [gen_opt, dis_opt], [gen_sched, dis_sched]
+
.. note:: Lightning calls .backward() and .step() on each optimizer and learning rate scheduler as needed.
.. note:: If you use 16-bit precision (use_amp=True), Lightning will automatically
@@ -773,6 +782,8 @@ def configure_optimizers(self):
.. note:: If you need to control how often those optimizers step or override the default .step() schedule,
override the `optimizer_step` hook.
+ .. note:: If you only want to call a learning rate schduler every `x` step or epoch,
+ you can input this as 'frequency' key: dict(scheduler=lr_schudler, interval='step' or 'epoch', frequency=x)
"""
return Adam(self.parameters(), lr=1e-3)
diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -6,6 +6,7 @@
from argparse import ArgumentParser
import torch
+from torch import optim
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.utils.data import DataLoader
@@ -743,8 +744,6 @@ def on_train_end(self):
# creates a default one if none passed in
self.configure_early_stopping(early_stop_callback)
- self.reduce_lr_on_plateau_scheduler = None
-
# configure checkpoint callback
self.checkpoint_callback = checkpoint_callback
self.weights_save_path = weights_save_path
@@ -1079,26 +1078,56 @@ def init_optimizers(
optimizers: Union[Optimizer, Tuple[List, List], List[Optimizer], Tuple[Optimizer]]
) -> Tuple[List, List]:
- # single optimizer
+ # single output, single optimizer
if isinstance(optimizers, Optimizer):
return [optimizers], []
- # two lists
- if len(optimizers) == 2 and isinstance(optimizers[0], list):
+ # two lists, optimizer + lr schedulers
+ elif len(optimizers) == 2 and isinstance(optimizers[0], list):
optimizers, lr_schedulers = optimizers
- lr_schedulers, self.reduce_lr_on_plateau_scheduler = self.configure_schedulers(lr_schedulers)
+ lr_schedulers = self.configure_schedulers(lr_schedulers)
return optimizers, lr_schedulers
- # single list or tuple
- if isinstance(optimizers, (list, tuple)):
+ # single list or tuple, multiple optimizer
+ elif isinstance(optimizers, (list, tuple)):
return optimizers, []
+ # unknown configuration
+ else:
+ raise ValueError('Unknown configuration for model optimizers. Output'
+ 'from model.configure_optimizers() should either be:'
+ '* single output, single torch.optim.Optimizer'
+ '* single output, list of torch.optim.Optimizer'
+ '* two outputs, first being a list of torch.optim.Optimizer',
+ 'second being a list of torch.optim.lr_scheduler')
+
def configure_schedulers(self, schedulers: list):
- for i, scheduler in enumerate(schedulers):
- if isinstance(scheduler, torch.optim.lr_scheduler.ReduceLROnPlateau):
- reduce_lr_on_plateau_scheduler = schedulers.pop(i)
- return schedulers, reduce_lr_on_plateau_scheduler
- return schedulers, None
+ # Convert each scheduler into dict sturcture with relevant information
+ lr_schedulers = []
+ default_config = {'interval': 'epoch', # default every epoch
+ 'frequency': 1, # default every epoch/batch
+ 'reduce_on_plateau': False, # most often not ReduceLROnPlateau scheduler
+ 'monitor': 'val_loss'} # default value to monitor for ReduceLROnPlateau
+ for scheduler in schedulers:
+ if isinstance(scheduler, dict):
+ if 'scheduler' not in scheduler:
+ raise ValueError(f'Lr scheduler should have key `scheduler`',
+ ' with item being a lr scheduler')
+ scheduler['reduce_on_plateau'] = \
+ isinstance(scheduler, optim.lr_scheduler.ReduceLROnPlateau)
+
+ lr_schedulers.append({**default_config, **scheduler})
+
+ elif isinstance(scheduler, optim.lr_scheduler.ReduceLROnPlateau):
+ lr_schedulers.append({**default_config, 'scheduler': scheduler,
+ 'reduce_on_plateau': True})
+
+ elif isinstance(scheduler, optim.lr_scheduler._LRScheduler):
+ lr_schedulers.append({**default_config, 'scheduler': scheduler})
+ else:
+ raise ValueError(f'Input {scheduler} to lr schedulers '
+ 'is a invalid input.')
+ return lr_schedulers
def run_pretrain_routine(self, model: LightningModule):
"""Sanity check a few things before starting actual training.
diff --git a/pytorch_lightning/trainer/training_io.py b/pytorch_lightning/trainer/training_io.py
--- a/pytorch_lightning/trainer/training_io.py
+++ b/pytorch_lightning/trainer/training_io.py
@@ -1,3 +1,94 @@
+"""
+Lightning can automate saving and loading checkpoints
+=====================================================
+
+Checkpointing is enabled by default to the current working directory.
+To change the checkpoint path pass in::
+
+ Trainer(default_save_path='/your/path/to/save/checkpoints')
+
+
+To modify the behavior of checkpointing pass in your own callback.
+
+.. code-block:: python
+
+ from pytorch_lightning.callbacks import ModelCheckpoint
+
+ # DEFAULTS used by the Trainer
+ checkpoint_callback = ModelCheckpoint(
+ filepath=os.getcwd(),
+ save_best_only=True,
+ verbose=True,
+ monitor='val_loss',
+ mode='min',
+ prefix=''
+ )
+
+ trainer = Trainer(checkpoint_callback=checkpoint_callback)
+
+
+Restoring training session
+--------------------------
+
+You might want to not only load a model but also continue training it. Use this method to
+restore the trainer state as well. This will continue from the epoch and global step you last left off.
+However, the dataloaders will start from the first batch again (if you shuffled it shouldn't matter).
+
+Lightning will restore the session if you pass a logger with the same version and there's a saved checkpoint.
+
+.. code-block:: python
+
+ from pytorch_lightning import Trainer
+ from pytorch_lightning.loggers import TestTubeLogger
+
+ logger = TestTubeLogger(
+ save_dir='./savepath',
+ version=1 # An existing version with a saved checkpoint
+ )
+ trainer = Trainer(
+ logger=logger,
+ default_save_path='./savepath'
+ )
+
+ # this fit call loads model weights and trainer state
+ # the trainer continues seamlessly from where you left off
+ # without having to do anything else.
+ trainer.fit(model)
+
+
+The trainer restores:
+
+- global_step
+- current_epoch
+- All optimizers
+- All lr_schedulers
+- Model weights
+
+You can even change the logic of your model as long as the weights and "architecture" of
+the system isn't different. If you add a layer, for instance, it might not work.
+
+At a rough level, here's what happens inside Trainer :py:mod:`pytorch_lightning.base_module.model_saving.py`:
+
+.. code-block:: python
+
+ self.global_step = checkpoint['global_step']
+ self.current_epoch = checkpoint['epoch']
+
+ # restore the optimizers
+ optimizer_states = checkpoint['optimizer_states']
+ for optimizer, opt_state in zip(self.optimizers, optimizer_states):
+ optimizer.load_state_dict(opt_state)
+
+ # restore the lr schedulers
+ lr_schedulers = checkpoint['lr_schedulers']
+ for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers):
+ scheduler['scheduler'].load_state_dict(lrs_state)
+
+ # uses the model you passed into trainer
+ model.load_state_dict(checkpoint['state_dict'])
+
+"""
+
import logging as log
import os
import re
@@ -228,8 +319,8 @@ def dump_checkpoint(self):
# save lr schedulers
lr_schedulers = []
- for i, scheduler in enumerate(self.lr_schedulers):
- lr_schedulers.append(scheduler.state_dict())
+ for scheduler in self.lr_schedulers:
+ lr_schedulers.append(scheduler['scheduler'].state_dict())
checkpoint['lr_schedulers'] = lr_schedulers
@@ -320,7 +411,7 @@ def restore_training_state(self, checkpoint):
# restore the lr schedulers
lr_schedulers = checkpoint['lr_schedulers']
for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers):
- scheduler.load_state_dict(lrs_state)
+ scheduler['scheduler'].load_state_dict(lrs_state)
# ----------------------------------
# PRIVATE OPS
diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py
--- a/pytorch_lightning/trainer/training_loop.py
+++ b/pytorch_lightning/trainer/training_loop.py
@@ -361,17 +361,7 @@ def train(self):
self.run_training_epoch()
# update LR schedulers
- if self.lr_schedulers is not None:
- for lr_scheduler in self.lr_schedulers:
- lr_scheduler.step()
- if self.reduce_lr_on_plateau_scheduler is not None:
- val_loss = self.callback_metrics.get('val_loss')
- if val_loss is None:
- avail_metrics = ','.join(list(self.callback_metrics.keys()))
- m = f'ReduceLROnPlateau conditioned on metric val_loss ' \
- f'which is not available. Available metrics are: {avail_metrics}'
- raise MisconfigurationException(m)
- self.reduce_lr_on_plateau_scheduler.step(val_loss)
+ self.update_learning_rates(interval='epoch')
if self.max_steps and self.max_steps == self.global_step:
self.run_training_teardown()
@@ -444,6 +434,9 @@ def run_training_epoch(self):
# when returning -1 from train_step, we end epoch early
early_stop_epoch = batch_result == -1
+ # update lr
+ self.update_learning_rates(interval='step')
+
# ---------------
# RUN VAL STEP
# ---------------
@@ -716,6 +709,34 @@ def training_forward(self, batch, batch_idx, opt_idx, hiddens):
return output
+ def update_learning_rates(self, interval):
+ ''' Update learning rates
+ Args:
+ interval (str): either 'epoch' or 'step'.
+ '''
+ if not self.lr_schedulers:
+ return
+
+ for lr_scheduler in self.lr_schedulers:
+ current_idx = self.batch_idx if interval == 'step' else self.current_epoch
+ current_idx += 1 # account for both batch and epoch starts from 0
+ # Take step if call to update_learning_rates matches the interval key and
+ # the current step modulo the schedulers frequency is zero
+ if lr_scheduler['interval'] == interval and current_idx % lr_scheduler['frequency'] == 0:
+ # If instance of ReduceLROnPlateau, we need to pass validation loss
+ if lr_scheduler['reduce_on_plateau']:
+ monitor_key = lr_scheduler['monitor']
+ monitor_val = self.callback_metrics.get(monitor_key)
+ if monitor_val is None:
+ avail_metrics = ','.join(list(self.callback_metrics.keys()))
+ m = f'ReduceLROnPlateau conditioned on metric {monitor_key} ' \
+ f'which is not available. Available metrics are: {avail_metrics}. ' \
+ 'Condition can be set using `monitor` key in lr scheduler dict'
+ raise MisconfigurationException(m)
+ lr_scheduler['scheduler'].step(monitor_val)
+ else:
+ lr_scheduler['scheduler'].step()
+
def call_checkpoint_callback(self):
if self.checkpoint_callback is not None:
self.checkpoint_callback.on_validation_end(self, self.get_model())
</patch> | [] | [] | ||||
PrefectHQ__prefect-1386 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`auth login` CLI check needs token required query
## Description
`prefect auth login` runs a graphql query to verify the token provided is valid. The current query is `query { hello }` and this query does not require authentication. This query needs to be updated to one which requires authentication (which is every other query, let's just find the smallest one)
## Expected Behavior
If the token is invalid it should elevate an error to the user
## Reproduction
Query the API with `query { hello }` without a token and it will still work.
## Environment
N/A
</issue>
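
A rough sketch of the kind of check the issue above asks for: replace the unauthenticated `query { hello }` with a query the API only answers when a valid token is presented. The helper name, the specific query (`tenant { id }`), and the error handling below are illustrative assumptions, not the repository's actual fix:

```python
from prefect.client import Client

def token_is_valid(candidate_token: str) -> bool:
    """Return True only if Prefect Cloud accepts the supplied token."""
    client = Client(token=candidate_token)
    try:
        # assumed example of an auth-required query; `query { hello }`
        # succeeds even without a token, so it cannot be used for this check
        client.graphql(query="query { tenant { id } }")
    except Exception:
        return False
    return True
```
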
<code>
[start of README.md]
1 <p align="center" style="margin-bottom:40px;">
2 <img src="https://uploads-ssl.webflow.com/5ba446b0e783e26d5a2f2382/5c942c9ca934ec5c88588297_primary-color-vertical.svg" height=350 style="max-height: 350px;">
3 </p>
4
5 <p align="center">
6 <a href=https://circleci.com/gh/PrefectHQ/prefect/tree/master>
7 <img src="https://circleci.com/gh/PrefectHQ/prefect/tree/master.svg?style=shield&circle-token=28689a55edc3c373486aaa5f11a1af3e5fc53344">
8 </a>
9
10 <a href="https://codecov.io/gh/PrefectHQ/prefect">
11 <img src="https://codecov.io/gh/PrefectHQ/prefect/branch/master/graph/badge.svg" />
12 </a>
13
14 <a href=https://github.com/ambv/black>
15 <img src="https://img.shields.io/badge/code%20style-black-000000.svg">
16 </a>
17
18 <a href="https://pypi.org/project/prefect/">
19 <img src="https://img.shields.io/pypi/dm/prefect.svg?color=%2327B1FF&label=installs&logoColor=%234D606E">
20 </a>
21
22 <a href="https://hub.docker.com/r/prefecthq/prefect">
23 <img src="https://img.shields.io/docker/pulls/prefecthq/prefect.svg?color=%2327B1FF&logoColor=%234D606E">
24 </a>
25
26 <a href="https://join.slack.com/t/prefect-public/shared_invite/enQtNzE5OTU3OTQwNzc1LTQ5M2FkZmQzZjI0ODg1ZTBmOTc0ZjVjYWFjMWExZDAyYzBmYjVmMTE1NTQ1Y2IxZTllOTc4MmI3NzYxMDlhYWU">
27 <img src="https://img.shields.io/static/v1.svg?label=chat&message=on%20slack&color=27b1ff&style=flat">
28 </a>
29
30 </p>
31
32 ## Hello, world! 👋
33
34 We've rebuilt data engineering for the data science era.
35
36 Prefect is a new workflow management system, designed for modern infrastructure and powered by the open-source Prefect Core workflow engine. Users organize `Tasks` into `Flows`, and Prefect takes care of the rest.
37
38 Read the [docs](https://docs.prefect.io); get the [code](#installation); ask us [anything](https://join.slack.com/t/prefect-public/shared_invite/enQtNzE5OTU3OTQwNzc1LTQ5M2FkZmQzZjI0ODg1ZTBmOTc0ZjVjYWFjMWExZDAyYzBmYjVmMTE1NTQ1Y2IxZTllOTc4MmI3NzYxMDlhYWU)!
39
40 ```python
41 from prefect import task, Flow
42
43
44 @task
45 def say_hello():
46 print("Hello, world!")
47
48
49 with Flow("My First Flow") as flow:
50 say_hello()
51
52
53 flow.run() # "Hello, world!"
54 ```
55
56 ## Docs
57
58 Prefect's documentation -- including concepts, tutorials, and a full API reference -- is always available at [docs.prefect.io](https://docs.prefect.io).
59
60 ## Contributing
61
62 Read about Prefect's [community](https://docs.prefect.io/guide/welcome/community.html) or dive in to the [development guides](https://docs.prefect.io/guide/development/overview.html) for information about contributions, documentation, code style, and testing.
63
64 Join our [Slack](https://join.slack.com/t/prefect-public/shared_invite/enQtNzE5OTU3OTQwNzc1LTQ5M2FkZmQzZjI0ODg1ZTBmOTc0ZjVjYWFjMWExZDAyYzBmYjVmMTE1NTQ1Y2IxZTllOTc4MmI3NzYxMDlhYWU) to chat about Prefect, ask questions, and share tips.
65
66 Prefect is committed to ensuring a positive environment. All interactions are governed by our [Code of Conduct](https://docs.prefect.io/guide/welcome/code_of_conduct.html).
67
68 ## "...Prefect?"
69
70 From the Latin _praefectus_, meaning "one who is in charge", a prefect is an official who oversees a domain and makes sure that the rules are followed. Similarly, Prefect is responsible for making sure that workflows execute properly.
71
72 It also happens to be the name of a roving researcher for that wholly remarkable book, _The Hitchhiker's Guide to the Galaxy_.
73
74 ## Installation
75
76 ### Requirements
77
78 Prefect requires Python 3.5.2+.
79
80 ### Install latest release
81
82 Using `pip`:
83
84 ```bash
85 pip install prefect
86 ```
87
88 or `conda`:
89
90 ```bash
91 conda install -c conda-forge prefect
92 ```
93
94 or `pipenv`:
95 ```
96 pipenv install --pre prefect
97 ```
98
99 ### Install bleeding edge
100
101 ```bash
102 git clone https://github.com/PrefectHQ/prefect.git
103 pip install ./prefect
104 ```
105
106 ## License
107
108 Prefect is licensed under the Apache Software License version 2.0.
109
[end of README.md]
[start of src/prefect/agent/agent.py]
1 import logging
2 from typing import Union
3
4 import pendulum
5 import time
6
7 from prefect import config
8 from prefect.client import Client
9 from prefect.serialization import state
10 from prefect.engine.state import Submitted
11 from prefect.utilities.graphql import with_args
12
13
14 ascii_name = r"""
15 ____ __ _ _ _
16 | _ \ _ __ ___ / _| ___ ___| |_ / \ __ _ ___ _ __ | |_
17 | |_) | '__/ _ \ |_ / _ \/ __| __| / _ \ / _` |/ _ \ '_ \| __|
18 | __/| | | __/ _| __/ (__| |_ / ___ \ (_| | __/ | | | |_
19 |_| |_| \___|_| \___|\___|\__| /_/ \_\__, |\___|_| |_|\__|
20 |___/
21 """
22
23
24 class Agent:
25 """
26 Base class for Agents.
27
28 This Agent class is a standard point for executing Flows in Prefect Cloud. It is meant
29 to have subclasses which inherit functionality from this class. The only piece that
30     the subclasses should implement is the `deploy_flows` function, which specifies how to run a Flow on the given platform. It is built this
31     way to keep Prefect Cloud logic standard while allowing for platform-specific
32     customizability.
33
34 In order for this to operate `PREFECT__CLOUD__AGENT__AUTH_TOKEN` must be set as an
35 environment variable or in your user configuration file.
36 """
37
38 def __init__(self) -> None:
39 self.loop_interval = config.cloud.agent.get("loop_interval")
40
41 self.client = Client(token=config.cloud.agent.get("auth_token"))
42
43 logger = logging.getLogger("agent")
44 logger.setLevel(logging.DEBUG)
45 ch = logging.StreamHandler()
46 ch.setLevel(logging.DEBUG)
47 formatter = logging.Formatter(
48 "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
49 )
50 ch.setFormatter(formatter)
51 logger.addHandler(ch)
52
53 self.logger = logger
54
55 def start(self) -> None:
56 """
57 The main entrypoint to the agent. This function loops and constantly polls for
58 new flow runs to deploy
59 """
60 tenant_id = self.agent_connect()
61 while True:
62 self.agent_process(tenant_id)
63 time.sleep(self.loop_interval)
64
65 def agent_connect(self) -> str:
66 """
67 Verify agent connection to Prefect Cloud by finding and returning a tenant id
68
69 Returns:
70 - str: The current tenant id
71 """
72 print(ascii_name)
73 self.logger.info("Starting {}".format(type(self).__name__))
74 self.logger.info(
75 "Agent documentation can be found at https://docs.prefect.io/cloud/agent"
76 )
77 tenant_id = self.query_tenant_id()
78
79 if not tenant_id:
80 raise ConnectionError(
81 "Tenant ID not found. Verify that you are using the proper API token."
82 )
83
84 self.logger.info("Agent successfully connected to Prefect Cloud")
85 self.logger.info("Waiting for flow runs...")
86
87 return tenant_id
88
89 def agent_process(self, tenant_id: str) -> None:
90 """
91 Full process for finding flow runs, updating states, and deploying.
92
93 Args:
94 - tenant_id (str): The tenant id to use in the query
95 """
96 try:
97 flow_runs = self.query_flow_runs(tenant_id=tenant_id)
98
99 if flow_runs:
100 self.logger.info(
101 "Found {} flow run(s) to submit for execution.".format(
102 len(flow_runs)
103 )
104 )
105
106 self.update_states(flow_runs)
107 self.deploy_flows(flow_runs)
108 self.logger.info(
109 "Submitted {} flow run(s) for execution.".format(len(flow_runs))
110 )
111 except Exception as exc:
112 self.logger.error(exc)
113
114 def query_tenant_id(self) -> Union[str, None]:
115 """
116 Query Prefect Cloud for the tenant id that corresponds to the agent's auth token
117
118 Returns:
119 - Union[str, None]: The current tenant id if found, None otherwise
120 """
121 query = {"query": {"tenant": {"id"}}}
122 result = self.client.graphql(query)
123
124 if result.data.tenant: # type: ignore
125 return result.data.tenant[0].id # type: ignore
126
127 return None
128
129 def query_flow_runs(self, tenant_id: str) -> list:
130 """
131 Query Prefect Cloud for flow runs which need to be deployed and executed
132
133 Args:
134 - tenant_id (str): The tenant id to use in the query
135
136 Returns:
137 - list: A list of GraphQLResult flow run objects
138 """
139
140 # Get scheduled flow runs from queue
141 mutation = {
142 "mutation($input: getRunsInQueueInput!)": {
143 "getRunsInQueue(input: $input)": {"flow_run_ids"}
144 }
145 }
146
147 result = self.client.graphql(
148 mutation, variables={"input": {"tenantId": tenant_id}}
149 )
150 flow_run_ids = result.data.getRunsInQueue.flow_run_ids # type: ignore
151 now = pendulum.now("UTC")
152
153         # Query metadata for flow runs found in queue
154 query = {
155 "query": {
156 with_args(
157 "flow_run",
158 {
159 # match flow runs in the flow_run_ids list
160 "where": {
161 "id": {"_in": flow_run_ids},
162 "_or": [
163 # who are EITHER scheduled...
164 {"state": {"_eq": "Scheduled"}},
165 # OR running with task runs scheduled to start more than 3 seconds ago
166 {
167 "state": {"_eq": "Running"},
168 "task_runs": {
169 "state_start_time": {
170 "_lte": str(now.subtract(seconds=3))
171 }
172 },
173 },
174 ],
175 }
176 },
177 ): {
178 "id": True,
179 "version": True,
180 "tenant_id": True,
181 "state": True,
182 "serialized_state": True,
183 "parameters": True,
184 "flow": {"id", "name", "environment", "storage"},
185 with_args(
186 "task_runs",
187 {
188 "where": {
189 "state_start_time": {
190 "_lte": str(now.subtract(seconds=3))
191 }
192 }
193 },
194 ): {"id", "version", "task_id", "serialized_state"},
195 }
196 }
197 }
198
199 result = self.client.graphql(query)
200 return result.data.flow_run # type: ignore
201
202 def update_states(self, flow_runs: list) -> None:
203 """
204 After a flow run is grabbed this function sets the state to Submitted so it
205 won't be picked up by any other processes
206
207 Args:
208 - flow_runs (list): A list of GraphQLResult flow run objects
209 """
210 for flow_run in flow_runs:
211
212 # Set flow run state to `Submitted` if it is currently `Scheduled`
213 if state.StateSchema().load(flow_run.serialized_state).is_scheduled():
214 self.client.set_flow_run_state(
215 flow_run_id=flow_run.id,
216 version=flow_run.version,
217 state=Submitted(
218 message="Submitted for execution",
219 state=state.StateSchema().load(flow_run.serialized_state),
220 ),
221 )
222
223 # Set task run states to `Submitted` if they are currently `Scheduled`
224 for task_run in flow_run.task_runs:
225 if state.StateSchema().load(task_run.serialized_state).is_scheduled():
226 self.client.set_task_run_state(
227 task_run_id=task_run.id,
228 version=task_run.version,
229 state=Submitted(
230 message="Submitted for execution",
231 state=state.StateSchema().load(task_run.serialized_state),
232 ),
233 )
234
235 def deploy_flows(self, flow_runs: list) -> None:
236 """
237 Meant to be overridden by a platform specific deployment option
238
239 Args:
240 - flow_runs (list): A list of GraphQLResult flow run objects
241 """
242 pass
243
244
245 if __name__ == "__main__":
246 Agent().start()
247
[end of src/prefect/agent/agent.py]
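
For orientation, the queries above (and in the CLI modules that follow) are not written as raw GraphQL strings: they are nested dicts and sets that `prefect.utilities.graphql.parse_graphql` renders into query text, with `with_args` attaching arguments. A small sketch of that convention follows; printing is used rather than asserting exact output formatting.

```python
from prefect.utilities.graphql import parse_graphql, with_args

# Same shape the agent uses to verify its token: query { tenant { id } }
tenant_query = {"query": {"tenant": {"id"}}}

# Arguments are attached with `with_args`, e.g. a filtered flow_run lookup.
flow_run_query = {
    "query": {
        with_args("flow_run", {"where": {"state": {"_eq": "Scheduled"}}}): {
            "id",
            "version",
        }
    }
}

# Client.graphql() calls parse_graphql() on these structures before POSTing.
print(parse_graphql(tenant_query))
print(parse_graphql(flow_run_query))
```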
[start of src/prefect/cli/__init__.py]
1 #!/usr/bin/env python
2
3
4 import click
5
6 import prefect
7
8 from .agent import agent as _agent
9 from .auth import auth as _auth
10 from .describe import describe as _describe
11 from .execute import execute as _execute
12 from .get import get as _get
13 from .run import run as _run
14
15
16 CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
17
18
19 @click.group(context_settings=CONTEXT_SETTINGS)
20 def cli():
21 """
22 The Prefect CLI for creating, managing, and inspecting your flows.
23
24 \b
25 Note: a Prefect Cloud API token is required for all Cloud related commands. If a token
26 is not set then run `prefect auth login` to set it.
27
28 \b
29 Query Commands:
30 get List high-level object information
31 describe Retrieve detailed object descriptions
32
33 \b
34 Execution Commands:
35 execute Execute a flow's environment
36 run Run a flow
37 agent Manage agents
38
39 \b
40 Setup Commands:
41 auth Handle Prefect Cloud authorization
42
43 \b
44 Miscellaneous Commands:
45 version Get your current Prefect version
46 config Output your Prefect config
47 """
48 pass
49
50
51 cli.add_command(_agent)
52 cli.add_command(_auth)
53 cli.add_command(_describe)
54 cli.add_command(_execute)
55 cli.add_command(_get)
56 cli.add_command(_run)
57
58
59 # Miscellaneous Commands
60
61
62 @cli.command(hidden=True)
63 def version():
64 """
65 Get your current Prefect version
66 """
67 click.echo(prefect.__version__)
68
69
70 @cli.command(hidden=True)
71 def config():
72 """
73 Output your Prefect config
74 """
75 click.echo(prefect.config.to_dict())
76
[end of src/prefect/cli/__init__.py]
[start of src/prefect/cli/auth.py]
1 import click
2
3 from prefect import Client, config
4 from prefect.utilities.exceptions import AuthorizationError, ClientError
5
6
7 @click.group(hidden=True)
8 def auth():
9 """
10 Handle Prefect Cloud authorization.
11
12 \b
13 Usage:
14 $ prefect auth [COMMAND]
15
16 \b
17 Arguments:
18 login Login to Prefect Cloud
19
20 \b
21 Examples:
22 $ prefect auth login --token MY_TOKEN
23 """
24 pass
25
26
27 @auth.command(hidden=True)
28 @click.option(
29 "--token", "-t", required=True, help="A Prefect Cloud API token.", hidden=True
30 )
31 def login(token):
32 """
33 Login to Prefect Cloud with an api token to use for Cloud communication.
34
35 \b
36 Options:
37 --token, -t TEXT A Prefect Cloud api token [required]
38 """
39
40 if config.cloud.auth_token:
41 click.confirm(
42 "Prefect Cloud API token already set in config. Do you want to override?",
43 default=True,
44 )
45
46 client = Client()
47 client.login(api_token=token)
48
49 # Verify login obtained a valid api token
50 try:
51 client.graphql(query={"query": "hello"})
52 except AuthorizationError:
53 click.secho(
54 "Error attempting to use Prefect API token {}".format(token), fg="red"
55 )
56 return
57 except ClientError:
58 click.secho("Error attempting to communicate with Prefect Cloud", fg="red")
59 return
60
61 click.secho("Login successful", fg="green")
62
[end of src/prefect/cli/auth.py]
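
A rough sketch, not part of the repository, of how the error path of `login` could be exercised with click's test runner and pytest's `monkeypatch`; `FakeClient` and the test name are illustrative, and the stub simply simulates Cloud rejecting the verification query regardless of which query is sent.

```python
from click.testing import CliRunner

import prefect.cli.auth as auth_cli
from prefect.utilities.exceptions import AuthorizationError


class FakeClient:
    """Stands in for prefect.Client so no network call or token file is involved."""

    def login(self, api_token):
        self.api_token = api_token

    def graphql(self, query):
        # Simulate Cloud rejecting the (now auth-required) verification query.
        raise AuthorizationError("invalid token")


def test_login_reports_bad_token(monkeypatch):
    monkeypatch.setattr(auth_cli, "Client", FakeClient)
    # Skip the interactive "override existing token?" confirmation.
    monkeypatch.setattr(auth_cli.config.cloud, "auth_token", "", raising=False)

    result = CliRunner().invoke(auth_cli.login, ["--token", "not-a-real-token"])

    assert result.exit_code == 0
    assert "Error attempting to use Prefect API token" in result.output
```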
[start of src/prefect/cli/describe.py]
1 import click
2 import pendulum
3 from tabulate import tabulate
4
5 from prefect.client import Client
6 from prefect.utilities.graphql import EnumValue, with_args
7
8
9 @click.group(hidden=True)
10 def describe():
11 """
12 Describe commands that render JSON output of Prefect object metadata.
13
14 \b
15 Usage:
16 $ prefect describe [OBJECT]
17
18 \b
19 Arguments:
20 flow-runs Describe flow runs
21 flows Describe flows
22 tasks Describe tasks
23
24 \b
25 Examples:
26 $ prefect describe flows --name My-Flow --version 2
27 {
28 "name": "My-Flow",
29 "version": 2,
30 "project": {
31 "name": "Test-Project"
32 },
33 "created": "2019-05-08T23:04:58.984132+00:00",
34 "description": null,
35 "parameters": [],
36 "archived": false,
37 "storage": {
38 "type": "Docker",
39 "flows": {
40 "My-Flow": "/root/.prefect/My-Flow.prefect"
41 },
42 "image_tag": "944444e8-8862-4d04-9e36-b81ab15dcaf6",
43 "image_name": "z4f0bb62-8cc1-49d9-bda3-6rf53b865ea5",
44 "__version__": "0.5.3",
45 "registry_url": "myregistry.io/flows/"
46 },
47 "environment": {
48 "type": "CloudEnvironment",
49 "__version__": "0.5.3"
50 }
51 }
52 """
53 pass
54
55
56 @describe.command(hidden=True)
57 @click.option("--name", "-n", required=True, help="A flow name to query.", hidden=True)
58 @click.option("--version", "-v", type=int, help="A flow version to query.", hidden=True)
59 @click.option("--project", "-p", help="The name of a project to query.", hidden=True)
60 def flows(name, version, project):
61 """
62 Describe a Prefect flow.
63
64 \b
65 Options:
66 --name, -n TEXT A flow name to query [required]
67 --version, -v INTEGER A flow version to query
68 --project, -p TEXT The name of a project to query
69 """
70 query = {
71 "query": {
72 with_args(
73 "flow",
74 {
75 "where": {
76 "_and": {
77 "name": {"_eq": name},
78 "version": {"_eq": version},
79 "project": {"name": {"_eq": project}},
80 }
81 },
82 "order_by": {
83 "name": EnumValue("asc"),
84 "version": EnumValue("desc"),
85 },
86 "distinct_on": EnumValue("name"),
87 },
88 ): {
89 "name": True,
90 "version": True,
91 "project": {"name": True},
92 "created": True,
93 "description": True,
94 "parameters": True,
95 "archived": True,
96 "storage": True,
97 "environment": True,
98 }
99 }
100 }
101
102 result = Client().graphql(query)
103
104 flow_data = result.data.flow
105
106 if flow_data:
107 click.echo(flow_data[0])
108 else:
109 click.secho("{} not found".format(name), fg="red")
110
111
112 @describe.command(hidden=True)
113 @click.option("--name", "-n", required=True, help="A flow name to query.", hidden=True)
114 @click.option("--version", "-v", type=int, help="A flow version to query.", hidden=True)
115 @click.option("--project", "-p", help="The name of a project to query.", hidden=True)
116 def tasks(name, version, project):
117 """
118     Describe tasks from a Prefect flow. This command is similar to `prefect describe flows`,
119     but it outputs task metadata instead of flow metadata.
120
121 \b
122 Options:
123 --name, -n TEXT A flow name to query [required]
124 --version, -v INTEGER A flow version to query
125 --project, -p TEXT The name of a project to query
126 """
127 query = {
128 "query": {
129 with_args(
130 "flow",
131 {
132 "where": {
133 "_and": {
134 "name": {"_eq": name},
135 "version": {"_eq": version},
136 "project": {"name": {"_eq": project}},
137 }
138 },
139 "order_by": {
140 "name": EnumValue("asc"),
141 "version": EnumValue("desc"),
142 },
143 "distinct_on": EnumValue("name"),
144 },
145 ): {
146 "tasks": {
147 "name": True,
148 "created": True,
149 "slug": True,
150 "description": True,
151 "type": True,
152 "max_retries": True,
153 "retry_delay": True,
154 "mapped": True,
155 }
156 }
157 }
158 }
159
160 result = Client().graphql(query)
161
162 flow_data = result.data.flow
163 if not flow_data:
164 click.secho("{} not found".format(name), fg="red")
165 return
166
167 task_data = flow_data[0].tasks
168
169 if task_data:
170 for item in task_data:
171 click.echo(item)
172 else:
173 click.secho("No tasks found for flow {}".format(name), fg="red")
174
175
176 @describe.command(hidden=True)
177 @click.option(
178 "--name", "-n", required=True, help="A flow run name to query", hidden=True
179 )
180 @click.option("--flow-name", "-fn", help="A flow name to query", hidden=True)
181 def flow_runs(name, flow_name):
182 """
183 Describe a Prefect flow run.
184
185 \b
186 Options:
187 --name, -n TEXT A flow run name to query [required]
188 --flow-name, -fn TEXT A flow name to query
189 """
190 query = {
191 "query": {
192 with_args(
193 "flow_run",
194 {
195 "where": {
196 "_and": {
197 "name": {"_eq": name},
198 "flow": {"name": {"_eq": flow_name}},
199 }
200 }
201 },
202 ): {
203 "name": True,
204 "flow": {"name": True},
205 "created": True,
206 "parameters": True,
207 "auto_scheduled": True,
208 "scheduled_start_time": True,
209 "start_time": True,
210 "end_time": True,
211 "duration": True,
212 "heartbeat": True,
213 "serialized_state": True,
214 }
215 }
216 }
217
218 result = Client().graphql(query)
219
220 flow_run_data = result.data.flow_run
221
222 if flow_run_data:
223 click.echo(flow_run_data[0])
224 else:
225 click.secho("{} not found".format(name), fg="red")
226
[end of src/prefect/cli/describe.py]
[start of src/prefect/cli/get.py]
1 import click
2 import pendulum
3 from tabulate import tabulate
4
5 from prefect import config
6 from prefect.client import Client
7 from prefect.utilities.graphql import EnumValue, with_args
8
9
10 @click.group(hidden=True)
11 def get():
12 """
13     Get commands for querying Prefect Cloud metadata.
14
15 \b
16 Usage:
17 $ prefect get [OBJECT]
18
19 \b
20 Arguments:
21 flow-runs Query flow runs
22 flows Query flows
23 projects Query projects
24 tasks Query tasks
25 logs Query logs
26
27 \b
28 Examples:
29 $ prefect get flows
30 NAME VERSION PROJECT NAME AGE
31 My-Flow 3 My-Project 3 days ago
32
33 \b
34 $ prefect get flows --project New-Proj --all-versions
35 NAME VERSION PROJECT NAME AGE
36 Test-Flow 2 New-Proj 22 hours ago
37 Test-Flow 1 New-Proj 1 month ago
38
39 \b
40 $ prefect get tasks --flow-name Test-Flow
41 NAME FLOW NAME FLOW VERSION AGE MAPPED TYPE
42 first_task Test-Flow 1 5 days ago False prefect.tasks.core.function.FunctionTask
43 second_task Test-Flow 1 5 days ago True prefect.tasks.core.function.FunctionTask
44 """
45 pass
46
47
48 @get.command(hidden=True)
49 @click.option("--name", "-n", help="A flow name to query.", hidden=True)
50 @click.option("--version", "-v", type=int, help="A flow version to query.", hidden=True)
51 @click.option("--project", "-p", help="The name of a project to query.", hidden=True)
52 @click.option(
53 "--limit", "-l", default=10, help="A limit amount of flows to query.", hidden=True
54 )
55 @click.option(
56 "--all-versions", is_flag=True, help="Query all flow versions.", hidden=True
57 )
58 def flows(name, version, project, limit, all_versions):
59 """
60 Query information regarding your Prefect flows.
61
62 \b
63 Options:
64 --name, -n TEXT A flow name to query
65 --version, -v TEXT A flow version to query
66 --project, -p TEXT The name of a project to query
67 --limit, -l INTEGER A limit amount of flows to query, defaults to 10
68 --all-versions Output all versions of a flow, default shows most recent
69 """
70
71 distinct_on = EnumValue("name")
72 if all_versions:
73 distinct_on = None
74
75 query = {
76 "query": {
77 with_args(
78 "flow",
79 {
80 "where": {
81 "_and": {
82 "name": {"_eq": name},
83 "version": {"_eq": version},
84 "project": {"name": {"_eq": project}},
85 }
86 },
87 "order_by": {
88 "name": EnumValue("asc"),
89 "version": EnumValue("desc"),
90 },
91 "distinct_on": distinct_on,
92 "limit": limit,
93 },
94 ): {
95 "name": True,
96 "version": True,
97 "project": {"name": True},
98 "created": True,
99 }
100 }
101 }
102
103 result = Client().graphql(query)
104
105 flow_data = result.data.flow
106
107 output = []
108 for item in flow_data:
109 output.append(
110 [
111 item.name,
112 item.version,
113 item.project.name,
114 pendulum.parse(item.created).diff_for_humans(),
115 ]
116 )
117
118 click.echo(
119 tabulate(
120 output,
121 headers=["NAME", "VERSION", "PROJECT NAME", "AGE"],
122 tablefmt="plain",
123 numalign="left",
124 stralign="left",
125 )
126 )
127
128
129 @get.command(hidden=True)
130 @click.option("--name", "-n", help="A project name to query.", hidden=True)
131 def projects(name):
132 """
133 Query information regarding your Prefect projects.
134
135 \b
136 Options:
137 --name, -n TEXT A project name to query
138 """
139 query = {
140 "query": {
141 with_args(
142 "project",
143 {
144 "where": {"_and": {"name": {"_eq": name}}},
145 "order_by": {"name": EnumValue("asc")},
146 },
147 ): {
148 "name": True,
149 "created": True,
150 "description": True,
151 with_args("flows_aggregate", {"distinct_on": EnumValue("name")}): {
152 EnumValue("aggregate"): EnumValue("count")
153 },
154 }
155 }
156 }
157
158 result = Client().graphql(query)
159
160 project_data = result.data.project
161
162 output = []
163 for item in project_data:
164 output.append(
165 [
166 item.name,
167 item.flows_aggregate.aggregate.count,
168 pendulum.parse(item.created).diff_for_humans(),
169 item.description,
170 ]
171 )
172
173 click.echo(
174 tabulate(
175 output,
176 headers=["NAME", "FLOW COUNT", "AGE", "DESCRIPTION"],
177 tablefmt="plain",
178 numalign="left",
179 stralign="left",
180 )
181 )
182
183
184 @get.command(hidden=True)
185 @click.option(
186 "--limit",
187 "-l",
188 default=10,
189 help="A limit amount of flow runs to query.",
190 hidden=True,
191 )
192 @click.option("--flow", "-f", help="Specify a flow's runs to query.", hidden=True)
193 @click.option("--project", "-p", help="Specify a project's runs to query.", hidden=True)
194 @click.option(
195 "--started",
196 "-s",
197 is_flag=True,
198 help="Only retrieve started flow runs.",
199 hidden=True,
200 )
201 def flow_runs(limit, flow, project, started):
202 """
203 Query information regarding Prefect flow runs.
204
205 \b
206 Options:
207         --limit, -l INTEGER      A limit amount of flow runs to query, defaults to 10
208 --flow, -f TEXT Name of a flow to query for runs
209 --project, -p TEXT Name of a project to query
210 --started, -s Only retrieve started flow runs, default shows `Scheduled` runs
211 """
212
213 if started:
214 order = {"start_time": EnumValue("desc")}
215
216 where = {
217 "_and": {
218 "flow": {
219 "_and": {
220 "name": {"_eq": flow},
221 "project": {"name": {"_eq": project}},
222 }
223 },
224 "start_time": {"_is_null": False},
225 }
226 }
227 else:
228 order = {"created": EnumValue("desc")}
229
230 where = {
231 "flow": {
232 "_and": {"name": {"_eq": flow}, "project": {"name": {"_eq": project}}}
233 }
234 }
235
236 query = {
237 "query": {
238 with_args(
239 "flow_run", {"where": where, "limit": limit, "order_by": order}
240 ): {
241 "flow": {"name": True},
242 "created": True,
243 "state": True,
244 "name": True,
245 "duration": True,
246 "start_time": True,
247 }
248 }
249 }
250
251 result = Client().graphql(query)
252
253 flow_run_data = result.data.flow_run
254
255 output = []
256 for item in flow_run_data:
257 start_time = (
258 pendulum.parse(item.start_time).to_datetime_string()
259 if item.start_time
260 else None
261 )
262 output.append(
263 [
264 item.name,
265 item.flow.name,
266 item.state,
267 pendulum.parse(item.created).diff_for_humans(),
268 start_time,
269 item.duration,
270 ]
271 )
272
273 click.echo(
274 tabulate(
275 output,
276 headers=["NAME", "FLOW NAME", "STATE", "AGE", "START TIME", "DURATION"],
277 tablefmt="plain",
278 numalign="left",
279 stralign="left",
280 )
281 )
282
283
284 @get.command(hidden=True)
285 @click.option("--name", "-n", help="A task name to query", hidden=True)
286 @click.option("--flow-name", "-fn", help="A flow name to query", hidden=True)
287 @click.option(
288 "--flow-version", "-fv", type=int, help="A flow version to query.", hidden=True
289 )
290 @click.option("--project", "-p", help="The name of a project to query.", hidden=True)
291 @click.option(
292 "--limit", "-l", default=10, help="A limit amount of tasks to query.", hidden=True
293 )
294 def tasks(name, flow_name, flow_version, project, limit):
295 """
296 Query information regarding your Prefect tasks.
297
298 \b
299 Options:
300 --name, -n TEXT A task name to query
301 --flow-name, -fn TEXT A flow name to query
302 --flow-version, -fv INTEGER A flow version to query
303 --project, -p TEXT The name of a project to query
304 --limit, -l INTEGER A limit amount of tasks to query, defaults to 10
305 """
306
307 query = {
308 "query": {
309 with_args(
310 "task",
311 {
312 "where": {
313 "_and": {
314 "name": {"_eq": name},
315 "flow": {
316 "name": {"_eq": flow_name},
317 "project": {"name": {"_eq": project}},
318 "version": {"_eq": flow_version},
319 },
320 }
321 },
322 "limit": limit,
323 "order_by": {"created": EnumValue("desc")},
324 },
325 ): {
326 "name": True,
327 "created": True,
328 "flow": {"name": True, "version": True},
329 "mapped": True,
330 "type": True,
331 }
332 }
333 }
334
335 result = Client().graphql(query)
336
337 task_data = result.data.task
338
339 output = []
340 for item in task_data:
341 output.append(
342 [
343 item.name,
344 item.flow.name,
345 item.flow.version,
346 pendulum.parse(item.created).diff_for_humans(),
347 item.mapped,
348 item.type,
349 ]
350 )
351
352 click.echo(
353 tabulate(
354 output,
355 headers=["NAME", "FLOW NAME", "FLOW VERSION", "AGE", "MAPPED", "TYPE"],
356 tablefmt="plain",
357 numalign="left",
358 stralign="left",
359 )
360 )
361
362
363 @get.command(hidden=True)
364 @click.option(
365 "--name", "-n", required=True, help="A flow run name to query", hidden=True
366 )
367 @click.option(
368 "--info", "-i", is_flag=True, help="Retrieve detailed logging info", hidden=True
369 )
370 def logs(name, info):
371 """
372 Query logs for a flow run.
373
374 \b
375 Options:
376 --name, -n TEXT A flow run name to query [required]
377 --info, -i Retrieve detailed logging info
378 """
379 log_query = {
380 with_args("logs", {"order_by": {EnumValue("timestamp"): EnumValue("asc")}}): {
381 "timestamp": True,
382 "message": True,
383 "level": True,
384 },
385 "start_time": True,
386 }
387 if info:
388 log_query = {
389 with_args(
390 "logs", {"order_by": {EnumValue("timestamp"): EnumValue("asc")}}
391 ): {"timestamp": True, "info": True},
392 "start_time": True,
393 }
394
395 query = {
396 "query": {
397 with_args(
398 "flow_run",
399 {
400 "where": {"name": {"_eq": name}},
401 "order_by": {EnumValue("start_time"): EnumValue("desc")},
402 },
403 ): log_query
404 }
405 }
406
407 result = Client().graphql(query)
408
409 flow_run = result.data.flow_run
410 if not flow_run:
411 click.secho("{} not found".format(name), fg="red")
412 return
413
414 run = flow_run[0]
415 logs = run.logs
416 output = []
417
418 if not info:
419 for log in logs:
420 output.append([log.timestamp, log.level, log.message])
421
422 click.echo(
423 tabulate(
424 output,
425 headers=["TIMESTAMP", "LEVEL", "MESSAGE"],
426 tablefmt="plain",
427 numalign="left",
428 stralign="left",
429 )
430 )
431 return
432
433 for log in logs:
434 click.echo(log.info)
435
[end of src/prefect/cli/get.py]
[start of src/prefect/client/client.py]
1 import base64
2 import datetime
3 import json
4 import logging
5 import os
6 from typing import TYPE_CHECKING, Any, Dict, List, NamedTuple, Optional, Union
7
8 import pendulum
9 import requests
10 from requests.adapters import HTTPAdapter
11 from requests.packages.urllib3.util.retry import Retry
12
13 import prefect
14 from prefect.utilities.exceptions import AuthorizationError, ClientError
15 from prefect.utilities.graphql import (
16 EnumValue,
17 GraphQLResult,
18 as_nested_dict,
19 compress,
20 parse_graphql,
21 with_args,
22 )
23
24 if TYPE_CHECKING:
25 from prefect.core import Flow
26 JSONLike = Union[bool, dict, list, str, int, float, None]
27
28 # type definitions for GraphQL results
29
30 TaskRunInfoResult = NamedTuple(
31 "TaskRunInfoResult",
32 [
33 ("id", str),
34 ("task_id", str),
35 ("task_slug", str),
36 ("version", int),
37 ("state", "prefect.engine.state.State"),
38 ],
39 )
40
41 FlowRunInfoResult = NamedTuple(
42 "FlowRunInfoResult",
43 [
44 ("parameters", Dict[str, Any]),
45 ("context", Dict[str, Any]),
46 ("version", int),
47 ("scheduled_start_time", datetime.datetime),
48 ("state", "prefect.engine.state.State"),
49 ("task_runs", List[TaskRunInfoResult]),
50 ],
51 )
52
53
54 class Client:
55 """
56 Client for communication with Prefect Cloud
57
58 If the arguments aren't specified the client initialization first checks the prefect
59 configuration and if the server is not set there it checks the current context. The
60 token will only be present in the current context.
61
62 Args:
63 - graphql_server (str, optional): the URL to send all GraphQL requests
64 to; if not provided, will be pulled from `cloud.graphql` config var
65 - token (str, optional): a Prefect Cloud auth token for communication; if not
66 provided, will be pulled from `cloud.auth_token` config var
67 """
68
69 def __init__(self, graphql_server: str = None, token: str = None):
70
71 if not graphql_server:
72 graphql_server = prefect.config.cloud.get("graphql")
73 self.graphql_server = graphql_server
74
75 token = token or prefect.config.cloud.get("auth_token", None)
76
77 self.token_is_local = False
78 if token is None:
79 if os.path.exists(self.local_token_path):
80 with open(self.local_token_path, "r") as f:
81 token = f.read() or None
82 self.token_is_local = True
83
84 self.token = token
85
86 @property
87 def local_token_path(self) -> str:
88 """
89 Returns the local token path corresponding to the provided graphql_server
90 """
91 graphql_server = (self.graphql_server or "").replace("/", "_")
92 return os.path.expanduser("~/.prefect/tokens/{}".format(graphql_server))
93
94 # -------------------------------------------------------------------------
95 # Utilities
96
97 def get(
98 self,
99 path: str,
100 server: str = None,
101 headers: dict = None,
102 params: Dict[str, JSONLike] = None,
103 ) -> dict:
104 """
105 Convenience function for calling the Prefect API with token auth and GET request
106
107 Args:
108 - path (str): the path of the API url. For example, to GET
109 http://prefect-server/v1/auth/login, path would be 'auth/login'.
110 - server (str, optional): the server to send the GET request to;
111 defaults to `self.graphql_server`
112 - headers (dict, optional): Headers to pass with the request
113 - params (dict): GET parameters
114
115 Returns:
116 - dict: Dictionary representation of the request made
117 """
118 response = self._request(
119 method="GET", path=path, params=params, server=server, headers=headers
120 )
121 if response.text:
122 return response.json()
123 else:
124 return {}
125
126 def post(
127 self,
128 path: str,
129 server: str = None,
130 headers: dict = None,
131 params: Dict[str, JSONLike] = None,
132 ) -> dict:
133 """
134 Convenience function for calling the Prefect API with token auth and POST request
135
136 Args:
137 - path (str): the path of the API url. For example, to POST
138 http://prefect-server/v1/auth/login, path would be 'auth/login'.
139 - server (str, optional): the server to send the POST request to;
140 defaults to `self.graphql_server`
141 - headers(dict): headers to pass with the request
142 - params (dict): POST parameters
143
144 Returns:
145 - dict: Dictionary representation of the request made
146 """
147 response = self._request(
148 method="POST", path=path, params=params, server=server, headers=headers
149 )
150 if response.text:
151 return response.json()
152 else:
153 return {}
154
155 def graphql(
156 self,
157 query: Any,
158 raise_on_error: bool = True,
159 headers: Dict[str, str] = None,
160 variables: Dict[str, JSONLike] = None,
161 ) -> GraphQLResult:
162 """
163 Convenience function for running queries against the Prefect GraphQL API
164
165 Args:
166 - query (Any): A representation of a graphql query to be executed. It will be
167 parsed by prefect.utilities.graphql.parse_graphql().
168 - raise_on_error (bool): if True, a `ClientError` will be raised if the GraphQL
169 returns any `errors`.
170 - headers (dict): any additional headers that should be passed as part of the
171 request
172 - variables (dict): Variables to be filled into a query with the key being
173 equivalent to the variables that are accepted by the query
174
175 Returns:
176 - dict: Data returned from the GraphQL query
177
178 Raises:
179 - ClientError if there are errors raised by the GraphQL mutation
180 """
181 result = self.post(
182 path="",
183 server=self.graphql_server,
184 headers=headers,
185 params=dict(query=parse_graphql(query), variables=json.dumps(variables)),
186 )
187
188 if raise_on_error and "errors" in result:
189 raise ClientError(result["errors"])
190 else:
191 return as_nested_dict(result, GraphQLResult) # type: ignore
192
193 def _request(
194 self,
195 method: str,
196 path: str,
197 params: Dict[str, JSONLike] = None,
198 server: str = None,
199 headers: dict = None,
200 ) -> "requests.models.Response":
201 """
202 Runs any specified request (GET, POST, DELETE) against the server
203
204 Args:
205 - method (str): The type of request to be made (GET, POST, DELETE)
206 - path (str): Path of the API URL
207 - params (dict, optional): Parameters used for the request
208 - server (str, optional): The server to make requests against, base API
209 server is used if not specified
210 - headers (dict, optional): Headers to pass with the request
211
212 Returns:
213 - requests.models.Response: The response returned from the request
214
215 Raises:
216 - ClientError: if the client token is not in the context (due to not being logged in)
217 - ValueError: if a method is specified outside of the accepted GET, POST, DELETE
218 - requests.HTTPError: if a status code is returned that is not `200` or `401`
219 """
220 if server is None:
221 server = self.graphql_server
222 assert isinstance(server, str) # mypy assert
223
224 if self.token is None:
225 raise AuthorizationError("No token found; call Client.login() to set one.")
226
227 url = os.path.join(server, path.lstrip("/")).rstrip("/")
228
229 params = params or {}
230
231 headers = headers or {}
232 headers.update({"Authorization": "Bearer {}".format(self.token)})
233 session = requests.Session()
234 retries = Retry(
235 total=6,
236 backoff_factor=1,
237 status_forcelist=[500, 502, 503, 504],
238 method_whitelist=["DELETE", "GET", "POST"],
239 )
240 session.mount("https://", HTTPAdapter(max_retries=retries))
241 if method == "GET":
242 response = session.get(url, headers=headers, params=params)
243 elif method == "POST":
244 response = session.post(url, headers=headers, json=params)
245 elif method == "DELETE":
246 response = session.delete(url, headers=headers)
247 else:
248 raise ValueError("Invalid method: {}".format(method))
249
250 # Check if request returned a successful status
251 response.raise_for_status()
252
253 return response
254
255 # -------------------------------------------------------------------------
256 # Auth
257 # -------------------------------------------------------------------------
258
259 def login(self, api_token: str) -> None:
260 """
261 Logs in to Prefect Cloud with an API token. The token is written to local storage
262 so it persists across Prefect sessions.
263
264 Args:
265 - api_token (str): a Prefect Cloud API token
266
267 Raises:
268 - AuthorizationError if unable to login to the server (request does not return `200`)
269 """
270 if not os.path.exists(os.path.dirname(self.local_token_path)):
271 os.makedirs(os.path.dirname(self.local_token_path))
272 with open(self.local_token_path, "w+") as f:
273 f.write(api_token)
274 self.token = api_token
275 self.token_is_local = True
276
277 def logout(self) -> None:
278 """
279 Deletes the token from this client, and removes it from local storage.
280 """
281 self.token = None
282 if self.token_is_local:
283 if os.path.exists(self.local_token_path):
284 os.remove(self.local_token_path)
285 self.token_is_local = False
286
287 def deploy(
288 self,
289 flow: "Flow",
290 project_name: str,
291 build: bool = True,
292 set_schedule_active: bool = True,
293 compressed: bool = True,
294 ) -> str:
295 """
296 Push a new flow to Prefect Cloud
297
298 Args:
299 - flow (Flow): a flow to deploy
300 - project_name (str): the project that should contain this flow.
301 - build (bool, optional): if `True`, the flow's environment is built
302 prior to serialization; defaults to `True`
303 - set_schedule_active (bool, optional): if `False`, will set the
304 schedule to inactive in the database to prevent auto-scheduling runs (if the Flow has a schedule).
305 Defaults to `True`. This can be changed later.
306             - compressed (bool, optional): if `True`, the serialized flow will be
307                 compressed; defaults to `True`
308
309 Returns:
310 - str: the ID of the newly-deployed flow
311
312 Raises:
313 - ClientError: if the deploy failed
314 """
315 required_parameters = {p for p in flow.parameters() if p.required}
316 if flow.schedule is not None and required_parameters:
317 raise ClientError(
318 "Flows with required parameters can not be scheduled automatically."
319 )
320 if compressed:
321 create_mutation = {
322 "mutation($input: createFlowFromCompressedStringInput!)": {
323 "createFlowFromCompressedString(input: $input)": {"id"}
324 }
325 }
326 else:
327 create_mutation = {
328 "mutation($input: createFlowInput!)": {
329 "createFlow(input: $input)": {"id"}
330 }
331 }
332
333 query_project = {
334 "query": {
335 with_args("project", {"where": {"name": {"_eq": project_name}}}): {
336 "id": True
337 }
338 }
339 }
340
341 project = self.graphql(query_project).data.project # type: ignore
342
343 if not project:
344 raise ValueError(
345 "Project {} not found. Run `client.create_project({})` to create it.".format(
346 project_name, project_name
347 )
348 )
349
350 serialized_flow = flow.serialize(build=build) # type: Any
351 if compressed:
352 serialized_flow = compress(serialized_flow)
353 res = self.graphql(
354 create_mutation,
355 variables=dict(
356 input=dict(
357 projectId=project[0].id,
358 serializedFlow=serialized_flow,
359 setScheduleActive=set_schedule_active,
360 )
361 ),
362 ) # type: Any
363
364 flow_id = (
365 res.data.createFlowFromCompressedString.id
366 if compressed
367 else res.data.createFlow.id
368 )
369 return flow_id
370
371 def create_project(self, project_name: str) -> str:
372 """
373 Create a new Project
374
375 Args:
376 - project_name (str): the project that should contain this flow.
377
378 Returns:
379 - str: the ID of the newly-created project
380
381 Raises:
382 - ClientError: if the project creation failed
383 """
384 project_mutation = {
385 "mutation($input: createProjectInput!)": {
386 "createProject(input: $input)": {"id"}
387 }
388 }
389
390 res = self.graphql(
391 project_mutation, variables=dict(input=dict(name=project_name))
392 ) # type: Any
393
394 return res.data.createProject.id
395
396 def create_flow_run(
397 self,
398 flow_id: str,
399 context: dict = None,
400 parameters: dict = None,
401 scheduled_start_time: datetime.datetime = None,
402 idempotency_key: str = None,
403 ) -> str:
404 """
405         Create a new flow run for the given flow id. If `scheduled_start_time` is not provided, the flow run will be scheduled to start immediately.
406
407 Args:
408 - flow_id (str): the id of the Flow you wish to schedule
409 - context (dict, optional): the run context
410 - parameters (dict, optional): a dictionary of parameter values to pass to the flow run
411 - scheduled_start_time (datetime, optional): the time to schedule the execution for; if not provided, defaults to now
412 - idempotency_key (str, optional): an idempotency key; if provided, this run will be cached for 24
413 hours. Any subsequent attempts to create a run with the same idempotency key
414 will return the ID of the originally created run (no new run will be created after the first).
415 An error will be raised if parameters or context are provided and don't match the original.
416 Each subsequent request will reset the TTL for 24 hours.
417
418 Returns:
419 - str: the ID of the newly-created flow run
420
421 Raises:
422 - ClientError: if the GraphQL query is bad for any reason
423 """
424 create_mutation = {
425 "mutation($input: createFlowRunInput!)": {
426 "createFlowRun(input: $input)": {"flow_run": "id"}
427 }
428 }
429 inputs = dict(flowId=flow_id)
430 if parameters is not None:
431 inputs.update(parameters=parameters) # type: ignore
432 if context is not None:
433 inputs.update(context=context) # type: ignore
434 if idempotency_key is not None:
435 inputs.update(idempotencyKey=idempotency_key) # type: ignore
436 if scheduled_start_time is not None:
437 inputs.update(
438 scheduledStartTime=scheduled_start_time.isoformat()
439 ) # type: ignore
440 res = self.graphql(create_mutation, variables=dict(input=inputs))
441 return res.data.createFlowRun.flow_run.id # type: ignore
442
443 def get_flow_run_info(self, flow_run_id: str) -> FlowRunInfoResult:
444 """
445 Retrieves version and current state information for the given flow run.
446
447 Args:
448 - flow_run_id (str): the id of the flow run to get information for
449
450 Returns:
451 - GraphQLResult: a `DotDict` representing information about the flow run
452
453 Raises:
454 - ClientError: if the GraphQL mutation is bad for any reason
455 """
456 query = {
457 "query": {
458 with_args("flow_run_by_pk", {"id": flow_run_id}): {
459 "parameters": True,
460 "context": True,
461 "version": True,
462 "scheduled_start_time": True,
463 "serialized_state": True,
464 # load all task runs except dynamic task runs
465 with_args("task_runs", {"where": {"map_index": {"_eq": -1}}}): {
466 "id": True,
467 "task": {"id": True, "slug": True},
468 "version": True,
469 "serialized_state": True,
470 },
471 }
472 }
473 }
474 result = self.graphql(query).data.flow_run_by_pk # type: ignore
475 if result is None:
476 raise ClientError('Flow run ID not found: "{}"'.format(flow_run_id))
477
478 # convert scheduled_start_time from string to datetime
479 result.scheduled_start_time = pendulum.parse(result.scheduled_start_time)
480
481 # create "state" attribute from serialized_state
482 result.state = prefect.engine.state.State.deserialize(
483 result.pop("serialized_state")
484 )
485
486 # reformat task_runs
487 task_runs = []
488 for tr in result.task_runs:
489 tr.state = prefect.engine.state.State.deserialize(
490 tr.pop("serialized_state")
491 )
492 task_info = tr.pop("task")
493 tr.task_id = task_info["id"]
494 tr.task_slug = task_info["slug"]
495 task_runs.append(TaskRunInfoResult(**tr))
496
497 result.task_runs = task_runs
498 result.context = (
499 result.context.to_dict() if result.context is not None else None
500 )
501 result.parameters = (
502 result.parameters.to_dict() if result.parameters is not None else None
503 )
504 return FlowRunInfoResult(**result)
505
506 def update_flow_run_heartbeat(self, flow_run_id: str) -> None:
507 """
508 Convenience method for heartbeating a flow run.
509
510 Does NOT raise an error if the update fails.
511
512 Args:
513 - flow_run_id (str): the flow run ID to heartbeat
514
515 """
516 mutation = {
517 "mutation": {
518 with_args(
519 "updateFlowRunHeartbeat", {"input": {"flowRunId": flow_run_id}}
520 ): {"success"}
521 }
522 }
523 self.graphql(mutation, raise_on_error=False)
524
525 def update_task_run_heartbeat(self, task_run_id: str) -> None:
526 """
527 Convenience method for heartbeating a task run.
528
529 Does NOT raise an error if the update fails.
530
531 Args:
532 - task_run_id (str): the task run ID to heartbeat
533
534 """
535 mutation = {
536 "mutation": {
537 with_args(
538 "updateTaskRunHeartbeat", {"input": {"taskRunId": task_run_id}}
539 ): {"success"}
540 }
541 }
542 self.graphql(mutation, raise_on_error=False)
543
544 def set_flow_run_state(
545 self, flow_run_id: str, version: int, state: "prefect.engine.state.State"
546 ) -> None:
547 """
548 Sets new state for a flow run in the database.
549
550 Args:
551 - flow_run_id (str): the id of the flow run to set state for
552 - version (int): the current version of the flow run state
553 - state (State): the new state for this flow run
554
555 Raises:
556 - ClientError: if the GraphQL mutation is bad for any reason
557 """
558 mutation = {
559 "mutation($state: JSON!)": {
560 with_args(
561 "setFlowRunState",
562 {
563 "input": {
564 "flowRunId": flow_run_id,
565 "version": version,
566 "state": EnumValue("$state"),
567 }
568 },
569 ): {"id"}
570 }
571 }
572
573 serialized_state = state.serialize()
574
575 self.graphql(mutation, variables=dict(state=serialized_state)) # type: Any
576
577 def get_latest_cached_states(
578 self, task_id: str, cache_key: Optional[str], created_after: datetime.datetime
579 ) -> List["prefect.engine.state.State"]:
580 """
581 Pulls all Cached states for the given task that were created after the provided date.
582
583 Args:
584 - task_id (str): the task id for this task run
585 - cache_key (Optional[str]): the cache key for this Task's cache; if `None`, the task id alone will be used
586 - created_after (datetime.datetime): the earliest date the state should have been created at
587
588 Returns:
589 - List[State]: a list of Cached states created after the given date
590 """
591 where_clause = {
592 "where": {
593 "state": {"_eq": "Cached"},
594 "_or": [
595 {"cache_key": {"_eq": cache_key}},
596 {"task_id": {"_eq": task_id}},
597 ],
598 "state_timestamp": {"_gte": created_after.isoformat()},
599 },
600 "order_by": {"state_timestamp": EnumValue("desc")},
601 }
602 query = {"query": {with_args("task_run", where_clause): "serialized_state"}}
603 result = self.graphql(query) # type: Any
604 deserializer = prefect.engine.state.State.deserialize
605 valid_states = [
606 deserializer(res.serialized_state) for res in result.data.task_run
607 ]
608 return valid_states
609
610 def get_task_run_info(
611 self, flow_run_id: str, task_id: str, map_index: Optional[int] = None
612 ) -> TaskRunInfoResult:
613 """
614 Retrieves version and current state information for the given task run.
615
616 Args:
617 - flow_run_id (str): the id of the flow run that this task run lives in
618 - task_id (str): the task id for this task run
619 - map_index (int, optional): the mapping index for this task run; if
620 `None`, it is assumed this task is _not_ mapped
621
622 Returns:
623 - NamedTuple: a tuple containing `id, task_id, version, state`
624
625 Raises:
626 - ClientError: if the GraphQL mutation is bad for any reason
627 """
628
629 mutation = {
630 "mutation": {
631 with_args(
632 "getOrCreateTaskRun",
633 {
634 "input": {
635 "flowRunId": flow_run_id,
636 "taskId": task_id,
637 "mapIndex": -1 if map_index is None else map_index,
638 }
639 },
640 ): {
641 "task_run": {
642 "id": True,
643 "version": True,
644 "serialized_state": True,
645 "task": {"slug": True},
646 }
647 }
648 }
649 }
650 result = self.graphql(mutation) # type: Any
651 task_run = result.data.getOrCreateTaskRun.task_run
652
653 state = prefect.engine.state.State.deserialize(task_run.serialized_state)
654 return TaskRunInfoResult(
655 id=task_run.id,
656 task_id=task_id,
657 task_slug=task_run.task.slug,
658 version=task_run.version,
659 state=state,
660 )
661
662 def set_task_run_state(
663 self,
664 task_run_id: str,
665 version: int,
666 state: "prefect.engine.state.State",
667 cache_for: datetime.timedelta = None,
668 ) -> None:
669 """
670 Sets new state for a task run.
671
672 Args:
673 - task_run_id (str): the id of the task run to set state for
674 - version (int): the current version of the task run state
675 - state (State): the new state for this task run
676 - cache_for (timedelta, optional): how long to store the result of this task for, using the
677 serializer set in config; if not provided, no caching occurs
678
679 Raises:
680 - ClientError: if the GraphQL mutation is bad for any reason
681 """
682 mutation = {
683 "mutation($state: JSON!)": {
684 with_args(
685 "setTaskRunState",
686 {
687 "input": {
688 "taskRunId": task_run_id,
689 "version": version,
690 "state": EnumValue("$state"),
691 }
692 },
693 ): {"id"}
694 }
695 }
696
697 serialized_state = state.serialize()
698
699 self.graphql(mutation, variables=dict(state=serialized_state)) # type: Any
700
701 def set_secret(self, name: str, value: Any) -> None:
702 """
703 Set a secret with the given name and value.
704
705 Args:
706 - name (str): the name of the secret; used for retrieving the secret
707 during task runs
708 - value (Any): the value of the secret
709
710 Raises:
711 - ClientError: if the GraphQL mutation is bad for any reason
712 - ValueError: if the secret-setting was unsuccessful
713 """
714 mutation = {
715 "mutation($input: setSecretInput!)": {
716 "setSecret(input: $input)": {"success"}
717 }
718 }
719
720 result = self.graphql(
721 mutation, variables=dict(input=dict(name=name, value=value))
722 ) # type: Any
723
724 if not result.data.setSecret.success:
725 raise ValueError("Setting secret failed.")
726
727 def write_run_log(
728 self,
729 flow_run_id: str,
730 task_run_id: str = None,
731 timestamp: datetime.datetime = None,
732 name: str = None,
733 message: str = None,
734 level: str = None,
735 info: Any = None,
736 ) -> None:
737 """
738 Writes a log to Cloud
739
740 Args:
741 - flow_run_id (str): the flow run id
742 - task_run_id (str, optional): the task run id
743 - timestamp (datetime, optional): the timestamp; defaults to now
744 - name (str, optional): the name of the logger
745 - message (str, optional): the log message
746 - level (str, optional): the log level as a string. Defaults to INFO, should be one of
747 DEBUG, INFO, WARNING, ERROR, or CRITICAL.
748 - info (Any, optional): a JSON payload of additional information
749
750 Raises:
751 - ValueError: if writing the log fails
752 """
753 mutation = {
754 "mutation($input: writeRunLogInput!)": {
755 "writeRunLog(input: $input)": {"success"}
756 }
757 }
758
759 if timestamp is None:
760 timestamp = pendulum.now("UTC")
761 timestamp_str = pendulum.instance(timestamp).isoformat()
762 result = self.graphql(
763 mutation,
764 variables=dict(
765 input=dict(
766 flowRunId=flow_run_id,
767 taskRunId=task_run_id,
768 timestamp=timestamp_str,
769 name=name,
770 message=message,
771 level=level,
772 info=info,
773 )
774 ),
775 ) # type: Any
776
777 if not result.data.writeRunLog.success:
778 raise ValueError("Writing log failed.")
779
[end of src/prefect/client/client.py]
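
For reference, a hedged usage sketch of the `Client.graphql` method defined above: a plain query built with the nested-dict convention, followed by the parameterised-mutation pattern that `create_flow_run` uses. It assumes a valid API token is already configured, and `"<flow-id>"` is a placeholder rather than a real ID.

```python
from prefect.client import Client

client = Client()  # picks up the graphql server and token from config / local storage

# Plain query using the nested dict/set convention from this module.
result = client.graphql({"query": {"flow": {"id", "name"}}})
for flow in result.data.flow:
    print(flow.id, flow.name)

# Parameterised operation: `variables` is JSON-encoded and sent alongside the
# parsed query, mirroring the pattern used by `create_flow_run` above.
mutation = {
    "mutation($input: createFlowRunInput!)": {
        "createFlowRun(input: $input)": {"flow_run": "id"}
    }
}
res = client.graphql(mutation, variables=dict(input=dict(flowId="<flow-id>")))
print(res.data.createFlowRun.flow_run.id)
```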
[start of src/prefect/tasks/google/bigquery.py]
1 from typing import List
2
3 from google.cloud import bigquery
4 from google.cloud.exceptions import NotFound
5 from google.oauth2.service_account import Credentials
6
7 from prefect.client import Secret
8 from prefect.core import Task
9 from prefect.engine.signals import SUCCESS
10 from prefect.utilities.tasks import defaults_from_attrs
11
12
13 class BigQueryTask(Task):
14 """
15 Task for executing queries against a Google BigQuery table and (optionally) returning
16 the results. Note that _all_ initialization settings can be provided / overwritten at runtime.
17
18 Args:
19 - query (str, optional): a string of the query to execute
20 - query_params (list[tuple], optional): a list of 3-tuples specifying
21 BigQuery query parameters; currently only scalar query parameters are supported. See
22 [the Google documentation](https://cloud.google.com/bigquery/docs/parameterized-queries#bigquery-query-params-python)
23 for more details on how both the query and the query parameters should be formatted
24 - project (str, optional): the project to initialize the BigQuery Client with; if not provided,
25 will default to the one inferred from your credentials
26 - location (str, optional): location of the dataset that will be queried; defaults to "US"
27 - dry_run_max_bytes (int, optional): if provided, the maximum number of bytes the query is allowed
28 to process; this will be determined by executing a dry run and raising a `ValueError` if the
29 maximum is exceeded
30 - credentials_secret (str, optional): the name of the Prefect Secret containing a JSON representation
31 of your Google Application credentials; defaults to `"GOOGLE_APPLICATION_CREDENTIALS"`
32 - dataset_dest (str, optional): the optional name of a destination dataset to write the
33 query results to, if you don't want them returned; if provided, `table_dest` must also be
34 provided
35 - table_dest (str, optional): the optional name of a destination table to write the
36 query results to, if you don't want them returned; if provided, `dataset_dest` must also be
37 provided
38 - job_config (dict, optional): an optional dictionary of job configuration parameters; note that
39 the parameters provided here must be pickleable (e.g., dataset references will be rejected)
40 - **kwargs (optional): additional kwargs to pass to the `Task` constructor
41 """
42
43 def __init__(
44 self,
45 query: str = None,
46 query_params: List[tuple] = None, # 3-tuples
47 project: str = None,
48 location: str = "US",
49 dry_run_max_bytes: int = None,
50 credentials_secret: str = None,
51 dataset_dest: str = None,
52 table_dest: str = None,
53 job_config: dict = None,
54 **kwargs
55 ):
56 self.query = query
57 self.query_params = query_params
58 self.project = project
59 self.location = location
60 self.dry_run_max_bytes = dry_run_max_bytes
61 self.credentials_secret = credentials_secret or "GOOGLE_APPLICATION_CREDENTIALS"
62 self.dataset_dest = dataset_dest
63 self.table_dest = table_dest
64 self.job_config = job_config or {}
65 super().__init__(**kwargs)
66
67 @defaults_from_attrs(
68 "query",
69 "query_params",
70 "project",
71 "location",
72 "dry_run_max_bytes",
73 "credentials_secret",
74 "dataset_dest",
75 "table_dest",
76 "job_config",
77 )
78 def run(
79 self,
80 query: str = None,
81 query_params: List[tuple] = None,
82 project: str = None,
83 location: str = "US",
84 dry_run_max_bytes: int = None,
85 credentials_secret: str = None,
86 dataset_dest: str = None,
87 table_dest: str = None,
88 job_config: dict = None,
89 ):
90 """
91 Run method for this Task. Invoked by _calling_ this Task within a Flow context, after initialization.
92
93 Args:
94 - query (str, optional): a string of the query to execute
95 - query_params (list[tuple], optional): a list of 3-tuples specifying
96 BigQuery query parameters; currently only scalar query parameters are supported. See
97 [the Google documentation](https://cloud.google.com/bigquery/docs/parameterized-queries#bigquery-query-params-python)
98 for more details on how both the query and the query parameters should be formatted
99 - project (str, optional): the project to initialize the BigQuery Client with; if not provided,
100 will default to the one inferred from your credentials
101 - location (str, optional): location of the dataset that will be queried; defaults to "US"
102 - dry_run_max_bytes (int, optional): if provided, the maximum number of bytes the query is allowed
103 to process; this will be determined by executing a dry run and raising a `ValueError` if the
104 maximum is exceeded
105 - credentials_secret (str, optional): the name of the Prefect Secret containing a JSON representation
106 of your Google Application credentials; defaults to `"GOOGLE_APPLICATION_CREDENTIALS"`
107 - dataset_dest (str, optional): the optional name of a destination dataset to write the
108 query results to, if you don't want them returned; if provided, `table_dest` must also be
109 provided
110 - table_dest (str, optional): the optional name of a destination table to write the
111 query results to, if you don't want them returned; if provided, `dataset_dest` must also be
112 provided
113 - job_config (dict, optional): an optional dictionary of job configuration parameters; note that
114 the parameters provided here must be pickleable (e.g., dataset references will be rejected)
115
116 Raises:
117 - ValueError: if the `query` is `None`
118 - ValueError: if only one of `dataset_dest` / `table_dest` is provided
119             - ValueError: if the query will exceed `dry_run_max_bytes`
120
121 Returns:
122 - list: a fully populated list of Query results, with one item per row
123 """
124 ## check for any argument inconsistencies
125 if query is None:
126 raise ValueError("No query provided.")
127 if sum([dataset_dest is None, table_dest is None]) == 1:
128 raise ValueError(
129 "Both `dataset_dest` and `table_dest` must be provided if writing to a destination table."
130 )
131
132 ## create client
133 creds = Secret(credentials_secret).get()
134 credentials = Credentials.from_service_account_info(creds)
135 project = project or credentials.project_id
136 client = bigquery.Client(project=project, credentials=credentials)
137
138 ## setup jobconfig
139 job_config = bigquery.QueryJobConfig(**job_config)
140 if query_params is not None:
141 hydrated_params = [
142 bigquery.ScalarQueryParameter(*qp) for qp in query_params
143 ]
144 job_config.query_parameters = hydrated_params
145
146 ## perform dry_run if requested
147 if dry_run_max_bytes is not None:
148 old_info = dict(
149 dry_run=job_config.dry_run, use_query_cache=job_config.use_query_cache
150 )
151 job_config.dry_run = True
152 job_config.use_query_cache = False
153 self.logger.debug("Performing a dry run...")
154 query_job = client.query(query, location=location, job_config=job_config)
155 if query_job.total_bytes_processed > dry_run_max_bytes:
156 raise ValueError(
157 "Query will process {0} bytes which is above the set maximum of {1} for this task.".format(
158 query_job.total_bytes_processed, dry_run_max_bytes
159 )
160 )
161 job_config.dry_run = old_info["dry_run"]
162 job_config.use_query_cache = old_info["use_query_cache"]
163
164 ## if writing to a destination table
165 if dataset_dest is not None:
166 table_ref = client.dataset(dataset_dest).table(table_dest)
167 job_config.destination = table_ref
168
169 query_job = client.query(query, location=location, job_config=job_config)
170 return list(query_job.result())
171
172
173 class BigQueryStreamingInsert(Task):
174 """
175     Task for inserting records into a Google BigQuery table via [the streaming API](https://cloud.google.com/bigquery/streaming-data-into-bigquery).
176 Note that all of these settings can optionally be provided or overwritten at runtime.
177
178 Args:
179 - dataset_id (str, optional): the id of a destination dataset to write the
180 records to
181 - table (str, optional): the name of a destination table to write the
182 records to
183 - project (str, optional): the project to initialize the BigQuery Client with; if not provided,
184 will default to the one inferred from your credentials
185 - location (str, optional): location of the dataset that will be written to; defaults to "US"
186 - credentials_secret (str, optional): the name of the Prefect Secret containing a JSON representation
187 of your Google Application credentials; defaults to `"GOOGLE_APPLICATION_CREDENTIALS"`
188 - **kwargs (optional): additional kwargs to pass to the `Task` constructor
189 """
190
191 def __init__(
192 self,
193 dataset_id: str = None,
194 table: str = None,
195 project: str = None,
196 location: str = "US",
197 credentials_secret: str = None,
198 **kwargs
199 ):
200 self.dataset_id = dataset_id
201 self.table = table
202 self.project = project
203 self.location = location
204 self.credentials_secret = credentials_secret or "GOOGLE_APPLICATION_CREDENTIALS"
205 super().__init__(**kwargs)
206
207 @defaults_from_attrs(
208 "dataset_id", "table", "project", "location", "credentials_secret"
209 )
210 def run(
211 self,
212 records: List[dict],
213 dataset_id: str = None,
214 table: str = None,
215 project: str = None,
216 location: str = "US",
217 credentials_secret: str = None,
218 **kwargs
219 ):
220 """
221 Run method for this Task. Invoked by _calling_ this Task within a Flow context, after initialization.
222
223 Args:
224 - records (list[dict]): the list of records to insert as rows into
225 the BigQuery table; each item in the list should be a dictionary whose keys correspond
226 to columns in the table
227 - dataset_id (str, optional): the id of a destination dataset to write the
228 records to; if not provided here, will default to the one provided at initialization
229 - table (str, optional): the name of a destination table to write the
230 records to; if not provided here, will default to the one provided at initialization
231 - project (str, optional): the project to initialize the BigQuery Client with; if not provided,
232 will default to the one inferred from your credentials
233 - location (str, optional): location of the dataset that will be written to; defaults to "US"
234 - credentials_secret (str, optional): the name of the Prefect Secret containing a JSON representation
235 of your Google Application credentials; defaults to `"GOOGLE_APPLICATION_CREDENTIALS"`
236 - **kwargs (optional): additional kwargs to pass to the
237 `insert_rows_json` method; see the documentation here:
238 https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.client.Client.html
239
240 Raises:
241 - ValueError: if all required arguments haven't been provided
242 - ValueError: if any of the records result in errors
243
244 Returns:
245 - the response from `insert_rows_json`
246 """
247 ## check for any argument inconsistencies
248 if dataset_id is None or table is None:
249 raise ValueError("Both dataset_id and table must be provided.")
250
251 ## create client
252 creds = Secret(credentials_secret).get()
253 credentials = Credentials.from_service_account_info(creds)
254 project = project or credentials.project_id
255 client = bigquery.Client(project=project, credentials=credentials)
256
257 ## get table reference
258 table_ref = client.dataset(dataset_id).table(table)
259
260 ## stream data in
261 response = client.insert_rows_json(table=table_ref, json_rows=records, **kwargs)
262
263 errors = []
264 output = []
265 for row in response:
266 output.append(row)
267 if "errors" in row:
268 errors.append(row["errors"])
269
270 if errors:
271 raise ValueError(errors)
272
273 return output
274
275
276 class BigQueryLoadGoogleCloudStorage(Task):
277 """
278     Task for loading records into a Google BigQuery table via a [load job](https://cloud.google.com/bigquery/docs/loading-data).
279 Note that all of these settings can optionally be provided or overwritten at runtime.
280
281 Args:
282 - uri (str, optional): GCS path to load data from
283 - dataset_id (str, optional): the id of a destination dataset to write the
284 records to
285 - table (str, optional): the name of a destination table to write the
286 records to
287 - project (str, optional): the project to initialize the BigQuery Client with; if not provided,
288 will default to the one inferred from your credentials
289 - schema (List[bigquery.SchemaField], optional): the schema to use when creating the table
290 - location (str, optional): location of the dataset that will be queried; defaults to "US"
291 - credentials_secret (str, optional): the name of the Prefect Secret containing a JSON representation
292 of your Google Application credentials; defaults to `"GOOGLE_APPLICATION_CREDENTIALS"`
293 - **kwargs (optional): additional kwargs to pass to the `Task` constructor
294 """
295
296 def __init__(
297 self,
298 uri: str = None,
299 dataset_id: str = None,
300 table: str = None,
301 project: str = None,
302 schema: List[bigquery.SchemaField] = None,
303 location: str = "US",
304 credentials_secret: str = None,
305 **kwargs
306 ):
307 self.uri = uri
308 self.dataset_id = dataset_id
309 self.table = table
310 self.project = project
311 self.schema = schema
312 self.location = location
313 self.credentials_secret = credentials_secret or "GOOGLE_APPLICATION_CREDENTIALS"
314 super().__init__(**kwargs)
315
316 @defaults_from_attrs(
317 "uri", "dataset_id", "table", "project", "location", "credentials_secret"
318 )
319 def run(
320 self,
321 uri: str = None,
322 dataset_id: str = None,
323 table: str = None,
324 project: str = None,
325 schema: List[bigquery.SchemaField] = None,
326 location: str = "US",
327 credentials_secret: str = None,
328 **kwargs
329 ):
330 """
331 Run method for this Task. Invoked by _calling_ this Task within a Flow context, after initialization.
332
333 Args:
334 - uri (str, optional): GCS path to load data from
335 - dataset_id (str, optional): the id of a destination dataset to write the
336 records to; if not provided here, will default to the one provided at initialization
337 - table (str, optional): the name of a destination table to write the
338 records to; if not provided here, will default to the one provided at initialization
339 - project (str, optional): the project to initialize the BigQuery Client with; if not provided,
340 will default to the one inferred from your credentials
341 - schema (List[bigquery.SchemaField], optional): the schema to use when creating the table
342 - location (str, optional): location of the dataset that will be written to; defaults to "US"
343 - credentials_secret (str, optional): the name of the Prefect Secret containing a JSON representation
344 of your Google Application credentials; defaults to `"GOOGLE_APPLICATION_CREDENTIALS"`
345 - **kwargs (optional): additional kwargs to pass to the `bigquery.LoadJobConfig`;
346 see the documentation here:
347 https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.client.Client.html
348
349 Raises:
350 - ValueError: if all required arguments haven't been provided
351 - ValueError: if the load job results in an error
352
353 Returns:
354 - the response from `load_table_from_uri`
355 """
356 ## check for any argument inconsistencies
357 if dataset_id is None or table is None:
358 raise ValueError("Both dataset_id and table must be provided.")
359
360 ## create client
361 creds = Secret(credentials_secret).get()
362 project = project or credentials.project_id
363 client = bigquery.Client(project=project, credentials=credentials)
364
365 ## get table reference
366 table_ref = client.dataset(dataset_id).table(table)
367
368 ## load data
369 autodetect = kwargs.pop("autodetect", True)
370 job_config = bigquery.LoadJobConfig(autodetect=autodetect, **kwargs)
371 if schema:
372 job_config.schema = schema
373 load_job = client.load_table_from_uri(uri, table_ref, job_config=job_config)
374 result = load_job.result() # block until job is finished
375
376
377 class CreateBigQueryTable(Task):
378 """
379     Ensures a BigQuery table exists, creating it if it does not. Note that most initialization keywords
380 can optionally be provided at runtime.
381
382 Args:
383 - project (str, optional): the project to initialize the BigQuery Client with; if not provided,
384 will default to the one inferred from your credentials
385 - credentials_secret (str, optional): the name of the Prefect Secret containing a JSON representation
386 of your Google Application credentials; defaults to `"GOOGLE_APPLICATION_CREDENTIALS"`
387         - dataset (str, optional): the name of the dataset in which the table will be created
388 - table (str, optional): the name of a table to create
389 - schema (List[bigquery.SchemaField], optional): the schema to use when creating the table
390 - clustering_fields (List[str], optional): a list of fields to cluster the table by
391 - time_partitioning (bigquery.TimePartitioning, optional): a `bigquery.TimePartitioning` object specifying
392             a partitioning of the newly created table
393 - **kwargs (optional): additional kwargs to pass to the `Task` constructor
394 """
395
396 def __init__(
397 self,
398 project: str = None,
399 credentials_secret: str = None,
400 dataset: str = None,
401 table: str = None,
402 schema: List[bigquery.SchemaField] = None,
403 clustering_fields: List[str] = None,
404 time_partitioning: bigquery.TimePartitioning = None,
405 **kwargs
406 ):
407 self.project = project
408 self.credentials_secret = credentials_secret or "GOOGLE_APPLICATION_CREDENTIALS"
409 self.dataset = dataset
410 self.table = table
411 self.schema = schema
412 self.clustering_fields = clustering_fields
413 self.time_partitioning = time_partitioning
414 super().__init__(**kwargs)
415
416 @defaults_from_attrs("project", "credentials_secret", "dataset", "table", "schema")
417 def run(
418 self,
419 project: str = None,
420 credentials_secret: str = None,
421 dataset: str = None,
422 table: str = None,
423 schema: List[bigquery.SchemaField] = None,
424 ):
425 """
426 Run method for this Task. Invoked by _calling_ this Task within a Flow context, after initialization.
427
428 Args:
429 - project (str, optional): the project to initialize the BigQuery Client with; if not provided,
430 will default to the one inferred from your credentials
431 - credentials_secret (str, optional): the name of the Prefect Secret containing a JSON representation
432 of your Google Application credentials; defaults to `"GOOGLE_APPLICATION_CREDENTIALS"`
433             - dataset (str, optional): the name of the dataset in which the table will be created
434 - table (str, optional): the name of a table to create
435 - schema (List[bigquery.SchemaField], optional): the schema to use when creating the table
436
437 Returns:
438 - None
439
440 Raises:
441 - SUCCESS: a `SUCCESS` signal if the table already exists
442 """
443 creds = Secret(credentials_secret).get()
444 credentials = Credentials.from_service_account_info(creds)
445 project = project or credentials.project_id
446 client = bigquery.Client(project=project, credentials=credentials)
447
448 try:
449 dataset_ref = client.get_dataset(dataset)
450 except NotFound:
451 self.logger.debug("Dataset {} not found, creating...".format(dataset))
452 dataset_ref = client.create_dataset(dataset)
453
454 table_ref = dataset_ref.table(table)
455 try:
456 client.get_table(table_ref)
457 raise SUCCESS(
458 "{dataset}.{table} already exists.".format(dataset=dataset, table=table)
459 )
460 except NotFound:
461 self.logger.debug("Table {} not found, creating...".format(table))
462 table = bigquery.Table(table_ref, schema=schema)
463
464 # partitioning
465 if self.time_partitioning:
466 table.time_partitioning = self.time_partitioning
467
468 # cluster for optimal data sorting/access
469 if self.clustering_fields:
470 table.clustering_fields = self.clustering_fields
471 client.create_table(table)
472
[end of src/prefect/tasks/google/bigquery.py]
[start of src/prefect/tasks/postgres/postgres.py]
1 import psycopg2 as pg
2
3 from prefect import Task
4 from prefect.utilities.tasks import defaults_from_attrs
5
6
7 class PostgresExecute(Task):
8 """
9 Task for executing a query against a Postgres database.
10
11 Args:
12 - db_name (str): name of Postgres database
13 - user (str): user name used to authenticate
14 - password (str): password used to authenticate
15 - host (str): database host address
16 - port (int, optional): port used to connect to Postgres database, defaults to 5432 if not provided
17 - query (str, optional): query to execute against database
18         - data (tuple, optional): values to use in query, must be specified using placeholders in the query string
19 - commit (bool, optional): set to True to commit transaction, defaults to false
20 - **kwargs (dict, optional): additional keyword arguments to pass to the
21 Task constructor
22 """
23
24 def __init__(
25 self,
26 db_name: str,
27 user: str,
28 password: str,
29 host: str,
30 port: int = 5432,
31 query: str = None,
32 data: tuple = None,
33 commit: bool = False,
34 **kwargs
35 ):
36 self.db_name = db_name
37 self.user = user
38 self.password = password
39 self.host = host
40 self.port = port
41 self.query = query
42 self.data = data
43 self.commit = commit
44 super().__init__(**kwargs)
45
46 @defaults_from_attrs("query", "data", "commit")
47 def run(self, query: str = None, data: tuple = None, commit: bool = False):
48 """
49 Task run method. Executes a query against Postgres database.
50
51 Args:
52 - query (str, optional): query to execute against database
53 - data (tuple, optional): values to use in query, must be specified using
54                 placeholders in the query string
55 - commit (bool, optional): set to True to commit transaction, defaults to false
56
57 Returns:
58 - None
59
60 Raises:
61 - ValueError: if query parameter is None or a blank string
62 - DatabaseError: if exception occurs when executing the query
63 """
64 if not query:
65 raise ValueError("A query string must be provided")
66
67 ## connect to database, open cursor
68 ## allow psycopg2 to pass through any exceptions raised
69 conn = pg.connect(
70 dbname=self.db_name,
71 user=self.user,
72 password=self.password,
73 host=self.host,
74 port=self.port,
75 )
76
77 ## try to execute query
78 ## context manager automatically rolls back failed transactions
79 try:
80 with conn:
81 with conn.cursor() as cursor:
82 executed = cursor.execute(query=query, vars=data)
83 if commit:
84 conn.commit()
85
86 conn.close()
87 return executed
88
89 ## pass through error, and ensure connection is closed
90 except (Exception, pg.DatabaseError) as error:
91 conn.close()
92 raise error
93
94
95 class PostgresFetch(Task):
96 """
97 Task for fetching results of query from Postgres database.
98
99 Args:
100 - db_name (str): name of Postgres database
101 - user (str): user name used to authenticate
102 - password (str): password used to authenticate
103 - host (str): database host address
104 - port (int, optional): port used to connect to Postgres database, defaults to 5432 if not provided
105         - fetch (str, optional): one of "one", "many", or "all", used to determine how many results to fetch from the executed query
106 - fetch_count (int, optional): if fetch = 'many', determines the number of results to fetch, defaults to 10
107 - query (str, optional): query to execute against database
108         - data (tuple, optional): values to use in query, must be specified using placeholders in the query string
109 - commit (bool, optional): set to True to commit transaction, defaults to false
110 - **kwargs (dict, optional): additional keyword arguments to pass to the
111 Task constructor
112 """
113
114 def __init__(
115 self,
116 db_name: str,
117 user: str,
118 password: str,
119 host: str,
120 port: int = 5432,
121 fetch: str = "one",
122 fetch_count: int = 10,
123 query: str = None,
124 data: tuple = None,
125 commit: bool = False,
126 **kwargs
127 ):
128 self.db_name = db_name
129 self.user = user
130 self.password = password
131 self.host = host
132 self.port = port
133 self.fetch = fetch
134 self.fetch_count = fetch_count
135 self.query = query
136 self.data = data
137 self.commit = commit
138 super().__init__(**kwargs)
139
140 @defaults_from_attrs("fetch", "fetch_count", "query", "data", "commit")
141 def run(
142 self,
143 fetch: str = "one",
144 fetch_count: int = 10,
145 query: str = None,
146 data: tuple = None,
147 commit: bool = False,
148 ):
149 """
150 Task run method. Executes a query against Postgres database and fetches results.
151
152 Args:
153             - fetch (str, optional): one of "one", "many", or "all", used to determine how many results to fetch from the executed query
154 - fetch_count (int, optional): if fetch = 'many', determines the number of results to fetch, defaults to 10
155 - query (str, optional): query to execute against database
156             - data (tuple, optional): values to use in query, must be specified using placeholders in the query string
157 - commit (bool, optional): set to True to commit transaction, defaults to false
158
159 Returns:
160 - records (tuple or list of tuples): records from provided query
161
162 Raises:
163 - ValueError: if query parameter is None or a blank string
164 - DatabaseError: if exception occurs when executing the query
165 """
166 if not query:
167 raise ValueError("A query string must be provided")
168
169 if fetch not in {"one", "many", "all"}:
170 raise ValueError(
171 "The 'fetch' parameter must be one of the following - ('one', 'many', 'all')"
172 )
173
174 ## connect to database, open cursor
175 ## allow psycopg2 to pass through any exceptions raised
176 conn = pg.connect(
177 dbname=self.db_name,
178 user=self.user,
179 password=self.password,
180 host=self.host,
181 port=self.port,
182 )
183
184 ## try to execute query
185 ## context manager automatically rolls back failed transactions
186 try:
187 with conn:
188 with conn.cursor() as cursor:
189 cursor.execute(query=query, vars=data)
190
191 ## fetch results
192 if fetch == "all":
193 records = cursor.fetchall()
194 elif fetch == "many":
195 records = cursor.fetchmany(fetch_count)
196 else:
197 records = cursor.fetchone()
198
199 if commit:
200 conn.commit()
201
202 conn.close()
203 return records
204
205 ## pass through error, and ensure connection is closed
206 except (Exception, pg.DatabaseError) as error:
207 conn.close()
208 raise error
209
[end of src/prefect/tasks/postgres/postgres.py]
[start of src/prefect/tasks/snowflake/snowflake.py]
1 import snowflake.connector as sf
2
3 from prefect import Task
4 from prefect.utilities.tasks import defaults_from_attrs
5
6
7 class SnowflakeQuery(Task):
8 """
9 Task for executing a query against a snowflake database.
10
11 Args:
12 - account (str): snowflake account name, see snowflake connector
13 package documentation for details
14 - user (str): user name used to authenticate
15 - password (str): password used to authenticate
16 - database (str, optional): name of the default database to use
17         - schema (str, optional): name of the default schema to use
18 - role (str, optional): name of the default role to use
19 - warehouse (str, optional): name of the default warehouse to use
20 - query (str, optional): query to execute against database
21         - data (tuple, optional): values to use in query, must be specified using placeholders in the query string
22 - autocommit (bool, optional): set to True to autocommit, defaults to None, which
23 takes snowflake AUTOCOMMIT parameter
24 - **kwargs (dict, optional): additional keyword arguments to pass to the
25 Task constructor
26 """
27
28 def __init__(
29 self,
30 account: str,
31 user: str,
32 password: str,
33 database: str = None,
34 schema: str = None,
35 role: str = None,
36 warehouse: str = None,
37 query: str = None,
38 data: tuple = None,
39 autocommit: bool = None,
40 **kwargs
41 ):
42 self.account = account
43 self.user = user
44 self.password = password
45 self.database = database
46 self.schema = schema
47 self.role = role
48 self.warehouse = warehouse
49 self.query = query
50 self.data = data
51 self.autocommit = autocommit
52 super().__init__(**kwargs)
53
54 @defaults_from_attrs("query", "data", "autocommit")
55 def run(self, query: str = None, data: tuple = None, autocommit: bool = None):
56 """
57 Task run method. Executes a query against snowflake database.
58
59 Args:
60 - query (str, optional): query to execute against database
61 - data (tuple, optional): values to use in query, must be specified using
62                 placeholders in the query string
63 - autocommit (bool, optional): set to True to autocommit, defaults to None
64 which takes the snowflake AUTOCOMMIT parameter
65
66 Returns:
67 - None
68
69 Raises:
70 - ValueError: if query parameter is None or a blank string
71 - DatabaseError: if exception occurs when executing the query
72 """
73 if not query:
74 raise ValueError("A query string must be provided")
75
76 # build the connection parameter dictionary
77 # we will remove `None` values next
78 connect_params = {
79 "account": self.account,
80 "user": self.user,
81 "password": self.password,
82 "database": self.database,
83 "schema": self.schema,
84 "role": self.role,
85 "warehouse": self.warehouse,
86 "autocommit": self.autocommit,
87 }
88 # filter out unset values
89 connect_params = {
90 param: value
91 for (param, value) in connect_params.items()
92 if value is not None
93 }
94
95 ## connect to database, open cursor
96 conn = sf.connect(**connect_params)
97 ## try to execute query
98 ## context manager automatically rolls back failed transactions
99 try:
100 with conn:
101 with conn.cursor() as cursor:
102 executed = cursor.execute(query=query, params=data)
103
104 conn.close()
105 return executed
106
107 ## pass through error, and ensure connection is closed
108 except Exception as error:
109 conn.close()
110 raise error
111
[end of src/prefect/tasks/snowflake/snowflake.py]
[start of src/prefect/utilities/graphql.py]
1 import base64
2 import gzip
3 import json
4 import re
5 import textwrap
6 import uuid
7 from collections.abc import KeysView, ValuesView
8 from typing import Any, Union
9
10 from prefect.utilities.collections import DotDict, as_nested_dict
11
12
13 def lowercase_first_letter(s: str) -> str:
14 """
15 Given a string, returns that string with a lowercase first letter
16 """
17 if s:
18 return s[0].lower() + s[1:]
19 return s
20
21
22 class GraphQLResult(DotDict):
23 __protect_critical_keys__ = False
24
25 def __repr__(self) -> str:
26 try:
27 return json.dumps(as_nested_dict(self, dict), indent=4)
28 except TypeError:
29 return repr(self.to_dict())
30
31
32 class EnumValue:
33 """
34 When parsing GraphQL arguments, strings can be wrapped in this class to be rendered
35 as enum values, without quotation marks.
36
37 Args:
38 - value (str): the value that should be represented as an enum value
39
40 """
41
42 def __init__(self, value: str):
43 self.value = value
44
45 def __str__(self) -> str:
46 return self.value
47
48
49 class GQLObject:
50 """
51 Helper object for building GraphQL queries.
52 """
53
54 def __init__(self, name: str = None, _arguments: str = None):
55 self.__name = name or lowercase_first_letter(type(self).__name__)
56 self.__arguments = _arguments
57
58 def __call__(self, arguments: str) -> "GQLObject":
59 return type(self)(name=self.__name, _arguments=arguments)
60
61 def __repr__(self) -> str:
62 return '<GQL: "{name}">'.format(name=self.__name)
63
64 def __str__(self) -> str:
65 if self.__arguments:
66 return with_args(self.__name, self.__arguments)
67 return self.__name
68
69
70 def parse_graphql(document: Any) -> str:
71 """
72 Parses a document into a GraphQL-compliant query string.
73
74 Documents can be a mix of `strings`, `dicts`, `lists` (or other sequences), and
75 `GQLObjects`.
76
77 The parser attempts to maintain the form of the Python objects in the resulting GQL query.
78
79 For example:
80 ```
81     query = parse_graphql({
82         'query': {
83             'books(published: {gt: 1990})': {
84                 'title'
85             },
86             'authors': [
87                 'name',
88                 {'books': {
89                     'title'
90                 }}
91             ]
92         }
93     })
94 ```
95 results in:
96 ```
97 query {
98 books(published: {gt: 1990}) {
99 title
100 }
101 authors {
102 name
103 books {
104 title
105 }
106 }
107 }
108 ```
109
110     For convenience, if a dictionary value is True, the value is ignored and the key alone is used as
111 a field name
112
113 ```python
114 {'query':{
115 'books': {
116 'id': True,
117 'name': True,
118 'author': {
119 'id',
120 'name',
121 }
122 }
123 }}
124 ```
125
126 is equivalent to:
127
128 ```python
129 {'query':{
130 'books': [
131 'id',
132 'name',
133 {'author': {
134 'id',
135 'name',
136 }}
137 ]
138 }}
139 ```
140
141 Args:
142 - document (Any): A collection of Python objects complying with the general shape
143 of a GraphQL query. Generally, this will consist of (at least) a dictionary, but
144 also sequences and `GQLObjects`.
145
146 Returns:
147 - str: a GraphQL query compiled from the provided Python structures.
148
149 Raises:
150 - TypeError: if the user provided a `GQLObject` class, rather than an instance.
151 """
152 delimiter = " "
153 parsed = _parse_graphql_inner(document, delimiter=delimiter)
154 parsed = parsed.replace(delimiter + "}", "}")
155 parsed = textwrap.dedent(parsed).strip()
156 return parsed
157
158
159 def _parse_graphql_inner(document: Any, delimiter: str) -> str:
160 """
161     Inner loop function for `parse_graphql`.
162 """
163 if isinstance(document, (tuple, list, set, KeysView, ValuesView)):
164 return "\n".join(
165 [_parse_graphql_inner(item, delimiter=delimiter) for item in document]
166 )
167 elif isinstance(document, (dict, DotDict)):
168 result = []
169 for key, value in document.items():
170 if value is True:
171 result.append(key)
172 else:
173 result.append(
174 "{key} {{\n{value}\n}}".format(
175 key=key, value=_parse_graphql_inner(value, delimiter=delimiter)
176 )
177 )
178
179 return _parse_graphql_inner(result, delimiter=delimiter)
180 elif isinstance(document, type) and issubclass(document, GQLObject):
181 raise TypeError(
182 'It looks like you included a `GQLObject` class ("{name}") '
183 "in your document. Did you mean to use an instance of that type?".format(
184 name=document.__name__
185 )
186 )
187 else:
188 return str(document).replace("\n", "\n" + delimiter)
189
190
191 def parse_graphql_arguments(arguments: Any) -> str:
192 """
193 Parses a dictionary of GraphQL arguments, returning a GraphQL-compliant string
194 representation. If a string is passed, it is returned without modification.
195
196 This parser makes a few adjustments to the dictionary's usual string representation:
197 - `'` around keys are removed
198 - spaces added around curly braces
199         - leading and trailing braces are removed
200 - `True` becomes `true`, `False` becomes `false`, and `None` becomes `null`
201
202 Args:
203 - arguments (Any): an object (usually a dictionary) representing the GraphQL arguments
204
205 Returns:
206 - str: a string representing the parsed GraphQL arguments
207 """
208 parsed = _parse_arguments_inner(arguments)
209 # remove '{ ' and ' }' from front and end of parsed dict
210 if isinstance(arguments, (dict, DotDict)):
211 parsed = parsed[2:-2]
212 # remove '"' and '"' from front and end of parsed str
213 elif isinstance(arguments, str):
214 parsed = parsed[1:-1]
215 return parsed
216
217
218 def _parse_arguments_inner(arguments: Any) -> str:
219 if isinstance(arguments, (dict, DotDict)):
220 # empty dicts are valid GQL arguments
221 if len(arguments) == 0:
222 return "{}"
223
224 formatted = []
225 for key, value in arguments.items():
226 formatted.append(
227 "{key}: {value}".format(key=key, value=_parse_arguments_inner(value))
228 )
229 return "{ " + ", ".join(formatted) + " }"
230 elif isinstance(arguments, (list, tuple, set, KeysView, ValuesView)):
231 return "[" + ", ".join([_parse_arguments_inner(a) for a in arguments]) + "]"
232 elif isinstance(arguments, str):
233 return json.dumps(arguments)
234 elif arguments is True:
235 return "true"
236 elif arguments is False:
237 return "false"
238 elif arguments is None:
239 return "null"
240 elif isinstance(arguments, uuid.UUID):
241 return _parse_arguments_inner(str(arguments))
242 return str(arguments)
243
244
245 def with_args(field: Any, arguments: Any) -> str:
246 """
247 Given Python objects representing a field name and arguments, formats them as a single
248 GraphQL compatible string.
249
250 Example:
251
252 ```
253 query = parse_graphql({
254 'query': {
255 with_args("task", {"where": {"id": 3}}): {
256 "id"
257 }
258 }
259 })
260
261 assert query == '''
262 query {
263 task(where: {id: 3}) {
264 id
265 }
266 }
267 '''
268 ```
269
270 Args:
271 - field (Any): the GraphQL field that will be supplied with arguments
272 - arguments (Any): the arguments to be parsed and supplied to the field
273
274 Returns:
275 - str: the parsed field and arguments
276 """
277 parsed_field = parse_graphql(field)
278 parsed_arguments = parse_graphql_arguments(arguments)
279 return "{field}({arguments})".format(field=parsed_field, arguments=parsed_arguments)
280
281
282 def compress(input: Any) -> str:
283 """
284 Convenience function for compressing something before sending
285 it to Cloud. Converts to string, encodes, compresses,
286 encodes again using b64, and decodes.
287
288 Args:
289 - input (Any): the dictionary to be compressed
290
291 Returns:
292 - str: The string resulting from the compression
293 """
294 return base64.b64encode(gzip.compress(json.dumps(input).encode())).decode()
295
296
297 def decompress(string: str) -> Any:
298 """
299 Convenience function for decompressing a string that's been
300 compressed. Base64 decodes the string, decodes it,
301 decompresses it, and loads.
302
303 Args:
304 - string (str): the string to decompress
305
306 Returns:
307 - Any: The object resulting from the decompression
308 """
309 return json.loads(gzip.decompress(base64.b64decode(string)).decode())
310
[end of src/prefect/utilities/graphql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| PrefectHQ/prefect | e92d10977339e7cf230471804bf471db2f6ace7d | `auth login` CLI check needs token required query
## Description
`prefect auth login` runs a GraphQL query to verify that the provided token is valid. The current query is `query { hello }`, which does not require authentication. It needs to be updated to a query that does require authentication (any other query will do; we just need to find the smallest one).
## Expected Behavior
If the token is invalid, an error should be surfaced to the user.
## Reproduction
Query the API with `query { hello }` without a token and it will still work.
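A minimal sketch of the two cases, assuming the `Client` class from this repository and its `graphql` method; the `tenant` query below is only an example of a small query that should require a token:
```python
from prefect.client import Client

client = Client()  # no valid API token configured

# `hello` resolves even without credentials, so it cannot validate a token
client.graphql(query={"query": "hello"})

# a query that touches tenant data should instead raise an AuthorizationError
# when no valid token is present
client.graphql(query={"query": {"tenant": "id"}})
```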
## Environment
N/A
| 2019-08-21T17:00:45Z | <patch>
diff --git a/src/prefect/cli/auth.py b/src/prefect/cli/auth.py
--- a/src/prefect/cli/auth.py
+++ b/src/prefect/cli/auth.py
@@ -37,10 +37,11 @@ def login(token):
--token, -t TEXT A Prefect Cloud api token [required]
"""
- if config.cloud.auth_token:
+ if config.cloud.get("auth_token"):
click.confirm(
"Prefect Cloud API token already set in config. Do you want to override?",
default=True,
+ abort=True,
)
client = Client()
@@ -48,7 +49,7 @@ def login(token):
# Verify login obtained a valid api token
try:
- client.graphql(query={"query": "hello"})
+ client.graphql(query={"query": {"tenant": "id"}})
except AuthorizationError:
click.secho(
"Error attempting to use Prefect API token {}".format(token), fg="red"
</patch> | [] | [] | ||||
pandas-dev__pandas-34877 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: s3 reads from public buckets not working
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
```python
# Your code here
import pandas as pd
df = pd.read_csv("s3://nyc-tlc/trip data/yellow_tripdata_2019-01.csv")
```
<details>
<summary> Error stack trace </summary>
<pre>
Traceback (most recent call last):
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/s3.py", line 33, in get_file_and_filesystem
file = fs.open(_strip_schema(filepath_or_buffer), mode)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/fsspec/spec.py", line 775, in open
**kwargs
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 378, in _open
autocommit=autocommit, requester_pays=requester_pays)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 1097, in __init__
cache_type=cache_type)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/fsspec/spec.py", line 1065, in __init__
self.details = fs.info(path)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 530, in info
Key=key, **version_id_kw(version_id), **self.req_kw)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 200, in _call_s3
return method(**additional_kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 622, in _make_api_call
operation_model, request_dict, request_context)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 641, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 132, in _send_request
request = self.create_request(request_dict, operation_model)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in create_request
operation_name=operation_model.name)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/signers.py", line 160, in sign
auth.add_auth(request)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/parsers.py", line 431, in _read
filepath_or_buffer, encoding, compression
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/common.py", line 212, in get_filepath_or_buffer
filepath_or_buffer, encoding=encoding, compression=compression, mode=mode
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/s3.py", line 52, in get_filepath_or_buffer
file, _fs = get_file_and_filesystem(filepath_or_buffer, mode=mode)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/s3.py", line 42, in get_file_and_filesystem
file = fs.open(_strip_schema(filepath_or_buffer), mode)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/fsspec/spec.py", line 775, in open
**kwargs
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 378, in _open
autocommit=autocommit, requester_pays=requester_pays)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 1097, in __init__
cache_type=cache_type)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/fsspec/spec.py", line 1065, in __init__
self.details = fs.info(path)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 530, in info
Key=key, **version_id_kw(version_id), **self.req_kw)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 200, in _call_s3
return method(**additional_kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 622, in _make_api_call
operation_model, request_dict, request_context)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 641, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 132, in _send_request
request = self.create_request(request_dict, operation_model)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in create_request
operation_name=operation_model.name)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/signers.py", line 160, in sign
auth.add_auth(request)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
</pre>
</details>
#### Problem description
Reading directly from s3 public buckets (without manually configuring the `anon` parameter via s3fs) is broken with pandas 1.0.4 (worked with 1.0.3).
Looks like reading from public buckets requires `anon=True` when creating the filesystem. Commit 22cf0f5dfcfbddd5506fdaf260e485bff1b88ef1 seems to have introduced the issue, where `anon=False` is passed when the `NoCredentialsError` is encountered.
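A possible workaround sketch until this is fixed: open the object anonymously with s3fs and hand the file object to pandas (the bucket/key are just the ones from the example above):
```python
import pandas as pd
import s3fs

fs = s3fs.S3FileSystem(anon=True)  # anonymous access for a public bucket
with fs.open("nyc-tlc/trip data/yellow_tripdata_2019-01.csv") as f:
    df = pd.read_csv(f)
```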
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-55-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.4
numpy : 1.18.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.0.2
setuptools : 47.1.1.post20200604
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
pytest : None
pyxlsb : None
s3fs : 0.4.2
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [![PyPI Latest Release](https://img.shields.io/pypi/v/pandas.svg)](https://pypi.org/project/pandas/)
9 [![Conda Latest Release](https://anaconda.org/conda-forge/pandas/badges/version.svg)](https://anaconda.org/anaconda/pandas/)
10 [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3509134.svg)](https://doi.org/10.5281/zenodo.3509134)
11 [![Package Status](https://img.shields.io/pypi/status/pandas.svg)](https://pypi.org/project/pandas/)
12 [![License](https://img.shields.io/pypi/l/pandas.svg)](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [![Travis Build Status](https://travis-ci.org/pandas-dev/pandas.svg?branch=master)](https://travis-ci.org/pandas-dev/pandas)
14 [![Azure Build Status](https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master)](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [![Coverage](https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master)](https://codecov.io/gh/pandas-dev/pandas)
16 [![Downloads](https://anaconda.org/conda-forge/pandas/badges/downloads.svg)](https://pandas.pydata.org)
17 [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas)
18 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org)
19 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/compat/_optional.py]
1 import distutils.version
2 import importlib
3 import types
4 import warnings
5
6 # Update install.rst when updating versions!
7
8 VERSIONS = {
9 "bs4": "4.6.0",
10 "bottleneck": "1.2.1",
11 "fsspec": "0.7.4",
12 "fastparquet": "0.3.2",
13 "gcsfs": "0.6.0",
14 "lxml.etree": "3.8.0",
15 "matplotlib": "2.2.2",
16 "numexpr": "2.6.2",
17 "odfpy": "1.3.0",
18 "openpyxl": "2.5.7",
19 "pandas_gbq": "0.12.0",
20 "pyarrow": "0.13.0",
21 "pytables": "3.4.3",
22 "pytest": "5.0.1",
23 "pyxlsb": "1.0.6",
24 "s3fs": "0.4.0",
25 "scipy": "1.2.0",
26 "sqlalchemy": "1.1.4",
27 "tables": "3.4.3",
28 "tabulate": "0.8.3",
29 "xarray": "0.8.2",
30 "xlrd": "1.1.0",
31 "xlwt": "1.2.0",
32 "xlsxwriter": "0.9.8",
33 "numba": "0.46.0",
34 }
35
36
37 def _get_version(module: types.ModuleType) -> str:
38 version = getattr(module, "__version__", None)
39 if version is None:
40 # xlrd uses a capitalized attribute name
41 version = getattr(module, "__VERSION__", None)
42
43 if version is None:
44 raise ImportError(f"Can't determine version for {module.__name__}")
45 return version
46
47
48 def import_optional_dependency(
49 name: str, extra: str = "", raise_on_missing: bool = True, on_version: str = "raise"
50 ):
51 """
52 Import an optional dependency.
53
54 By default, if a dependency is missing an ImportError with a nice
55 message will be raised. If a dependency is present, but too old,
56 we raise.
57
58 Parameters
59 ----------
60 name : str
61 The module name. This should be top-level only, so that the
62 version may be checked.
63 extra : str
64 Additional text to include in the ImportError message.
65 raise_on_missing : bool, default True
66 Whether to raise if the optional dependency is not found.
67 When False and the module is not present, None is returned.
68 on_version : str {'raise', 'warn'}
69 What to do when a dependency's version is too old.
70
71 * raise : Raise an ImportError
72 * warn : Warn that the version is too old. Returns None
73 * ignore: Return the module, even if the version is too old.
74 It's expected that users validate the version locally when
75 using ``on_version="ignore"`` (see. ``io/html.py``)
76
77 Returns
78 -------
79 maybe_module : Optional[ModuleType]
80 The imported module, when found and the version is correct.
81 None is returned when the package is not found and `raise_on_missing`
82 is False, or when the package's version is too old and `on_version`
83 is ``'warn'``.
84 """
85 msg = (
86 f"Missing optional dependency '{name}'. {extra} "
87 f"Use pip or conda to install {name}."
88 )
89 try:
90 module = importlib.import_module(name)
91 except ImportError:
92 if raise_on_missing:
93 raise ImportError(msg) from None
94 else:
95 return None
96
97 minimum_version = VERSIONS.get(name)
98 if minimum_version:
99 version = _get_version(module)
100 if distutils.version.LooseVersion(version) < minimum_version:
101 assert on_version in {"warn", "raise", "ignore"}
102 msg = (
103 f"Pandas requires version '{minimum_version}' or newer of '{name}' "
104 f"(version '{version}' currently installed)."
105 )
106 if on_version == "warn":
107 warnings.warn(msg, UserWarning)
108 return None
109 elif on_version == "raise":
110 raise ImportError(msg)
111
112 return module
113
[end of pandas/compat/_optional.py]
[start of pandas/io/common.py]
1 """Common IO api utilities"""
2
3 import bz2
4 from collections import abc
5 import gzip
6 from io import BufferedIOBase, BytesIO, RawIOBase
7 import mmap
8 import os
9 import pathlib
10 from typing import (
11 IO,
12 TYPE_CHECKING,
13 Any,
14 AnyStr,
15 Dict,
16 List,
17 Mapping,
18 Optional,
19 Tuple,
20 Type,
21 Union,
22 )
23 from urllib.parse import (
24 urljoin,
25 urlparse as parse_url,
26 uses_netloc,
27 uses_params,
28 uses_relative,
29 )
30 import zipfile
31
32 from pandas._typing import FilePathOrBuffer
33 from pandas.compat import _get_lzma_file, _import_lzma
34 from pandas.compat._optional import import_optional_dependency
35
36 from pandas.core.dtypes.common import is_file_like
37
38 lzma = _import_lzma()
39
40
41 _VALID_URLS = set(uses_relative + uses_netloc + uses_params)
42 _VALID_URLS.discard("")
43
44
45 if TYPE_CHECKING:
46 from io import IOBase # noqa: F401
47
48
49 def is_url(url) -> bool:
50 """
51 Check to see if a URL has a valid protocol.
52
53 Parameters
54 ----------
55 url : str or unicode
56
57 Returns
58 -------
59 isurl : bool
60 If `url` has a valid protocol return True otherwise False.
61 """
62 if not isinstance(url, str):
63 return False
64 return parse_url(url).scheme in _VALID_URLS
65
66
67 def _expand_user(
68 filepath_or_buffer: FilePathOrBuffer[AnyStr],
69 ) -> FilePathOrBuffer[AnyStr]:
70 """
71 Return the argument with an initial component of ~ or ~user
72 replaced by that user's home directory.
73
74 Parameters
75 ----------
76 filepath_or_buffer : object to be converted if possible
77
78 Returns
79 -------
80 expanded_filepath_or_buffer : an expanded filepath or the
81 input if not expandable
82 """
83 if isinstance(filepath_or_buffer, str):
84 return os.path.expanduser(filepath_or_buffer)
85 return filepath_or_buffer
86
87
88 def validate_header_arg(header) -> None:
89 if isinstance(header, bool):
90 raise TypeError(
91 "Passing a bool to header is invalid. Use header=None for no header or "
92 "header=int or list-like of ints to specify "
93 "the row(s) making up the column names"
94 )
95
96
97 def stringify_path(
98 filepath_or_buffer: FilePathOrBuffer[AnyStr],
99 ) -> FilePathOrBuffer[AnyStr]:
100 """
101 Attempt to convert a path-like object to a string.
102
103 Parameters
104 ----------
105 filepath_or_buffer : object to be converted
106
107 Returns
108 -------
109 str_filepath_or_buffer : maybe a string version of the object
110
111 Notes
112 -----
113 Objects supporting the fspath protocol (python 3.6+) are coerced
114     according to their __fspath__ method.
115
116     For backwards compatibility with older Python versions, pathlib.Path and
117 py.path objects are specially coerced.
118
119 Any other object is passed through unchanged, which includes bytes,
120 strings, buffers, or anything else that's not even path-like.
121 """
122 if hasattr(filepath_or_buffer, "__fspath__"):
123 # https://github.com/python/mypy/issues/1424
124 return filepath_or_buffer.__fspath__() # type: ignore
125 elif isinstance(filepath_or_buffer, pathlib.Path):
126 return str(filepath_or_buffer)
127 return _expand_user(filepath_or_buffer)
128
129
130 def urlopen(*args, **kwargs):
131 """
132 Lazy-import wrapper for stdlib urlopen, as that imports a big chunk of
133 the stdlib.
134 """
135 import urllib.request
136
137 return urllib.request.urlopen(*args, **kwargs)
138
139
140 def is_fsspec_url(url: FilePathOrBuffer) -> bool:
141 """
142 Returns true if the given URL looks like
143 something fsspec can handle
144 """
145 return (
146 isinstance(url, str)
147 and "://" in url
148 and not url.startswith(("http://", "https://"))
149 )
150
151
152 def get_filepath_or_buffer(
153 filepath_or_buffer: FilePathOrBuffer,
154 encoding: Optional[str] = None,
155 compression: Optional[str] = None,
156 mode: Optional[str] = None,
157 storage_options: Optional[Dict[str, Any]] = None,
158 ):
159 """
160 If the filepath_or_buffer is a url, translate and return the buffer.
161 Otherwise passthrough.
162
163 Parameters
164 ----------
165 filepath_or_buffer : a url, filepath (str, py.path.local or pathlib.Path),
166 or buffer
167 compression : {{'gzip', 'bz2', 'zip', 'xz', None}}, optional
168 encoding : the encoding to use to decode bytes, default is 'utf-8'
169 mode : str, optional
170     storage_options : dict, optional
171         Passed on to fsspec, if using it; this is not yet accessed by the public API.
172
173 Returns
174 -------
175 Tuple[FilePathOrBuffer, str, str, bool]
176 Tuple containing the filepath or buffer, the encoding, the compression
177 and should_close.
178 """
179 filepath_or_buffer = stringify_path(filepath_or_buffer)
180
181 if isinstance(filepath_or_buffer, str) and is_url(filepath_or_buffer):
182 # TODO: fsspec can also handle HTTP via requests, but leaving this unchanged
183 req = urlopen(filepath_or_buffer)
184 content_encoding = req.headers.get("Content-Encoding", None)
185 if content_encoding == "gzip":
186 # Override compression based on Content-Encoding header
187 compression = "gzip"
188 reader = BytesIO(req.read())
189 req.close()
190 return reader, encoding, compression, True
191
192 if is_fsspec_url(filepath_or_buffer):
193 assert isinstance(
194 filepath_or_buffer, str
195 ) # just to appease mypy for this branch
196 # two special-case s3-like protocols; these have special meaning in Hadoop,
197 # but are equivalent to just "s3" from fsspec's point of view
198 # cc #11071
199 if filepath_or_buffer.startswith("s3a://"):
200 filepath_or_buffer = filepath_or_buffer.replace("s3a://", "s3://")
201 if filepath_or_buffer.startswith("s3n://"):
202 filepath_or_buffer = filepath_or_buffer.replace("s3n://", "s3://")
203 fsspec = import_optional_dependency("fsspec")
204
205 file_obj = fsspec.open(
206 filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
207 ).open()
208 return file_obj, encoding, compression, True
209
210 if isinstance(filepath_or_buffer, (str, bytes, mmap.mmap)):
211 return _expand_user(filepath_or_buffer), None, compression, False
212
213 if not is_file_like(filepath_or_buffer):
214 msg = f"Invalid file path or buffer object type: {type(filepath_or_buffer)}"
215 raise ValueError(msg)
216
217 return filepath_or_buffer, None, compression, False
218
219
220 def file_path_to_url(path: str) -> str:
221 """
222     Converts an absolute native path to a FILE URL.
223
224 Parameters
225 ----------
226 path : a path in native format
227
228 Returns
229 -------
230 a valid FILE URL
231 """
232 # lazify expensive import (~30ms)
233 from urllib.request import pathname2url
234
235 return urljoin("file:", pathname2url(path))
236
237
238 _compression_to_extension = {"gzip": ".gz", "bz2": ".bz2", "zip": ".zip", "xz": ".xz"}
239
240
241 def get_compression_method(
242 compression: Optional[Union[str, Mapping[str, str]]]
243 ) -> Tuple[Optional[str], Dict[str, str]]:
244 """
245 Simplifies a compression argument to a compression method string and
246 a mapping containing additional arguments.
247
248 Parameters
249 ----------
250 compression : str or mapping
251 If string, specifies the compression method. If mapping, value at key
252 'method' specifies compression method.
253
254 Returns
255 -------
256     Tuple[Optional[str], Dict[str, str]]
257         The compression method and a mapping of additional compression arguments.
258
259 Raises
260 ------
261 ValueError on mapping missing 'method' key
262 """
263 if isinstance(compression, Mapping):
264 compression_args = dict(compression)
265 try:
266 compression = compression_args.pop("method")
267 except KeyError as err:
268 raise ValueError("If mapping, compression must have key 'method'") from err
269 else:
270 compression_args = {}
271 return compression, compression_args
272
273
274 def infer_compression(
275 filepath_or_buffer: FilePathOrBuffer, compression: Optional[str]
276 ) -> Optional[str]:
277 """
278 Get the compression method for filepath_or_buffer. If compression='infer',
279 the inferred compression method is returned. Otherwise, the input
280 compression method is returned unchanged, unless it's invalid, in which
281 case an error is raised.
282
283 Parameters
284 ----------
285 filepath_or_buffer : str or file handle
286 File path or object.
287 compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}
288 If 'infer' and `filepath_or_buffer` is path-like, then detect
289 compression from the following extensions: '.gz', '.bz2', '.zip',
290 or '.xz' (otherwise no compression).
291
292 Returns
293 -------
294 string or None
295
296 Raises
297 ------
298 ValueError on invalid compression specified.
299 """
300 # No compression has been explicitly specified
301 if compression is None:
302 return None
303
304 # Infer compression
305 if compression == "infer":
306 # Convert all path types (e.g. pathlib.Path) to strings
307 filepath_or_buffer = stringify_path(filepath_or_buffer)
308 if not isinstance(filepath_or_buffer, str):
309 # Cannot infer compression of a buffer, assume no compression
310 return None
311
312 # Infer compression from the filename/URL extension
313 for compression, extension in _compression_to_extension.items():
314 if filepath_or_buffer.endswith(extension):
315 return compression
316 return None
317
318 # Compression has been specified. Check that it's valid
319 if compression in _compression_to_extension:
320 return compression
321
322 msg = f"Unrecognized compression type: {compression}"
323 valid = ["infer", None] + sorted(_compression_to_extension)
324 msg += f"\nValid compression types are {valid}"
325 raise ValueError(msg)
326
327
328 def get_handle(
329 path_or_buf,
330 mode: str,
331 encoding=None,
332 compression: Optional[Union[str, Mapping[str, Any]]] = None,
333 memory_map: bool = False,
334 is_text: bool = True,
335 errors=None,
336 ):
337 """
338 Get file handle for given path/buffer and mode.
339
340 Parameters
341 ----------
342 path_or_buf : str or file handle
343 File path or object.
344 mode : str
345 Mode to open path_or_buf with.
346 encoding : str or None
347 Encoding to use.
348 compression : str or dict, default None
349 If string, specifies compression mode. If dict, value at key 'method'
350 specifies compression mode. Compression mode must be one of {'infer',
351 'gzip', 'bz2', 'zip', 'xz', None}. If compression mode is 'infer'
352 and `filepath_or_buffer` is path-like, then detect compression from
353 the following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise
354 no compression). If dict and compression mode is one of
355 {'zip', 'gzip', 'bz2'}, or inferred as one of the above,
356 other entries passed as additional compression options.
357
358 .. versionchanged:: 1.0.0
359
360 May now be a dict with key 'method' as compression mode
361 and other keys as compression options if compression
362 mode is 'zip'.
363
364 .. versionchanged:: 1.1.0
365
366 Passing compression options as keys in dict is now
367 supported for compression modes 'gzip' and 'bz2' as well as 'zip'.
368
369 memory_map : boolean, default False
370 See parsers._parser_params for more information.
371 is_text : boolean, default True
372 whether file/buffer is in text format (csv, json, etc.), or in binary
373 mode (pickle, etc.).
374 errors : str, default 'strict'
375 Specifies how encoding and decoding errors are to be handled.
376 See the errors argument for :func:`open` for a full list
377 of options.
378
379 .. versionadded:: 1.1.0
380
381 Returns
382 -------
383 f : file-like
384 A file-like object.
385 handles : list of file-like objects
386         A list of file-like objects that were opened in this function.
387 """
388 need_text_wrapping: Tuple[Type["IOBase"], ...]
389 try:
390 from s3fs import S3File
391
392 need_text_wrapping = (BufferedIOBase, RawIOBase, S3File)
393 except ImportError:
394 need_text_wrapping = (BufferedIOBase, RawIOBase)
395
396 handles: List[IO] = list()
397 f = path_or_buf
398
399 # Convert pathlib.Path/py.path.local or string
400 path_or_buf = stringify_path(path_or_buf)
401 is_path = isinstance(path_or_buf, str)
402
403 compression, compression_args = get_compression_method(compression)
404 if is_path:
405 compression = infer_compression(path_or_buf, compression)
406
407 if compression:
408
409 # GH33398 the type ignores here seem related to mypy issue #5382;
410 # it may be possible to remove them once that is resolved.
411
412 # GZ Compression
413 if compression == "gzip":
414 if is_path:
415 f = gzip.open(
416 path_or_buf, mode, **compression_args # type: ignore
417 )
418 else:
419 f = gzip.GzipFile(
420 fileobj=path_or_buf, **compression_args # type: ignore
421 )
422
423 # BZ Compression
424 elif compression == "bz2":
425 if is_path:
426 f = bz2.BZ2File(
427 path_or_buf, mode, **compression_args # type: ignore
428 )
429 else:
430 f = bz2.BZ2File(path_or_buf, **compression_args) # type: ignore
431
432 # ZIP Compression
433 elif compression == "zip":
434 zf = _BytesZipFile(path_or_buf, mode, **compression_args)
435 # Ensure the container is closed as well.
436 handles.append(zf)
437 if zf.mode == "w":
438 f = zf
439 elif zf.mode == "r":
440 zip_names = zf.namelist()
441 if len(zip_names) == 1:
442 f = zf.open(zip_names.pop())
443 elif len(zip_names) == 0:
444 raise ValueError(f"Zero files found in ZIP file {path_or_buf}")
445 else:
446 raise ValueError(
447 "Multiple files found in ZIP file. "
448 f"Only one file per ZIP: {zip_names}"
449 )
450
451 # XZ Compression
452 elif compression == "xz":
453 f = _get_lzma_file(lzma)(path_or_buf, mode)
454
455 # Unrecognized Compression
456 else:
457 msg = f"Unrecognized compression type: {compression}"
458 raise ValueError(msg)
459
460 handles.append(f)
461
462 elif is_path:
463 if encoding:
464 # Encoding
465 f = open(path_or_buf, mode, encoding=encoding, errors=errors, newline="")
466 elif is_text:
467 # No explicit encoding
468 f = open(path_or_buf, mode, errors="replace", newline="")
469 else:
470 # Binary mode
471 f = open(path_or_buf, mode)
472 handles.append(f)
473
474 # Convert BytesIO or file objects passed with an encoding
475 if is_text and (compression or isinstance(f, need_text_wrapping)):
476 from io import TextIOWrapper
477
478 g = TextIOWrapper(f, encoding=encoding, errors=errors, newline="")
479 if not isinstance(f, (BufferedIOBase, RawIOBase)):
480 handles.append(g)
481 f = g
482
483 if memory_map and hasattr(f, "fileno"):
484 try:
485 wrapped = _MMapWrapper(f)
486 f.close()
487 f = wrapped
488 except Exception:
489             # we catch any errors that may have occurred
490             # because that is consistent with the lower-level
491             # functionality of the C engine (pd.read_csv), so
492             # we leave the file handle as-is in that case
493 pass
494
495 return f, handles
496
497
498 class _BytesZipFile(zipfile.ZipFile, BytesIO): # type: ignore
499 """
500     Wrapper for the standard library class ZipFile that allows the returned
501     file-like handle to accept byte strings via its `write` method.
502 
503     BytesIO provides the attributes of a file-like object, and ZipFile.writestr
504     writes byte strings into a member of the archive.
505 """
506
507 # GH 17778
508 def __init__(
509 self,
510 file: FilePathOrBuffer,
511 mode: str,
512 archive_name: Optional[str] = None,
513 **kwargs,
514 ):
515 if mode in ["wb", "rb"]:
516 mode = mode.replace("b", "")
517 self.archive_name = archive_name
518 super().__init__(file, mode, zipfile.ZIP_DEFLATED, **kwargs)
519
520 def write(self, data):
521 archive_name = self.filename
522 if self.archive_name is not None:
523 archive_name = self.archive_name
524 super().writestr(archive_name, data)
525
526 @property
527 def closed(self):
528 return self.fp is None
529
530
531 class _MMapWrapper(abc.Iterator):
532 """
533     Wrapper for Python's mmap class so that it can be properly read in
534 by Python's csv.reader class.
535
536 Parameters
537 ----------
538 f : file object
539 File object to be mapped onto memory. Must support the 'fileno'
540 method or have an equivalent attribute
541
542 """
543
544 def __init__(self, f: IO):
545 self.mmap = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
546
547 def __getattr__(self, name: str):
548 return getattr(self.mmap, name)
549
550 def __iter__(self) -> "_MMapWrapper":
551 return self
552
553 def __next__(self) -> str:
554 newbytes = self.mmap.readline()
555
556 # readline returns bytes, not str, but Python's CSV reader
557 # expects str, so convert the output to str before continuing
558 newline = newbytes.decode("utf-8")
559
560 # mmap doesn't raise if reading past the allocated
561 # data but instead returns an empty string, so raise
562 # if that is returned
563 if newline == "":
564 raise StopIteration
565 return newline
566
[end of pandas/io/common.py]
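A hedged sketch of how these helpers fit together; `data.csv.gz` is a placeholder path, and the caller is responsible for closing both the returned file object and every handle `get_handle` opened along the way.

from pandas.io.common import get_compression_method, get_handle, infer_compression

# 'infer' resolves the compression from the file extension; non-path buffers
# fall back to None.
assert infer_compression("data.csv.gz", "infer") == "gzip"

# A mapping is split into the method and any extra compression options.
method, extra_args = get_compression_method({"method": "zip", "archive_name": "out.csv"})
assert method == "zip" and extra_args == {"archive_name": "out.csv"}

# get_handle returns the (possibly text-wrapped) file object plus the handles
# it opened, all of which should be closed when done.
f, handles = get_handle("data.csv.gz", "r", compression="infer")
try:
    first_line = f.readline()
finally:
    f.close()
    for handle in handles:
        handle.close()
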
[start of scripts/generate_pip_deps_from_conda.py]
1 #!/usr/bin/env python3
2 """
3 Convert the conda environment.yml to the pip requirements-dev.txt,
4 or check that they have the same packages (for the CI)
5
6 Usage:
7
8 Generate `requirements-dev.txt`
9 $ ./conda_to_pip
10
11 Compare and fail (exit status != 0) if `requirements-dev.txt` has not been
12 generated with this script:
13 $ ./conda_to_pip --compare
14 """
15 import argparse
16 import os
17 import re
18 import sys
19
20 import yaml
21
22 EXCLUDE = {"python"}
23 RENAME = {"pytables": "tables", "pyqt": "pyqt5", "dask-core": "dask"}
24
25
26 def conda_package_to_pip(package):
27 """
28 Convert a conda package to its pip equivalent.
29
30     In most cases they are the same; these are the exceptions:
31 - Packages that should be excluded (in `EXCLUDE`)
32 - Packages that should be renamed (in `RENAME`)
33 - A package requiring a specific version, in conda is defined with a single
34 equal (e.g. ``pandas=1.0``) and in pip with two (e.g. ``pandas==1.0``)
35 """
36 package = re.sub("(?<=[^<>])=", "==", package).strip()
37
38 for compare in ("<=", ">=", "=="):
39 if compare not in package:
40 continue
41
42 pkg, version = package.split(compare)
43 if pkg in EXCLUDE:
44 return
45
46 if pkg in RENAME:
47 return "".join((RENAME[pkg], compare, version))
48
49 break
50
51 if package in RENAME:
52 return RENAME[package]
53
54 return package
55
56
57 def main(conda_fname, pip_fname, compare=False):
58 """
59 Generate the pip dependencies file from the conda file, or compare that
60 they are synchronized (``compare=True``).
61
62 Parameters
63 ----------
64 conda_fname : str
65 Path to the conda file with dependencies (e.g. `environment.yml`).
66 pip_fname : str
67 Path to the pip file with dependencies (e.g. `requirements-dev.txt`).
68 compare : bool, default False
69 Whether to generate the pip file (``False``) or to compare if the
70 pip file has been generated with this script and the last version
71 of the conda file (``True``).
72
73 Returns
74 -------
75 bool
76 True if the comparison fails, False otherwise
77 """
78 with open(conda_fname) as conda_fd:
79 deps = yaml.safe_load(conda_fd)["dependencies"]
80
81 pip_deps = []
82 for dep in deps:
83 if isinstance(dep, str):
84 conda_dep = conda_package_to_pip(dep)
85 if conda_dep:
86 pip_deps.append(conda_dep)
87 elif isinstance(dep, dict) and len(dep) == 1 and "pip" in dep:
88 pip_deps += dep["pip"]
89 else:
90 raise ValueError(f"Unexpected dependency {dep}")
91
92 fname = os.path.split(conda_fname)[1]
93 header = (
94 f"# This file is auto-generated from {fname}, do not modify.\n"
95 "# See that file for comments about the need/usage of each dependency.\n\n"
96 )
97 pip_content = header + "\n".join(pip_deps)
98
99 if compare:
100 with open(pip_fname) as pip_fd:
101 return pip_content != pip_fd.read()
102 else:
103 with open(pip_fname, "w") as pip_fd:
104 pip_fd.write(pip_content)
105 return False
106
107
108 if __name__ == "__main__":
109 argparser = argparse.ArgumentParser(
110 description="convert (or compare) conda file to pip"
111 )
112 argparser.add_argument(
113 "--compare",
114 action="store_true",
115 help="compare whether the two files are equivalent",
116 )
117 argparser.add_argument(
118 "--azure", action="store_true", help="show the output in azure-pipelines format"
119 )
120 args = argparser.parse_args()
121
122 repo_path = os.path.dirname(os.path.abspath(os.path.dirname(__file__)))
123 res = main(
124 os.path.join(repo_path, "environment.yml"),
125 os.path.join(repo_path, "requirements-dev.txt"),
126 compare=args.compare,
127 )
128 if res:
129 msg = (
130 f"`requirements-dev.txt` has to be generated with `{sys.argv[0]}` after "
131 "`environment.yml` is modified.\n"
132 )
133 if args.azure:
134 msg = (
135 f"##vso[task.logissue type=error;sourcepath=requirements-dev.txt]{msg}"
136 )
137 sys.stderr.write(msg)
138 sys.exit(res)
139
[end of scripts/generate_pip_deps_from_conda.py]
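To make the conversion rules concrete, a small sketch; it assumes the scripts directory is importable (for example when running from the repository root), otherwise the function can be exercised by copying it locally.

from scripts.generate_pip_deps_from_conda import conda_package_to_pip

# A single '=' version pin in conda becomes '==' for pip.
assert conda_package_to_pip("pandas=1.0") == "pandas==1.0"

# Renamed packages keep their version specifier.
assert conda_package_to_pip("pytables>=3.4.4") == "tables>=3.4.4"

# Excluded packages (e.g. python itself) yield None and are dropped.
assert conda_package_to_pip("python=3.8") is None
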
[start of scripts/validate_docstrings.py]
1 #!/usr/bin/env python3
2 """
3 Analyze docstrings to detect errors.
4
5 If no argument is provided, it does a quick check of docstrings and returns
6 a csv with all API functions and results of basic checks.
7
8 If a function or method is provided in the form "pandas.function",
9 "pandas.module.class.method", etc., a list of all errors in the docstring of
10 the specified function or method is reported.
11
12 Usage::
13 $ ./validate_docstrings.py
14 $ ./validate_docstrings.py pandas.DataFrame.head
15 """
16 import argparse
17 import doctest
18 import glob
19 import importlib
20 import json
21 import os
22 import sys
23 import tempfile
24 from typing import List, Optional
25
26 import flake8.main.application
27
28 try:
29 from io import StringIO
30 except ImportError:
31 from cStringIO import StringIO
32
33 # The Template backend makes matplotlib not plot anything. This is useful
34 # to avoid plot windows being opened from the doctests while running the
35 # script. This must be set before matplotlib is loaded.
36 # We don't warn for the number of open plots, as none is actually being opened
37 os.environ["MPLBACKEND"] = "Template"
38 import matplotlib # noqa: E402 isort:skip
39
40 matplotlib.rc("figure", max_open_warning=10000)
41
42 import numpy # noqa: E402 isort:skip
43
44 BASE_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
45
46 sys.path.insert(0, os.path.join(BASE_PATH))
47 import pandas # noqa: E402 isort:skip
48
49 sys.path.insert(1, os.path.join(BASE_PATH, "doc", "sphinxext"))
50 from numpydoc.validate import validate, Docstring # noqa: E402 isort:skip
51
52
53 PRIVATE_CLASSES = ["NDFrame", "IndexOpsMixin"]
54 ERROR_MSGS = {
55 "GL04": "Private classes ({mentioned_private_classes}) should not be "
56 "mentioned in public docstrings",
57 "SA05": "{reference_name} in `See Also` section does not need `pandas` "
58 "prefix, use {right_reference} instead.",
59 "EX02": "Examples do not pass tests:\n{doctest_log}",
60 "EX03": "flake8 error: {error_code} {error_message}{times_happening}",
61 "EX04": "Do not import {imported_library}, as it is imported "
62 "automatically for the examples (numpy as np, pandas as pd)",
63 }
64
65
66 def pandas_error(code, **kwargs):
67 """
68 Copy of the numpydoc error function, since ERROR_MSGS can't be updated
69 with our custom errors yet.
70 """
71 return (code, ERROR_MSGS[code].format(**kwargs))
72
73
74 def get_api_items(api_doc_fd):
75 """
76 Yield information about all public API items.
77
78     Parse the api.rst file from the documentation, and extract all the functions,
79 methods, classes, attributes... This should include all pandas public API.
80
81 Parameters
82 ----------
83 api_doc_fd : file descriptor
84 A file descriptor of the API documentation page, containing the table
85 of contents with all the public API.
86
87 Yields
88 ------
89 name : str
90         The name of the object (e.g. 'pandas.Series.str.upper').
91 func : function
92 The object itself. In most cases this will be a function or method,
93 but it can also be classes, properties, cython objects...
94 section : str
95 The name of the section in the API page where the object item is
96 located.
97 subsection : str
98 The name of the subsection in the API page where the object item is
99 located.
100 """
101 current_module = "pandas"
102 previous_line = current_section = current_subsection = ""
103 position = None
104 for line in api_doc_fd:
105 line = line.strip()
106 if len(line) == len(previous_line):
107 if set(line) == set("-"):
108 current_section = previous_line
109 continue
110 if set(line) == set("~"):
111 current_subsection = previous_line
112 continue
113
114 if line.startswith(".. currentmodule::"):
115 current_module = line.replace(".. currentmodule::", "").strip()
116 continue
117
118 if line == ".. autosummary::":
119 position = "autosummary"
120 continue
121
122 if position == "autosummary":
123 if line == "":
124 position = "items"
125 continue
126
127 if position == "items":
128 if line == "":
129 position = None
130 continue
131 item = line.strip()
132 func = importlib.import_module(current_module)
133 for part in item.split("."):
134 func = getattr(func, part)
135
136 yield (
137 ".".join([current_module, item]),
138 func,
139 current_section,
140 current_subsection,
141 )
142
143 previous_line = line
144
145
146 class PandasDocstring(Docstring):
147 @property
148 def mentioned_private_classes(self):
149 return [klass for klass in PRIVATE_CLASSES if klass in self.raw_doc]
150
151 @property
152 def examples_errors(self):
153 flags = doctest.NORMALIZE_WHITESPACE | doctest.IGNORE_EXCEPTION_DETAIL
154 finder = doctest.DocTestFinder()
155 runner = doctest.DocTestRunner(optionflags=flags)
156 context = {"np": numpy, "pd": pandas}
157 error_msgs = ""
158 for test in finder.find(self.raw_doc, self.name, globs=context):
159 f = StringIO()
160 runner.run(test, out=f.write)
161 error_msgs += f.getvalue()
162 return error_msgs
163
164 @property
165 def examples_source_code(self):
166 lines = doctest.DocTestParser().get_examples(self.raw_doc)
167 return [line.source for line in lines]
168
169 def validate_pep8(self):
170 if not self.examples:
171 return
172
173 # F401 is needed to not generate flake8 errors in examples
174         # that do not use numpy or pandas
175 content = "".join(
176 (
177 "import numpy as np # noqa: F401\n",
178 "import pandas as pd # noqa: F401\n",
179 *self.examples_source_code,
180 )
181 )
182
183 application = flake8.main.application.Application()
184 application.initialize(["--quiet"])
185
186 with tempfile.NamedTemporaryFile(mode="w", encoding="utf-8") as file:
187 file.write(content)
188 file.flush()
189 application.run_checks([file.name])
190
191 # We need this to avoid flake8 printing the names of the files to
192 # the standard output
193 application.formatter.write = lambda line, source: None
194 application.report()
195
196 yield from application.guide.stats.statistics_for("")
197
198
199 def pandas_validate(func_name: str):
200 """
201 Call the numpydoc validation, and add the errors specific to pandas.
202
203 Parameters
204 ----------
205 func_name : str
206 Name of the object of the docstring to validate.
207
208 Returns
209 -------
210 dict
211 Information about the docstring and the errors found.
212 """
213 doc = PandasDocstring(func_name)
214 result = validate(func_name)
215
216 mentioned_errs = doc.mentioned_private_classes
217 if mentioned_errs:
218 result["errors"].append(
219 pandas_error("GL04", mentioned_private_classes=", ".join(mentioned_errs))
220 )
221
222 if doc.see_also:
223 for rel_name, rel_desc in doc.see_also.items():
224 if rel_name.startswith("pandas."):
225 result["errors"].append(
226 pandas_error(
227 "SA05",
228 reference_name=rel_name,
229 right_reference=rel_name[len("pandas.") :],
230 )
231 )
232
233 result["examples_errs"] = ""
234 if doc.examples:
235 result["examples_errs"] = doc.examples_errors
236 if result["examples_errs"]:
237 result["errors"].append(
238 pandas_error("EX02", doctest_log=result["examples_errs"])
239 )
240 for err in doc.validate_pep8():
241 result["errors"].append(
242 pandas_error(
243 "EX03",
244 error_code=err.error_code,
245 error_message=err.message,
246 times_happening=f" ({err.count} times)" if err.count > 1 else "",
247 )
248 )
249 examples_source_code = "".join(doc.examples_source_code)
250 for wrong_import in ("numpy", "pandas"):
251 if f"import {wrong_import}" in examples_source_code:
252 result["errors"].append(
253 pandas_error("EX04", imported_library=wrong_import)
254 )
255
256 return result
257
258
259 def validate_all(prefix, ignore_deprecated=False):
260 """
261 Execute the validation of all docstrings, and return a dict with the
262 results.
263
264 Parameters
265 ----------
266 prefix : str or None
267 If provided, only the docstrings that start with this pattern will be
268 validated. If None, all docstrings will be validated.
269 ignore_deprecated: bool, default False
270 If True, deprecated objects are ignored when validating docstrings.
271
272 Returns
273 -------
274 dict
275 A dictionary with an item for every function/method... containing
276 all the validation information.
277 """
278 result = {}
279 seen = {}
280
281 api_doc_fnames = os.path.join(BASE_PATH, "doc", "source", "reference", "*.rst")
282 api_items = []
283 for api_doc_fname in glob.glob(api_doc_fnames):
284 with open(api_doc_fname) as f:
285 api_items += list(get_api_items(f))
286
287 for func_name, func_obj, section, subsection in api_items:
288 if prefix and not func_name.startswith(prefix):
289 continue
290 doc_info = pandas_validate(func_name)
291 if ignore_deprecated and doc_info["deprecated"]:
292 continue
293 result[func_name] = doc_info
294
295 shared_code_key = doc_info["file"], doc_info["file_line"]
296 shared_code = seen.get(shared_code_key, "")
297 result[func_name].update(
298 {
299 "in_api": True,
300 "section": section,
301 "subsection": subsection,
302 "shared_code_with": shared_code,
303 }
304 )
305
306 seen[shared_code_key] = func_name
307
308 return result
309
310
311 def print_validate_all_results(
312 prefix: str,
313 errors: Optional[List[str]],
314 output_format: str,
315 ignore_deprecated: bool,
316 ):
317 if output_format not in ("default", "json", "actions"):
318 raise ValueError(f'Unknown output_format "{output_format}"')
319
320 result = validate_all(prefix, ignore_deprecated)
321
322 if output_format == "json":
323 sys.stdout.write(json.dumps(result))
324 return 0
325
326 prefix = "##[error]" if output_format == "actions" else ""
327 exit_status = 0
328 for name, res in result.items():
329 for err_code, err_desc in res["errors"]:
330 if errors and err_code not in errors:
331 continue
332 sys.stdout.write(
333 f'{prefix}{res["file"]}:{res["file_line"]}:'
334 f"{err_code}:{name}:{err_desc}\n"
335 )
336 exit_status += 1
337
338 return exit_status
339
340
341 def print_validate_one_results(func_name: str):
342 def header(title, width=80, char="#"):
343 full_line = char * width
344 side_len = (width - len(title) - 2) // 2
345 adj = "" if len(title) % 2 == 0 else " "
346 title_line = f"{char * side_len} {title}{adj} {char * side_len}"
347
348 return f"\n{full_line}\n{title_line}\n{full_line}\n\n"
349
350 result = pandas_validate(func_name)
351
352 sys.stderr.write(header(f"Docstring ({func_name})"))
353 sys.stderr.write(f"{result['docstring']}\n")
354
355 sys.stderr.write(header("Validation"))
356 if result["errors"]:
357 sys.stderr.write(f'{len(result["errors"])} Errors found:\n')
358 for err_code, err_desc in result["errors"]:
359 if err_code == "EX02": # Failing examples are printed at the end
360 sys.stderr.write("\tExamples do not pass tests\n")
361 continue
362 sys.stderr.write(f"\t{err_desc}\n")
363 else:
364 sys.stderr.write(f'Docstring for "{func_name}" correct. :)\n')
365
366 if result["examples_errs"]:
367 sys.stderr.write(header("Doctests"))
368 sys.stderr.write(result["examples_errs"])
369
370
371 def main(func_name, prefix, errors, output_format, ignore_deprecated):
372 """
373 Main entry point. Call the validation for one or for all docstrings.
374 """
375 if func_name is None:
376 return print_validate_all_results(
377 prefix, errors, output_format, ignore_deprecated
378 )
379 else:
380 print_validate_one_results(func_name)
381 return 0
382
383
384 if __name__ == "__main__":
385 format_opts = "default", "json", "actions"
386 func_help = (
387         "function or method to validate (e.g. pandas.DataFrame.head); "
388 "if not provided, all docstrings are validated and returned "
389 "as JSON"
390 )
391 argparser = argparse.ArgumentParser(description="validate pandas docstrings")
392 argparser.add_argument("function", nargs="?", default=None, help=func_help)
393 argparser.add_argument(
394 "--format",
395 default="default",
396 choices=format_opts,
397 help="format of the output when validating "
398 "multiple docstrings (ignored when validating one). "
399         f"It can be {str(format_opts)[1:-1]}",
400 )
401 argparser.add_argument(
402 "--prefix",
403 default=None,
404 help="pattern for the "
405 "docstring names, in order to decide which ones "
406 'will be validated. A prefix "pandas.Series.str."'
407 "will make the script validate all the docstrings "
408         "of methods starting with this pattern. It is "
409 "ignored if parameter function is provided",
410 )
411 argparser.add_argument(
412 "--errors",
413 default=None,
414 help="comma separated "
415 "list of error codes to validate. By default it "
416 "validates all errors (ignored when validating "
417 "a single docstring)",
418 )
419 argparser.add_argument(
420 "--ignore_deprecated",
421 default=False,
422 action="store_true",
423 help="if this flag is set, "
424 "deprecated objects are ignored when validating "
425 "all docstrings",
426 )
427
428 args = argparser.parse_args()
429 sys.exit(
430 main(
431 args.function,
432 args.prefix,
433 args.errors.split(",") if args.errors else None,
434 args.format,
435 args.ignore_deprecated,
436 )
437 )
438
[end of scripts/validate_docstrings.py]
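A brief sketch of consuming the validation results programmatically rather than through the command line; it assumes the script's dependencies (numpydoc, flake8, matplotlib) are installed and that the repository root is importable.

from scripts.validate_docstrings import pandas_validate

# Validate a single docstring and print the error codes and descriptions.
result = pandas_validate("pandas.DataFrame.head")
for err_code, err_desc in result["errors"]:
    print(f"{err_code}: {err_desc}")

# Doctest failures from the Examples section are collected separately.
if result["examples_errs"]:
    print(result["examples_errs"])
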
[start of scripts/validate_unwanted_patterns.py]
1 #!/usr/bin/env python3
2 """
3 Unwanted patterns test cases.
4
5 The reason this file exists despite the fact that we already have
6 `ci/code_checks.sh`
7 (see https://github.com/pandas-dev/pandas/blob/master/ci/code_checks.sh)
8 
9 is that some of the test cases are more complex or impossible to validate via regex.
10 So this file is somewhat an extension of `ci/code_checks.sh`.
11 """
12
13 import argparse
14 import ast
15 import os
16 import sys
17 import token
18 import tokenize
19 from typing import IO, Callable, FrozenSet, Iterable, List, Tuple
20
21 PATHS_TO_IGNORE: Tuple[str, ...] = ("asv_bench/env",)
22
23
24 def _get_literal_string_prefix_len(token_string: str) -> int:
25 """
26 Getting the length of the literal string prefix.
27
28 Parameters
29 ----------
30 token_string : str
31 String to check.
32
33 Returns
34 -------
35 int
36 Length of the literal string prefix.
37
38 Examples
39 --------
40 >>> example_string = "'Hello world'"
41 >>> _get_literal_string_prefix_len(example_string)
42 0
43 >>> example_string = "r'Hello world'"
44 >>> _get_literal_string_prefix_len(example_string)
45 1
46 """
47 try:
48 return min(
49 token_string.find(quote)
50 for quote in (r"'", r'"')
51 if token_string.find(quote) >= 0
52 )
53 except ValueError:
54 return 0
55
56
57 def bare_pytest_raises(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
58 """
59 Test Case for bare pytest raises.
60
61 For example, this is wrong:
62
63     >>> with pytest.raises(ValueError):
64 ... # Some code that raises ValueError
65
66 And this is what we want instead:
67
68     >>> with pytest.raises(ValueError, match="foo"):
69 ... # Some code that raises ValueError
70
71 Parameters
72 ----------
73 file_obj : IO
74 File-like object containing the Python code to validate.
75
76 Yields
77 ------
78 line_number : int
79         Line number where a bare pytest.raises was found.
80 msg : str
81         Explanation of the error.
82
83 Notes
84 -----
85 GH #23922
86 """
87 contents = file_obj.read()
88 tree = ast.parse(contents)
89
90 for node in ast.walk(tree):
91 if not isinstance(node, ast.Call):
92 continue
93
94 try:
95 if not (node.func.value.id == "pytest" and node.func.attr == "raises"):
96 continue
97 except AttributeError:
98 continue
99
100 if not node.keywords:
101 yield (
102 node.lineno,
103                 "Bare pytest raises have been found. "
104                 "Please pass in the argument 'match' as well as the exception.",
105 )
106 else:
107 # Means that there are arguments that are being passed in,
108 # now we validate that `match` is one of the passed in arguments
109 if not any(keyword.arg == "match" for keyword in node.keywords):
110 yield (
111 node.lineno,
112                     "Bare pytest raises have been found. "
113                     "Please pass in the argument 'match' as well as the exception.",
114 )
115
116
117 def strings_to_concatenate(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
118 """
119     This test case is necessary because 'Black' (https://github.com/psf/black)
120     reformats strings that were split over multiple lines.
121
122 For example, when this:
123
124 >>> foo = (
125 ... "bar "
126 ... "baz"
127 ... )
128
129 Is becoming this:
130
131 >>> foo = ("bar " "baz")
132
133 'Black' is not considering this as an
134 issue (see https://github.com/psf/black/issues/1051),
135 so we are checking it here instead.
136
137 Parameters
138 ----------
139 file_obj : IO
140 File-like object containing the Python code to validate.
141
142 Yields
143 ------
144 line_number : int
145 Line number of unconcatenated string.
146 msg : str
147         Explanation of the error.
148
149 Notes
150 -----
151 GH #30454
152 """
153 tokens: List = list(tokenize.generate_tokens(file_obj.readline))
154
155 for current_token, next_token in zip(tokens, tokens[1:]):
156 if current_token.type == next_token.type == token.STRING:
157 yield (
158 current_token.start[0],
159 (
160 "String unnecessarily split in two by black. "
161 "Please merge them manually."
162 ),
163 )
164
165
166 def strings_with_wrong_placed_whitespace(
167 file_obj: IO[str],
168 ) -> Iterable[Tuple[int, str]]:
169 """
170     Test case for leading spaces in concatenated strings.
171
172 For example:
173
174 >>> rule = (
175 ... "We want the space at the end of the line, "
176 ... "not at the beginning"
177 ... )
178
179 Instead of:
180
181 >>> rule = (
182 ... "We want the space at the end of the line,"
183 ... " not at the beginning"
184 ... )
185
186 Parameters
187 ----------
188 file_obj : IO
189 File-like object containing the Python code to validate.
190
191 Yields
192 ------
193 line_number : int
194         Line number of the string with misplaced whitespace.
195 msg : str
196         Explanation of the error.
197 """
198
199 def has_wrong_whitespace(first_line: str, second_line: str) -> bool:
200 """
201         Checking if the two lines are matching the unwanted pattern.
202
203 Parameters
204 ----------
205 first_line : str
206 First line to check.
207 second_line : str
208 Second line to check.
209
210 Returns
211 -------
212 bool
213             True if the two received strings match the unwanted pattern.
214
215 Notes
216 -----
217         The unwanted pattern that we are trying to catch is a space placed at
218         the beginning of a continuation string instead of at the end of the
219         previous string, unless the previous string ends with a
220         newline character (\n).
221
222 For example, this is bad:
223
224 >>> rule = (
225 ... "We want the space at the end of the line,"
226 ... " not at the beginning"
227 ... )
228
229 And what we want is:
230
231 >>> rule = (
232 ... "We want the space at the end of the line, "
233 ... "not at the beginning"
234 ... )
235
236 And if the string is ending with a new line character (\n) we
237 do not want any trailing whitespaces after it.
238
239 For example, this is bad:
240
241 >>> rule = (
242         ...     "We want the space at the beginning of "
243 ... "the line if the previous line is ending with a \n "
244 ... "not at the end, like always"
245 ... )
246
247 And what we do want is:
248
249 >>> rule = (
250         ...     "We want the space at the beginning of "
251 ... "the line if the previous line is ending with a \n"
252 ... " not at the end, like always"
253 ... )
254 """
255 if first_line.endswith(r"\n"):
256 return False
257 elif first_line.startswith(" ") or second_line.startswith(" "):
258 return False
259 elif first_line.endswith(" ") or second_line.endswith(" "):
260 return False
261 elif (not first_line.endswith(" ")) and second_line.startswith(" "):
262 return True
263 return False
264
265 tokens: List = list(tokenize.generate_tokens(file_obj.readline))
266
267 for first_token, second_token, third_token in zip(tokens, tokens[1:], tokens[2:]):
268         # Checking if we are in a block of concatenated strings
269 if (
270 first_token.type == third_token.type == token.STRING
271 and second_token.type == token.NL
272 ):
273             # Stripping the quotes and the string literal prefix
274 first_string: str = first_token.string[
275 _get_literal_string_prefix_len(first_token.string) + 1 : -1
276 ]
277 second_string: str = third_token.string[
278 _get_literal_string_prefix_len(third_token.string) + 1 : -1
279 ]
280
281 if has_wrong_whitespace(first_string, second_string):
282 yield (
283 third_token.start[0],
284 (
285 "String has a space at the beginning instead "
286 "of the end of the previous string."
287 ),
288 )
289
290
291 def main(
292 function: Callable[[IO[str]], Iterable[Tuple[int, str]]],
293 source_path: str,
294 output_format: str,
295 file_extensions_to_check: str,
296 ) -> bool:
297 """
298 Main entry point of the script.
299
300 Parameters
301 ----------
302 function : Callable
303 Function to execute for the specified validation type.
304 source_path : str
305 Source path representing path to a file/directory.
306 output_format : str
307 Output format of the error message.
308
309 Returns
310 -------
311 bool
312         True if any patterns related to the given function are found.
313
314 Raises
315 ------
316 ValueError
317 If the `source_path` is not pointing to existing file/directory.
318 """
319 if not os.path.exists(source_path):
320 raise ValueError("Please enter a valid path, pointing to a file/directory.")
321
322 is_failed: bool = False
323 file_path: str = ""
324
325 FILE_EXTENSIONS_TO_CHECK: FrozenSet[str] = frozenset(
326 file_extensions_to_check.split(",")
327 )
328
329 if os.path.isfile(source_path):
330 file_path = source_path
331 with open(file_path, "r") as file_obj:
332 for line_number, msg in function(file_obj):
333 is_failed = True
334 print(
335 output_format.format(
336 source_path=file_path, line_number=line_number, msg=msg
337 )
338 )
339
340 for subdir, _, files in os.walk(source_path):
341 if any(path in subdir for path in PATHS_TO_IGNORE):
342 continue
343 for file_name in files:
344 if not any(
345 file_name.endswith(extension) for extension in FILE_EXTENSIONS_TO_CHECK
346 ):
347 continue
348
349 file_path = os.path.join(subdir, file_name)
350 with open(file_path, "r") as file_obj:
351 for line_number, msg in function(file_obj):
352 is_failed = True
353 print(
354 output_format.format(
355 source_path=file_path, line_number=line_number, msg=msg
356 )
357 )
358
359 return is_failed
360
361
362 if __name__ == "__main__":
363 available_validation_types: List[str] = [
364 "bare_pytest_raises",
365 "strings_to_concatenate",
366 "strings_with_wrong_placed_whitespace",
367 ]
368
369 parser = argparse.ArgumentParser(description="Unwanted patterns checker.")
370
371 parser.add_argument(
372 "path", nargs="?", default=".", help="Source path of file/directory to check."
373 )
374 parser.add_argument(
375 "--format",
376 "-f",
377 default="{source_path}:{line_number}:{msg}",
378 help="Output format of the error message.",
379 )
380 parser.add_argument(
381 "--validation-type",
382 "-vt",
383 choices=available_validation_types,
384 required=True,
385 help="Validation test case to check.",
386 )
387 parser.add_argument(
388 "--included-file-extensions",
389 default="py,pyx,pxd,pxi",
390         help="Comma separated file extensions to check.",
391 )
392
393 args = parser.parse_args()
394
395 sys.exit(
396 main(
397 function=globals().get(args.validation_type), # type: ignore
398 source_path=args.path,
399 output_format=args.format,
400 file_extensions_to_check=args.included_file_extensions,
401 )
402 )
403
[end of scripts/validate_unwanted_patterns.py]
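A minimal sketch of driving one of the checks above on an in-memory snippet instead of a file on disk; the source string is a made-up example of what black can leave behind.

import io

from scripts.validate_unwanted_patterns import strings_to_concatenate

# Two adjacent string literals on one line are exactly what this check flags (GH #30454).
source = 'greeting = "Hello, " "world!"\n'
for line_number, msg in strings_to_concatenate(io.StringIO(source)):
    print(f"line {line_number}: {msg}")
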
[start of setup.py]
1 #!/usr/bin/env python3
2
3 """
4 Parts of this file were taken from the pyzmq project
5 (https://github.com/zeromq/pyzmq) which have been permitted for use under the
6 BSD license. Parts are from lxml (https://github.com/lxml/lxml)
7 """
8
9 import argparse
10 from distutils.sysconfig import get_config_vars
11 from distutils.version import LooseVersion
12 import multiprocessing
13 import os
14 from os.path import join as pjoin
15 import platform
16 import shutil
17 import sys
18
19 import pkg_resources
20 from setuptools import Command, find_packages, setup
21
22 # versioning
23 import versioneer
24
25 cmdclass = versioneer.get_cmdclass()
26
27
28 def is_platform_windows():
29 return sys.platform == "win32" or sys.platform == "cygwin"
30
31
32 def is_platform_mac():
33 return sys.platform == "darwin"
34
35
36 min_numpy_ver = "1.15.4"
37 min_cython_ver = "0.29.16" # note: sync with pyproject.toml
38
39 try:
40 import Cython
41
42 _CYTHON_VERSION = Cython.__version__
43 from Cython.Build import cythonize
44
45 _CYTHON_INSTALLED = _CYTHON_VERSION >= LooseVersion(min_cython_ver)
46 except ImportError:
47 _CYTHON_VERSION = None
48 _CYTHON_INSTALLED = False
49 cythonize = lambda x, *args, **kwargs: x # dummy func
50
51 # The import of Extension must be after the import of Cython, otherwise
52 # we do not get the appropriately patched class.
53 # See https://cython.readthedocs.io/en/latest/src/userguide/source_files_and_compilation.html # noqa
54 from distutils.extension import Extension # noqa: E402 isort:skip
55 from distutils.command.build import build # noqa: E402 isort:skip
56
57 if _CYTHON_INSTALLED:
58 from Cython.Distutils.old_build_ext import old_build_ext as _build_ext
59
60 cython = True
61 from Cython import Tempita as tempita
62 else:
63 from distutils.command.build_ext import build_ext as _build_ext
64
65 cython = False
66
67
68 _pxi_dep_template = {
69 "algos": ["_libs/algos_common_helper.pxi.in", "_libs/algos_take_helper.pxi.in"],
70 "hashtable": [
71 "_libs/hashtable_class_helper.pxi.in",
72 "_libs/hashtable_func_helper.pxi.in",
73 ],
74 "index": ["_libs/index_class_helper.pxi.in"],
75 "sparse": ["_libs/sparse_op_helper.pxi.in"],
76 "interval": ["_libs/intervaltree.pxi.in"],
77 }
78
79 _pxifiles = []
80 _pxi_dep = {}
81 for module, files in _pxi_dep_template.items():
82 pxi_files = [pjoin("pandas", x) for x in files]
83 _pxifiles.extend(pxi_files)
84 _pxi_dep[module] = pxi_files
85
86
87 class build_ext(_build_ext):
88 @classmethod
89 def render_templates(cls, pxifiles):
90 for pxifile in pxifiles:
91 # build pxifiles first, template extension must be .pxi.in
92 assert pxifile.endswith(".pxi.in")
93 outfile = pxifile[:-3]
94
95 if (
96 os.path.exists(outfile)
97 and os.stat(pxifile).st_mtime < os.stat(outfile).st_mtime
98 ):
99 # if .pxi.in is not updated, no need to output .pxi
100 continue
101
102 with open(pxifile, "r") as f:
103 tmpl = f.read()
104 pyxcontent = tempita.sub(tmpl)
105
106 with open(outfile, "w") as f:
107 f.write(pyxcontent)
108
109 def build_extensions(self):
110 # if building from c files, don't need to
111 # generate template output
112 if cython:
113 self.render_templates(_pxifiles)
114
115 super().build_extensions()
116
117
118 DESCRIPTION = "Powerful data structures for data analysis, time series, and statistics"
119 LONG_DESCRIPTION = """
120 **pandas** is a Python package that provides fast, flexible, and expressive data
121 structures designed to make working with structured (tabular, multidimensional,
122 potentially heterogeneous) and time series data both easy and intuitive. It
123 aims to be the fundamental high-level building block for doing practical,
124 **real world** data analysis in Python. Additionally, it has the broader goal
125 of becoming **the most powerful and flexible open source data analysis /
126 manipulation tool available in any language**. It is already well on its way
127 toward this goal.
128
129 pandas is well suited for many different kinds of data:
130
131 - Tabular data with heterogeneously-typed columns, as in an SQL table or
132 Excel spreadsheet
133 - Ordered and unordered (not necessarily fixed-frequency) time series data.
134 - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
135 column labels
136 - Any other form of observational / statistical data sets. The data actually
137 need not be labeled at all to be placed into a pandas data structure
138
139 The two primary data structures of pandas, Series (1-dimensional) and DataFrame
140 (2-dimensional), handle the vast majority of typical use cases in finance,
141 statistics, social science, and many areas of engineering. For R users,
142 DataFrame provides everything that R's ``data.frame`` provides and much
143 more. pandas is built on top of `NumPy <https://www.numpy.org>`__ and is
144 intended to integrate well within a scientific computing environment with many
145 other 3rd party libraries.
146
147 Here are just a few of the things that pandas does well:
148
149 - Easy handling of **missing data** (represented as NaN) in floating point as
150 well as non-floating point data
151 - Size mutability: columns can be **inserted and deleted** from DataFrame and
152 higher dimensional objects
153 - Automatic and explicit **data alignment**: objects can be explicitly
154 aligned to a set of labels, or the user can simply ignore the labels and
155 let `Series`, `DataFrame`, etc. automatically align the data for you in
156 computations
157 - Powerful, flexible **group by** functionality to perform
158 split-apply-combine operations on data sets, for both aggregating and
159 transforming data
160 - Make it **easy to convert** ragged, differently-indexed data in other
161 Python and NumPy data structures into DataFrame objects
162 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
163 of large data sets
164 - Intuitive **merging** and **joining** data sets
165 - Flexible **reshaping** and pivoting of data sets
166 - **Hierarchical** labeling of axes (possible to have multiple labels per
167 tick)
168 - Robust IO tools for loading data from **flat files** (CSV and delimited),
169 Excel files, databases, and saving / loading data from the ultrafast **HDF5
170 format**
171 - **Time series**-specific functionality: date range generation and frequency
172 conversion, moving window statistics, date shifting and lagging.
173
174 Many of these principles are here to address the shortcomings frequently
175 experienced using other languages / scientific research environments. For data
176 scientists, working with data is typically divided into multiple stages:
177 munging and cleaning data, analyzing / modeling it, then organizing the results
178 of the analysis into a form suitable for plotting or tabular display. pandas is
179 the ideal tool for all of these tasks.
180 """
181
182 DISTNAME = "pandas"
183 LICENSE = "BSD"
184 AUTHOR = "The PyData Development Team"
185 EMAIL = "pydata@googlegroups.com"
186 URL = "https://pandas.pydata.org"
187 DOWNLOAD_URL = ""
188 PROJECT_URLS = {
189 "Bug Tracker": "https://github.com/pandas-dev/pandas/issues",
190 "Documentation": "https://pandas.pydata.org/pandas-docs/stable/",
191 "Source Code": "https://github.com/pandas-dev/pandas",
192 }
193 CLASSIFIERS = [
194 "Development Status :: 5 - Production/Stable",
195 "Environment :: Console",
196 "Operating System :: OS Independent",
197 "Intended Audience :: Science/Research",
198 "Programming Language :: Python",
199 "Programming Language :: Python :: 3",
200 "Programming Language :: Python :: 3.6",
201 "Programming Language :: Python :: 3.7",
202 "Programming Language :: Python :: 3.8",
203 "Programming Language :: Cython",
204 "Topic :: Scientific/Engineering",
205 ]
206
207
208 class CleanCommand(Command):
209 """Custom distutils command to clean the .so and .pyc files."""
210
211 user_options = [("all", "a", "")]
212
213 def initialize_options(self):
214 self.all = True
215 self._clean_me = []
216 self._clean_trees = []
217
218 base = pjoin("pandas", "_libs", "src")
219 tsbase = pjoin("pandas", "_libs", "tslibs", "src")
220 dt = pjoin(tsbase, "datetime")
221 util = pjoin("pandas", "util")
222 parser = pjoin(base, "parser")
223 ujson_python = pjoin(base, "ujson", "python")
224 ujson_lib = pjoin(base, "ujson", "lib")
225 self._clean_exclude = [
226 pjoin(dt, "np_datetime.c"),
227 pjoin(dt, "np_datetime_strings.c"),
228 pjoin(parser, "tokenizer.c"),
229 pjoin(parser, "io.c"),
230 pjoin(ujson_python, "ujson.c"),
231 pjoin(ujson_python, "objToJSON.c"),
232 pjoin(ujson_python, "JSONtoObj.c"),
233 pjoin(ujson_python, "date_conversions.c"),
234 pjoin(ujson_lib, "ultrajsonenc.c"),
235 pjoin(ujson_lib, "ultrajsondec.c"),
236 pjoin(util, "move.c"),
237 ]
238
239 for root, dirs, files in os.walk("pandas"):
240 for f in files:
241 filepath = pjoin(root, f)
242 if filepath in self._clean_exclude:
243 continue
244
245 if os.path.splitext(f)[-1] in (
246 ".pyc",
247 ".so",
248 ".o",
249 ".pyo",
250 ".pyd",
251 ".c",
252 ".cpp",
253 ".orig",
254 ):
255 self._clean_me.append(filepath)
256 for d in dirs:
257 if d == "__pycache__":
258 self._clean_trees.append(pjoin(root, d))
259
260 # clean the generated pxi files
261 for pxifile in _pxifiles:
262 pxifile = pxifile.replace(".pxi.in", ".pxi")
263 self._clean_me.append(pxifile)
264
265 for d in ("build", "dist"):
266 if os.path.exists(d):
267 self._clean_trees.append(d)
268
269 def finalize_options(self):
270 pass
271
272 def run(self):
273 for clean_me in self._clean_me:
274 try:
275 os.unlink(clean_me)
276 except OSError:
277 pass
278 for clean_tree in self._clean_trees:
279 try:
280 shutil.rmtree(clean_tree)
281 except OSError:
282 pass
283
284
285 # we need to inherit from the versioneer
286 # class as it encodes the version info
287 sdist_class = cmdclass["sdist"]
288
289
290 class CheckSDist(sdist_class):
291 """Custom sdist that ensures Cython has compiled all pyx files to c."""
292
293 _pyxfiles = [
294 "pandas/_libs/lib.pyx",
295 "pandas/_libs/hashtable.pyx",
296 "pandas/_libs/tslib.pyx",
297 "pandas/_libs/index.pyx",
298 "pandas/_libs/internals.pyx",
299 "pandas/_libs/algos.pyx",
300 "pandas/_libs/join.pyx",
301 "pandas/_libs/indexing.pyx",
302 "pandas/_libs/interval.pyx",
303 "pandas/_libs/hashing.pyx",
304 "pandas/_libs/missing.pyx",
305 "pandas/_libs/reduction.pyx",
306 "pandas/_libs/testing.pyx",
307 "pandas/_libs/sparse.pyx",
308 "pandas/_libs/ops.pyx",
309 "pandas/_libs/parsers.pyx",
310 "pandas/_libs/tslibs/base.pyx",
311 "pandas/_libs/tslibs/ccalendar.pyx",
312 "pandas/_libs/tslibs/dtypes.pyx",
313 "pandas/_libs/tslibs/period.pyx",
314 "pandas/_libs/tslibs/strptime.pyx",
315 "pandas/_libs/tslibs/np_datetime.pyx",
316 "pandas/_libs/tslibs/timedeltas.pyx",
317 "pandas/_libs/tslibs/timestamps.pyx",
318 "pandas/_libs/tslibs/timezones.pyx",
319 "pandas/_libs/tslibs/conversion.pyx",
320 "pandas/_libs/tslibs/fields.pyx",
321 "pandas/_libs/tslibs/offsets.pyx",
322 "pandas/_libs/tslibs/parsing.pyx",
323 "pandas/_libs/tslibs/tzconversion.pyx",
324 "pandas/_libs/tslibs/vectorized.pyx",
325 "pandas/_libs/window/indexers.pyx",
326 "pandas/_libs/writers.pyx",
327 "pandas/io/sas/sas.pyx",
328 ]
329
330 _cpp_pyxfiles = [
331 "pandas/_libs/window/aggregations.pyx",
332 ]
333
334 def initialize_options(self):
335 sdist_class.initialize_options(self)
336
337 def run(self):
338 if "cython" in cmdclass:
339 self.run_command("cython")
340 else:
341             # If we are not running cython, verify that the
342             # pre-generated C/C++ sources for the extensions exist
343 pyx_files = [(self._pyxfiles, "c"), (self._cpp_pyxfiles, "cpp")]
344
345 for pyxfiles, extension in pyx_files:
346 for pyxfile in pyxfiles:
347 sourcefile = pyxfile[:-3] + extension
348 msg = (
349 f"{extension}-source file '{sourcefile}' not found.\n"
350 "Run 'setup.py cython' before sdist."
351 )
352 assert os.path.isfile(sourcefile), msg
353 sdist_class.run(self)
354
355
356 class CheckingBuildExt(build_ext):
357 """
358     Subclass build_ext to get a clearer report if Cython is necessary.
359 """
360
361 def check_cython_extensions(self, extensions):
362 for ext in extensions:
363 for src in ext.sources:
364 if not os.path.exists(src):
365 print(f"{ext.name}: -> [{ext.sources}]")
366 raise Exception(
367 f"""Cython-generated file '{src}' not found.
368 Cython is required to compile pandas from a development branch.
369 Please install Cython or download a release package of pandas.
370 """
371 )
372
373 def build_extensions(self):
374 self.check_cython_extensions(self.extensions)
375 build_ext.build_extensions(self)
376
377
378 class CythonCommand(build_ext):
379 """
380 Custom distutils command subclassed from Cython.Distutils.build_ext
381 to compile pyx->c, and stop there. All this does is override the
382 C-compile method build_extension() with a no-op.
383 """
384
385 def build_extension(self, ext):
386 pass
387
388
389 class DummyBuildSrc(Command):
390 """ numpy's build_src command interferes with Cython's build_ext.
391 """
392
393 user_options = []
394
395 def initialize_options(self):
396 self.py_modules_dict = {}
397
398 def finalize_options(self):
399 pass
400
401 def run(self):
402 pass
403
404
405 cmdclass.update({"clean": CleanCommand, "build": build})
406 cmdclass["build_ext"] = CheckingBuildExt
407
408 if cython:
409 suffix = ".pyx"
410 cmdclass["cython"] = CythonCommand
411 else:
412 suffix = ".c"
413 cmdclass["build_src"] = DummyBuildSrc
414
415 # ----------------------------------------------------------------------
416 # Preparation of compiler arguments
417
418 debugging_symbols_requested = "--with-debugging-symbols" in sys.argv
419 if debugging_symbols_requested:
420 sys.argv.remove("--with-debugging-symbols")
421
422
423 if sys.byteorder == "big":
424 endian_macro = [("__BIG_ENDIAN__", "1")]
425 else:
426 endian_macro = [("__LITTLE_ENDIAN__", "1")]
427
428
429 if is_platform_windows():
430 extra_compile_args = []
431 extra_link_args = []
432 if debugging_symbols_requested:
433 extra_compile_args.append("/Z7")
434 extra_link_args.append("/DEBUG")
435 else:
436 extra_compile_args = ["-Werror"]
437 extra_link_args = []
438 if debugging_symbols_requested:
439 extra_compile_args.append("-g")
440
441 # Build for at least macOS 10.9 when compiling on a 10.9 system or above,
442 # overriding CPython distutils behaviour which is to target the version that
443 # python was built for. This may be overridden by setting
444 # MACOSX_DEPLOYMENT_TARGET before calling setup.py
445 if is_platform_mac():
446 if "MACOSX_DEPLOYMENT_TARGET" not in os.environ:
447 current_system = platform.mac_ver()[0]
448 python_target = get_config_vars().get(
449 "MACOSX_DEPLOYMENT_TARGET", current_system
450 )
451 if (
452 LooseVersion(python_target) < "10.9"
453 and LooseVersion(current_system) >= "10.9"
454 ):
455 os.environ["MACOSX_DEPLOYMENT_TARGET"] = "10.9"
456
457 if sys.version_info[:2] == (3, 8): # GH 33239
458 extra_compile_args.append("-Wno-error=deprecated-declarations")
459
460 # enable coverage by building cython files by setting the environment variable
461 # "PANDAS_CYTHON_COVERAGE" (with a Truthy value) or by running build_ext
462 # with `--with-cython-coverage` enabled
463 linetrace = os.environ.get("PANDAS_CYTHON_COVERAGE", False)
464 if "--with-cython-coverage" in sys.argv:
465 linetrace = True
466 sys.argv.remove("--with-cython-coverage")
467
468 # Note: if not using `cythonize`, coverage can be enabled by
469 # pinning `ext.cython_directives = directives` to each ext in extensions.
470 # github.com/cython/cython/wiki/enhancements-compilerdirectives#in-setuppy
471 directives = {"linetrace": False, "language_level": 3}
472 macros = []
473 if linetrace:
474 # https://pypkg.com/pypi/pytest-cython/f/tests/example-project/setup.py
475 directives["linetrace"] = True
476 macros = [("CYTHON_TRACE", "1"), ("CYTHON_TRACE_NOGIL", "1")]
477
478 # in numpy>=1.16.0, silence build warnings about deprecated API usage
479 # we can't do anything about these warnings because they stem from
480 # cython+numpy version mismatches.
481 macros.append(("NPY_NO_DEPRECATED_API", "0"))
482 if "-Werror" in extra_compile_args:
483 try:
484 import numpy as np
485 except ImportError:
486 pass
487 else:
488 if np.__version__ < LooseVersion("1.16.0"):
489 extra_compile_args.remove("-Werror")
490
491
492 # ----------------------------------------------------------------------
493 # Specification of Dependencies
494
495 # TODO: Need to check to see if e.g. `linetrace` has changed and possibly
496 # re-compile.
497 def maybe_cythonize(extensions, *args, **kwargs):
498 """
499 Render tempita templates before calling cythonize. This is skipped for
500
501 * clean
502 * sdist
503 """
504 if "clean" in sys.argv or "sdist" in sys.argv:
505 # See https://github.com/cython/cython/issues/1495
506 return extensions
507
508 elif not cython:
509 # GH#28836 raise a helpful error message
510 if _CYTHON_VERSION:
511 raise RuntimeError(
512 f"Cannot cythonize with old Cython version ({_CYTHON_VERSION} "
513 f"installed, needs {min_cython_ver})"
514 )
515 raise RuntimeError("Cannot cythonize without Cython installed.")
516
517 numpy_incl = pkg_resources.resource_filename("numpy", "core/include")
518 # TODO: Is this really necessary here?
519 for ext in extensions:
520 if hasattr(ext, "include_dirs") and numpy_incl not in ext.include_dirs:
521 ext.include_dirs.append(numpy_incl)
522
523 # reuse any parallel arguments provided for compilation to cythonize
524 parser = argparse.ArgumentParser()
525 parser.add_argument("-j", type=int)
526 parser.add_argument("--parallel", type=int)
527 parsed, _ = parser.parse_known_args()
528
529 nthreads = 0
530 if parsed.parallel:
531 nthreads = parsed.parallel
532 elif parsed.j:
533 nthreads = parsed.j
534
535 kwargs["nthreads"] = nthreads
536 build_ext.render_templates(_pxifiles)
537 return cythonize(extensions, *args, **kwargs)
538
539
540 def srcpath(name=None, suffix=".pyx", subdir="src"):
541 return pjoin("pandas", subdir, name + suffix)
542
543
544 lib_depends = ["pandas/_libs/src/parse_helper.h"]
545
546 klib_include = ["pandas/_libs/src/klib"]
547
548 tseries_depends = [
549 "pandas/_libs/tslibs/src/datetime/np_datetime.h",
550 "pandas/_libs/tslibs/src/datetime/np_datetime_strings.h",
551 ]
552
553 ext_data = {
554 "_libs.algos": {
555 "pyxfile": "_libs/algos",
556 "include": klib_include,
557 "depends": _pxi_dep["algos"],
558 },
559 "_libs.groupby": {"pyxfile": "_libs/groupby"},
560 "_libs.hashing": {"pyxfile": "_libs/hashing", "depends": []},
561 "_libs.hashtable": {
562 "pyxfile": "_libs/hashtable",
563 "include": klib_include,
564 "depends": (["pandas/_libs/src/klib/khash_python.h"] + _pxi_dep["hashtable"]),
565 },
566 "_libs.index": {
567 "pyxfile": "_libs/index",
568 "include": klib_include,
569 "depends": _pxi_dep["index"],
570 },
571 "_libs.indexing": {"pyxfile": "_libs/indexing"},
572 "_libs.internals": {"pyxfile": "_libs/internals"},
573 "_libs.interval": {
574 "pyxfile": "_libs/interval",
575 "include": klib_include,
576 "depends": _pxi_dep["interval"],
577 },
578 "_libs.join": {"pyxfile": "_libs/join", "include": klib_include},
579 "_libs.lib": {
580 "pyxfile": "_libs/lib",
581 "depends": lib_depends + tseries_depends,
582 "include": klib_include, # due to tokenizer import
583 "sources": ["pandas/_libs/src/parser/tokenizer.c"],
584 },
585 "_libs.missing": {"pyxfile": "_libs/missing", "depends": tseries_depends},
586 "_libs.parsers": {
587 "pyxfile": "_libs/parsers",
588 "include": klib_include + ["pandas/_libs/src"],
589 "depends": [
590 "pandas/_libs/src/parser/tokenizer.h",
591 "pandas/_libs/src/parser/io.h",
592 ],
593 "sources": [
594 "pandas/_libs/src/parser/tokenizer.c",
595 "pandas/_libs/src/parser/io.c",
596 ],
597 },
598 "_libs.reduction": {"pyxfile": "_libs/reduction"},
599 "_libs.ops": {"pyxfile": "_libs/ops"},
600 "_libs.ops_dispatch": {"pyxfile": "_libs/ops_dispatch"},
601 "_libs.properties": {"pyxfile": "_libs/properties"},
602 "_libs.reshape": {"pyxfile": "_libs/reshape", "depends": []},
603 "_libs.sparse": {"pyxfile": "_libs/sparse", "depends": _pxi_dep["sparse"]},
604 "_libs.tslib": {"pyxfile": "_libs/tslib", "depends": tseries_depends},
605 "_libs.tslibs.base": {"pyxfile": "_libs/tslibs/base"},
606 "_libs.tslibs.ccalendar": {"pyxfile": "_libs/tslibs/ccalendar"},
607 "_libs.tslibs.dtypes": {"pyxfile": "_libs/tslibs/dtypes"},
608 "_libs.tslibs.conversion": {
609 "pyxfile": "_libs/tslibs/conversion",
610 "depends": tseries_depends,
611 "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"],
612 },
613 "_libs.tslibs.fields": {
614 "pyxfile": "_libs/tslibs/fields",
615 "depends": tseries_depends,
616 },
617 "_libs.tslibs.nattype": {"pyxfile": "_libs/tslibs/nattype"},
618 "_libs.tslibs.np_datetime": {
619 "pyxfile": "_libs/tslibs/np_datetime",
620 "depends": tseries_depends,
621 "sources": [
622 "pandas/_libs/tslibs/src/datetime/np_datetime.c",
623 "pandas/_libs/tslibs/src/datetime/np_datetime_strings.c",
624 ],
625 },
626 "_libs.tslibs.offsets": {
627 "pyxfile": "_libs/tslibs/offsets",
628 "depends": tseries_depends,
629 },
630 "_libs.tslibs.parsing": {
631 "pyxfile": "_libs/tslibs/parsing",
632 "include": klib_include,
633 "depends": ["pandas/_libs/src/parser/tokenizer.h"],
634 "sources": ["pandas/_libs/src/parser/tokenizer.c"],
635 },
636 "_libs.tslibs.period": {
637 "pyxfile": "_libs/tslibs/period",
638 "depends": tseries_depends,
639 "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"],
640 },
641 "_libs.tslibs.strptime": {
642 "pyxfile": "_libs/tslibs/strptime",
643 "depends": tseries_depends,
644 },
645 "_libs.tslibs.timedeltas": {
646 "pyxfile": "_libs/tslibs/timedeltas",
647 "depends": tseries_depends,
648 },
649 "_libs.tslibs.timestamps": {
650 "pyxfile": "_libs/tslibs/timestamps",
651 "depends": tseries_depends,
652 },
653 "_libs.tslibs.timezones": {"pyxfile": "_libs/tslibs/timezones"},
654 "_libs.tslibs.tzconversion": {
655 "pyxfile": "_libs/tslibs/tzconversion",
656 "depends": tseries_depends,
657 },
658 "_libs.tslibs.vectorized": {"pyxfile": "_libs/tslibs/vectorized"},
659 "_libs.testing": {"pyxfile": "_libs/testing"},
660 "_libs.window.aggregations": {
661 "pyxfile": "_libs/window/aggregations",
662 "language": "c++",
663 "suffix": ".cpp",
664 "depends": ["pandas/_libs/src/skiplist.h"],
665 },
666 "_libs.window.indexers": {"pyxfile": "_libs/window/indexers"},
667 "_libs.writers": {"pyxfile": "_libs/writers"},
668 "io.sas._sas": {"pyxfile": "io/sas/sas"},
669 }
670
671 extensions = []
672
673 for name, data in ext_data.items():
674 source_suffix = suffix if suffix == ".pyx" else data.get("suffix", ".c")
675
676 sources = [srcpath(data["pyxfile"], suffix=source_suffix, subdir="")]
677
678 sources.extend(data.get("sources", []))
679
680 include = data.get("include")
681
682 obj = Extension(
683 f"pandas.{name}",
684 sources=sources,
685 depends=data.get("depends", []),
686 include_dirs=include,
687 language=data.get("language", "c"),
688 define_macros=data.get("macros", macros),
689 extra_compile_args=extra_compile_args,
690 extra_link_args=extra_link_args,
691 )
692
693 extensions.append(obj)
694
695 # ----------------------------------------------------------------------
696 # ujson
697
698 if suffix == ".pyx":
699 # undo dumb setuptools bug clobbering .pyx sources back to .c
700 for ext in extensions:
701 if ext.sources[0].endswith((".c", ".cpp")):
702 root, _ = os.path.splitext(ext.sources[0])
703 ext.sources[0] = root + suffix
704
705 ujson_ext = Extension(
706 "pandas._libs.json",
707 depends=[
708 "pandas/_libs/src/ujson/lib/ultrajson.h",
709 "pandas/_libs/src/ujson/python/date_conversions.h",
710 ],
711 sources=(
712 [
713 "pandas/_libs/src/ujson/python/ujson.c",
714 "pandas/_libs/src/ujson/python/objToJSON.c",
715 "pandas/_libs/src/ujson/python/date_conversions.c",
716 "pandas/_libs/src/ujson/python/JSONtoObj.c",
717 "pandas/_libs/src/ujson/lib/ultrajsonenc.c",
718 "pandas/_libs/src/ujson/lib/ultrajsondec.c",
719 ]
720 + [
721 "pandas/_libs/tslibs/src/datetime/np_datetime.c",
722 "pandas/_libs/tslibs/src/datetime/np_datetime_strings.c",
723 ]
724 ),
725 include_dirs=[
726 "pandas/_libs/src/ujson/python",
727 "pandas/_libs/src/ujson/lib",
728 "pandas/_libs/src/datetime",
729 ],
730 extra_compile_args=(["-D_GNU_SOURCE"] + extra_compile_args),
731 extra_link_args=extra_link_args,
732 define_macros=macros,
733 )
734
735
736 extensions.append(ujson_ext)
737
738 # ----------------------------------------------------------------------
739
740
741 def setup_package():
742 setuptools_kwargs = {
743 "install_requires": [
744 "python-dateutil >= 2.7.3",
745 "pytz >= 2017.2",
746 f"numpy >= {min_numpy_ver}",
747 ],
748 "setup_requires": [f"numpy >= {min_numpy_ver}"],
749 "zip_safe": False,
750 }
751
752 setup(
753 name=DISTNAME,
754 maintainer=AUTHOR,
755 version=versioneer.get_version(),
756 packages=find_packages(include=["pandas", "pandas.*"]),
757 package_data={"": ["templates/*", "_libs/**/*.dll"]},
758 ext_modules=maybe_cythonize(extensions, compiler_directives=directives),
759 maintainer_email=EMAIL,
760 description=DESCRIPTION,
761 license=LICENSE,
762 cmdclass=cmdclass,
763 url=URL,
764 download_url=DOWNLOAD_URL,
765 project_urls=PROJECT_URLS,
766 long_description=LONG_DESCRIPTION,
767 classifiers=CLASSIFIERS,
768 platforms="any",
769 python_requires=">=3.6.1",
770 extras_require={
771 "test": [
772 # sync with setup.cfg minversion & install.rst
773 "pytest>=4.0.2",
774 "pytest-xdist",
775 "hypothesis>=3.58",
776 ]
777 },
778 entry_points={
779 "pandas_plotting_backends": ["matplotlib = pandas:plotting._matplotlib"]
780 },
781 **setuptools_kwargs,
782 )
783
784
785 if __name__ == "__main__":
786 # Freeze to support parallel compilation when using spawn instead of fork
787 multiprocessing.freeze_support()
788 setup_package()
789
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| pandas-dev/pandas | b0468aa45f3912d6f8823d1cd418af34ffdcd2b1 | BUG: s3 reads from public buckets not working
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
```python
# Your code here
import pandas as pd
df = pd.read_csv("s3://nyc-tlc/trip data/yellow_tripdata_2019-01.csv")
```
<details>
<summary> Error stack trace </summary>
<pre>
Traceback (most recent call last):
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/s3.py", line 33, in get_file_and_filesystem
file = fs.open(_strip_schema(filepath_or_buffer), mode)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/fsspec/spec.py", line 775, in open
**kwargs
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 378, in _open
autocommit=autocommit, requester_pays=requester_pays)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 1097, in __init__
cache_type=cache_type)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/fsspec/spec.py", line 1065, in __init__
self.details = fs.info(path)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 530, in info
Key=key, **version_id_kw(version_id), **self.req_kw)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 200, in _call_s3
return method(**additional_kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 622, in _make_api_call
operation_model, request_dict, request_context)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 641, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 132, in _send_request
request = self.create_request(request_dict, operation_model)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in create_request
operation_name=operation_model.name)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/signers.py", line 160, in sign
auth.add_auth(request)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/parsers.py", line 431, in _read
filepath_or_buffer, encoding, compression
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/common.py", line 212, in get_filepath_or_buffer
filepath_or_buffer, encoding=encoding, compression=compression, mode=mode
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/s3.py", line 52, in get_filepath_or_buffer
file, _fs = get_file_and_filesystem(filepath_or_buffer, mode=mode)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/pandas/io/s3.py", line 42, in get_file_and_filesystem
file = fs.open(_strip_schema(filepath_or_buffer), mode)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/fsspec/spec.py", line 775, in open
**kwargs
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 378, in _open
autocommit=autocommit, requester_pays=requester_pays)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 1097, in __init__
cache_type=cache_type)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/fsspec/spec.py", line 1065, in __init__
self.details = fs.info(path)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 530, in info
Key=key, **version_id_kw(version_id), **self.req_kw)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/s3fs/core.py", line 200, in _call_s3
return method(**additional_kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 622, in _make_api_call
operation_model, request_dict, request_context)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/client.py", line 641, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 132, in _send_request
request = self.create_request(request_dict, operation_model)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in create_request
operation_name=operation_model.name)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/signers.py", line 160, in sign
auth.add_auth(request)
File "/home/conda/envs/pandas-test/lib/python3.7/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
</pre>
</details>
#### Problem description
Reading directly from s3 public buckets (without manually configuring the `anon` parameter via s3fs) is broken with pandas 1.0.4 (worked with 1.0.3).
Looks like reading from public buckets requires `anon=True` when creating the filesystem. Commit 22cf0f5dfcfbddd5506fdaf260e485bff1b88ef1 seems to have introduced the issue, where `anon=False` is passed when the `NoCredentialsError` is encountered.
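In the meantime the file can be read by building the filesystem manually with `anon=True` and passing the open file object to pandas (a minimal workaround sketch, assuming `s3fs` is installed; the bucket and key are the ones from the example above):
```python
import pandas as pd
import s3fs

# Create the filesystem with anonymous access explicitly, bypassing the
# credential lookup that currently fails for public buckets.
fs = s3fs.S3FileSystem(anon=True)
with fs.open("nyc-tlc/trip data/yellow_tripdata_2019-01.csv", "rb") as f:
    df = pd.read_csv(f)
```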
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-55-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.4
numpy : 1.18.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.0.2
setuptools : 47.1.1.post20200604
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
pytest : None
pyxlsb : None
s3fs : 0.4.2
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| @ayushdg thanks for the report!
cc @simonjayhawkins @alimcmaster1 for 1.0.5, it might be safer to revert https://github.com/pandas-dev/pandas/pull/33632, and then target the fixes (like https://github.com/pandas-dev/pandas/pull/34500) to master
Agree @jorisvandenbossche - do you want me to open a PR to revert #33632 on the 1.0.x branch? Apologies for this change; it didn’t go as planned. I’ll check why our test cases didn’t catch the above!
> do you want me to open a PR to revert #33632 on 1.0.x branch?
Yes, that sounds good
> Apologies for this change it didn’t go as planned.
No, no, nobody of us had foreseen the breakages ;)
Can't seem to reproduce this using moto... Potentially related: https://github.com/dask/s3fs/blob/master/s3fs/tests/test_s3fs.py#L1089
(I can repro locally using the s3 URL above - if I remove AWS Creds from my environment)
The fix for this, targeting 1.1, is to set ‘anon=True’ in S3FileSystem: https://github.com/pandas-dev/pandas/pull/33632/files#diff-a37b395bed03f0404dec864a4529c97dR41
I’ll wait, as we are moving to fsspec, which gets rid of this logic: https://github.com/pandas-dev/pandas/pull/34266 - but we should definitely try using moto to test this.
Can anyone summarize the status here?
1.0.3: worked
1.0.4: broken
master: broken?
master+https://github.com/pandas-dev/pandas/pull/34266: broken?
Do we have a plan in place to restore this? IIUC the old way was to
1. try with the default (which I think looks up keys based on env vars)
2. If we get an error, retry with `anon=True`
Yep, it broke in 1.0.4, and will be fixed in 1.0.5 by reverting the patch that broke it.
That means that master is still broken, and thus we first need to write a test for it, and check whether #34266 actually fixes it already, or otherwise still fix it differently.
The old way was indeed to try with `anon=True` if it first failed. I suppose we can "simply" restore that logic? (in case it's not automatically fixed with fsspec)
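A minimal sketch of that old fallback, roughly what restoring it would look like (the helper name `_open_s3_with_fallback` is made up for illustration, and it assumes the missing-credentials failure surfaces as a botocore error or `PermissionError`):
```python
import fsspec
from botocore.exceptions import ClientError, NoCredentialsError


def _open_s3_with_fallback(path, mode="rb", storage_options=None):
    # First attempt: use whatever credentials the environment provides.
    storage_options = dict(storage_options or {})
    try:
        return fsspec.open(path, mode=mode, **storage_options).open()
    except (ClientError, NoCredentialsError, PermissionError):
        # Retry anonymously so reads from public buckets still work
        # when no credentials are configured.
        storage_options["anon"] = True
        return fsspec.open(path, mode=mode, **storage_options).open()
```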
Thanks
> in case it's not automatically fixed with fsspec
It's not. So we'll need to do that explicitly. Long-term we might want to get away from this logic by asking users to do `read_csv(..., storage_options={"requester_pays": False})`. But for 1.1 we'll want to restore the old implicit retry behavior if possible. | 2020-06-19T23:07:29Z | <patch>
diff --git a/pandas/io/common.py b/pandas/io/common.py
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -202,9 +202,37 @@ def get_filepath_or_buffer(
filepath_or_buffer = filepath_or_buffer.replace("s3n://", "s3://")
fsspec = import_optional_dependency("fsspec")
- file_obj = fsspec.open(
- filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
- ).open()
+ # If botocore is installed we fallback to reading with anon=True
+ # to allow reads from public buckets
+ err_types_to_retry_with_anon: List[Any] = []
+ try:
+ import_optional_dependency("botocore")
+ from botocore.exceptions import ClientError, NoCredentialsError
+
+ err_types_to_retry_with_anon = [
+ ClientError,
+ NoCredentialsError,
+ PermissionError,
+ ]
+ except ImportError:
+ pass
+
+ try:
+ file_obj = fsspec.open(
+ filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
+ ).open()
+ # GH 34626 Reads from Public Buckets without Credentials needs anon=True
+ except tuple(err_types_to_retry_with_anon):
+ if storage_options is None:
+ storage_options = {"anon": True}
+ else:
+ # don't mutate user input.
+ storage_options = dict(storage_options)
+ storage_options["anon"] = True
+ file_obj = fsspec.open(
+ filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
+ ).open()
+
return file_obj, encoding, compression, True
if isinstance(filepath_or_buffer, (str, bytes, mmap.mmap)):
</patch> | [] | [] | |||
Qiskit__qiskit-9386 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DAGCircuitError: 'bit mapping invalid
### Informations
- **Qiskit: 0.39.2**:
- **Python: 3.10.9**:
- **Mac**:
### What is the current behavior?
I'm implementing a quantum half adder in a Jupyter Notebook.
When I try running my circuit on the "qasm_simulator" simulator, Jupyter reports
DAGCircuitError: 'bit mapping invalid: expected 4, got 8'
Here is the code I've written. The error occurs on the last line of the third code block.
```
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, Aer
#SUM
X = QuantumRegister(1, "in |X⟩")
Y = QuantumRegister(1, "in |Y⟩")
sum_out = QuantumRegister(1, "out SUM |0⟩")
SUM = QuantumCircuit(X, Y, sum_out, name='SUM')
SUM.cx(1, 2)
SUM.cx(0, 2)
fig = SUM.draw('mpl', True)
SUM = SUM.to_instruction()
fig
```
```
#half_adder
cout = QuantumRegister(1, 'out Carry |0⟩')
c = ClassicalRegister(4)
hadder = QuantumCircuit(X,Y,sum_out,cout,c)
hadder.ccx(X,Y,cout)
hadder.append(SUM,[0,1,2])
show = hadder.draw("mpl",True)
hadder = hadder.to_instruction()
show
```
```
#testing half_adder
qu = QuantumRegister(4)
cl = ClassicalRegister(4)
circ = QuantumCircuit(qu,cl)
circ.x(qu[0])
circ.x(qu[1])
circ.append(hadder,[0,1,2,3])
for i in range(0,4):
circ.measure(qu[i],cl[i])
circ.draw("mpl",True)
print(execute(circ,Aer.get_backend('qasm_simulator'), shots = 1).result().get_counts())
```
### What is the expected behavior?
I don't fully understand the error. I would like to troubleshoot it so that I can see the result.
### Suggested solutions
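One possible workaround (a minimal sketch reusing the registers and the `SUM` instruction defined above, not a fix for the underlying error): build the half adder without the unused classical register, so the resulting instruction only carries the four qubits it acts on, and keep the measurements in the outer circuit.
```python
# Workaround sketch: leave the classical register out of the half-adder block.
hadder = QuantumCircuit(X, Y, sum_out, cout)
hadder.ccx(X, Y, cout)
hadder.append(SUM, [0, 1, 2])
hadder = hadder.to_instruction()

qu = QuantumRegister(4)
cl = ClassicalRegister(4)
circ = QuantumCircuit(qu, cl)
circ.x(qu[0])
circ.x(qu[1])
circ.append(hadder, [0, 1, 2, 3])  # no classical bits to map now
circ.measure(qu, cl)
print(execute(circ, Aer.get_backend('qasm_simulator'), shots=1).result().get_counts())
```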
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2 [![License](https://img.shields.io/github/license/Qiskit/qiskit-terra.svg?style=popout-square)](https://opensource.org/licenses/Apache-2.0)<!--- long-description-skip-begin -->[![Release](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)[![Downloads](https://img.shields.io/pypi/dm/qiskit-terra.svg?style=popout-square)](https://pypi.org/project/qiskit-terra/)[![Coverage Status](https://coveralls.io/repos/github/Qiskit/qiskit-terra/badge.svg?branch=main)](https://coveralls.io/github/Qiskit/qiskit-terra?branch=main)[![Minimum rustc 1.61.0](https://img.shields.io/badge/rustc-1.61.0+-blue.svg)](https://rust-lang.github.io/rfcs/2495-min-rust-version.html)<!--- long-description-skip-end -->
3
4 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
5
6 This library is the core component of Qiskit, **Terra**, which contains the building blocks for creating
7 and working with quantum circuits, programs, and algorithms. It also contains a compiler that supports
8 different quantum computers and a common interface for running programs on different quantum computer architectures.
9
10 For more details on how to use Qiskit you can refer to the documentation located here:
11
12 https://qiskit.org/documentation/
13
14
15 ## Installation
16
17 We encourage installing Qiskit via ``pip``. The following command installs the core Qiskit components, including Terra.
18
19 ```bash
20 pip install qiskit
21 ```
22
23 Pip will handle all dependencies automatically and you will always install the latest (and well-tested) version.
24
25 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-install-from-source-label).
26
27 ## Creating Your First Quantum Program in Qiskit Terra
28
29 Now that Qiskit is installed, it's time to begin working with Qiskit. To do this
30 we create a `QuantumCircuit` object to define a basic quantum program.
31
32 ```python
33 from qiskit import QuantumCircuit
34 qc = QuantumCircuit(2, 2)
35 qc.h(0)
36 qc.cx(0, 1)
37 qc.measure([0,1], [0,1])
38 ```
39
40 This simple example makes an entangled state, also called a [Bell state](https://qiskit.org/textbook/ch-gates/multiple-qubits-entangled-states.html#3.2-Entangled-States-).
41
42 Once you've made your first quantum circuit, you can then simulate it.
43 To do this, first we need to compile your circuit for the target backend we're going to run
44 on. In this case we are leveraging the built-in `BasicAer` simulator. However, this
45 simulator is primarily for testing and is limited in performance and functionality (as the name
46 implies). You should consider more sophisticated simulators, such as [`qiskit-aer`](https://github.com/Qiskit/qiskit-aer/),
47 for any real simulation work.
48
49 ```python
50 from qiskit import transpile
51 from qiskit.providers.basicaer import QasmSimulatorPy
52 backend_sim = QasmSimulatorPy()
53 transpiled_qc = transpile(qc, backend_sim)
54 ```
55
56 After compiling the circuit we can then run this on the ``backend`` object with:
57
58 ```python
59 result = backend_sim.run(transpiled_qc).result()
60 print(result.get_counts(qc))
61 ```
62
63 The output from this execution will look similar to this:
64
65 ```python
66 {'00': 513, '11': 511}
67 ```
68
69 For further examples of using Qiskit you can look at the example scripts in **examples/python**. You can start with
70 [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in the levels. Also
71 you can refer to the tutorials in the documentation here:
72
73 https://qiskit.org/documentation/tutorials.html
74
75
76 ### Executing your code on a real quantum chip
77
78 You can also use Qiskit to execute your code on a **real quantum processor**.
79 Qiskit provides an abstraction layer that lets users run quantum circuits on hardware from any
80 vendor that provides an interface to their systems through Qiskit. Using these ``providers`` you can run any Qiskit code against
81 real quantum computers. Some examples of published provider packages for running on real hardware are:
82
83 * https://github.com/Qiskit/qiskit-ibmq-provider
84 * https://github.com/Qiskit-Partners/qiskit-ionq
85 * https://github.com/Qiskit-Partners/qiskit-aqt-provider
86 * https://github.com/qiskit-community/qiskit-braket-provider
87 * https://github.com/qiskit-community/qiskit-quantinuum-provider
88 * https://github.com/rigetti/qiskit-rigetti
89
90 <!-- This is not an exhaustive list, and if you maintain a provider package please feel free to open a PR to add new providers -->
91
92 You can refer to the documentation of these packages for further instructions
93 on how to get access and use these systems.
94
95 ## Contribution Guidelines
96
97 If you'd like to contribute to Qiskit Terra, please take a look at our
98 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
99
100 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
101 [join the Qiskit Slack community](https://qisk.it/join-slack)
102 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
103 For questions that are more suited for a forum we use the `qiskit` tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
104
105 ## Next Steps
106
107 Now you're set up and ready to check out some of the other examples from our
108 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
109
110 ## Authors and Citation
111
112 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
113 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
114
115 ## Changelog and Release Notes
116
117 The changelog for a particular release is dynamically generated and gets
118 written to the release page on Github for each release. For example, you can
119 find the page for the `0.9.0` release here:
120
121 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
122
123 The changelog for the current release can be found in the releases tab:
124 [![Releases](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)
125 The changelog provides a quick overview of notable changes for a given
126 release.
127
128 Additionally, as part of each release detailed release notes are written to
129 document in detail what has changed as part of a release. This includes any
130 documentation on potential breaking changes on upgrade and new features.
131 For example, you can find the release notes for the `0.9.0` release in the
132 Qiskit documentation here:
133
134 https://qiskit.org/documentation/release_notes.html#terra-0-9
135
136 ## License
137
138 [Apache License 2.0](LICENSE.txt)
139
[end of README.md]
[start of examples/python/rippleadd.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """
14 Ripple adder example based on Cuccaro et al., quant-ph/0410184.
15
16 """
17
18 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
19 from qiskit import BasicAer
20 from qiskit import execute
21
22 ###############################################################
23 # Set the backend name and coupling map.
24 ###############################################################
25 backend = BasicAer.get_backend("qasm_simulator")
26 coupling_map = [
27 [0, 1],
28 [0, 8],
29 [1, 2],
30 [1, 9],
31 [2, 3],
32 [2, 10],
33 [3, 4],
34 [3, 11],
35 [4, 5],
36 [4, 12],
37 [5, 6],
38 [5, 13],
39 [6, 7],
40 [6, 14],
41 [7, 15],
42 [8, 9],
43 [9, 10],
44 [10, 11],
45 [11, 12],
46 [12, 13],
47 [13, 14],
48 [14, 15],
49 ]
50
51 ###############################################################
52 # Make a quantum program for the n-bit ripple adder.
53 ###############################################################
54 n = 2
55
56 a = QuantumRegister(n, "a")
57 b = QuantumRegister(n, "b")
58 cin = QuantumRegister(1, "cin")
59 cout = QuantumRegister(1, "cout")
60 ans = ClassicalRegister(n + 1, "ans")
61 qc = QuantumCircuit(a, b, cin, cout, ans, name="rippleadd")
62
63
64 def majority(p, a, b, c):
65 """Majority gate."""
66 p.cx(c, b)
67 p.cx(c, a)
68 p.ccx(a, b, c)
69
70
71 def unmajority(p, a, b, c):
72 """Unmajority gate."""
73 p.ccx(a, b, c)
74 p.cx(c, a)
75 p.cx(a, b)
76
77
78 # Build a temporary subcircuit that adds a to b,
79 # storing the result in b
80 adder_subcircuit = QuantumCircuit(cin, a, b, cout)
81 majority(adder_subcircuit, cin[0], b[0], a[0])
82 for j in range(n - 1):
83 majority(adder_subcircuit, a[j], b[j + 1], a[j + 1])
84 adder_subcircuit.cx(a[n - 1], cout[0])
85 for j in reversed(range(n - 1)):
86 unmajority(adder_subcircuit, a[j], b[j + 1], a[j + 1])
87 unmajority(adder_subcircuit, cin[0], b[0], a[0])
88
89 # Set the inputs to the adder
90 qc.x(a[0]) # Set input a = 0...0001
91 qc.x(b) # Set input b = 1...1111
92 # Apply the adder
93 qc &= adder_subcircuit
94 # Measure the output register in the computational basis
95 for j in range(n):
96 qc.measure(b[j], ans[j])
97 qc.measure(cout[0], ans[n])
98
99 ###############################################################
100 # execute the program.
101 ###############################################################
102
103 # First version: not mapped
104 job = execute(qc, backend=backend, coupling_map=None, shots=1024)
105 result = job.result()
106 print(result.get_counts(qc))
107
108 # Second version: mapped to 2x8 array coupling graph
109 job = execute(qc, backend=backend, coupling_map=coupling_map, shots=1024)
110 result = job.result()
111 print(result.get_counts(qc))
112
113 # Both versions should give the same distribution
114
[end of examples/python/rippleadd.py]
[start of qiskit/circuit/__init__.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """
14 ========================================
15 Quantum Circuits (:mod:`qiskit.circuit`)
16 ========================================
17
18 .. currentmodule:: qiskit.circuit
19
20 Overview
21 ========
22
23 The fundamental element of quantum computing is the **quantum circuit**.
24 A quantum circuit is a computational routine consisting of coherent quantum
25 operations on quantum data, such as qubits. It is an ordered sequence of quantum
26 gates, measurements and resets, which may be conditioned on real-time classical
27 computation. A set of quantum gates is said to be universal if any unitary
28 transformation of the quantum data can be efficiently approximated arbitrarily well
29 as a sequence of gates in the set. Any quantum program can be represented by a
30 sequence of quantum circuits and classical near-time computation.
31
32 In Qiskit, this core element is represented by the :class:`QuantumCircuit` class.
33 Below is an example of a quantum circuit that makes a three-qubit GHZ state
34 defined as:
35
36 .. math::
37
38 |\\psi\\rangle = \\left(|000\\rangle+|111\\rangle\\right)/\\sqrt{2}
39
40
41 .. plot::
42 :include-source:
43
44 from qiskit import QuantumCircuit
45 # Create a circuit with a register of three qubits
46 circ = QuantumCircuit(3)
47 # H gate on qubit 0, putting this qubit in a superposition of |0> + |1>.
48 circ.h(0)
49 # A CX (CNOT) gate on control qubit 0 and target qubit 1 generating a Bell state.
50 circ.cx(0, 1)
51 # CX (CNOT) gate on control qubit 0 and target qubit 2 resulting in a GHZ state.
52 circ.cx(0, 2)
53 # Draw the circuit
54 circ.draw('mpl')
55
56
57 Supplementary Information
58 =========================
59
60 .. dropdown:: Quantum Circuit with conditionals
61 :animate: fade-in-slide-down
62
63 When building a quantum circuit, there can be interest in applying a certain gate only
64 if a classical register has a specific value. This can be done with the
65 :meth:`InstructionSet.c_if` method.
66
67 In the following example, we start with a single-qubit circuit formed by only a Hadamard gate
68 (:class:`~.HGate`), in which we expect to get :math:`|0\\rangle` and :math:`|1\\rangle`
69 with equal probability.
70
71 .. plot::
72 :include-source:
73
74 from qiskit import BasicAer, transpile, QuantumRegister, ClassicalRegister, QuantumCircuit
75
76 qr = QuantumRegister(1)
77 cr = ClassicalRegister(1)
78 qc = QuantumCircuit(qr, cr)
79 qc.h(0)
80 qc.measure(0, 0)
81 qc.draw('mpl')
82
83 .. code-block::
84
85 backend = BasicAer.get_backend('qasm_simulator')
86 tqc = transpile(qc, backend)
87 counts = backend.run(tqc).result().get_counts()
88
89 print(counts)
90
91 .. parsed-literal::
92
93 {'0': 524, '1': 500}
94
95 Now, we add an :class:`~.XGate` only if the value of the :class:`~.ClassicalRegister` is 0.
96 That way, if the state is :math:`|0\\rangle`, it will be changed to :math:`|1\\rangle` and
97 if the state is :math:`|1\\rangle`, it will not be changed at all, so the final state will
98 always be :math:`|1\\rangle`.
99
100 .. plot::
101 :include-source:
102
103 from qiskit import BasicAer, transpile, QuantumRegister, ClassicalRegister, QuantumCircuit
104
105 qr = QuantumRegister(1)
106 cr = ClassicalRegister(1)
107 qc = QuantumCircuit(qr, cr)
108 qc.h(0)
109 qc.measure(0, 0)
110
111 qc.x(0).c_if(cr, 0)
112 qc.measure(0, 0)
113
114 qc.draw('mpl')
115
116 .. code-block::
117
118 backend = BasicAer.get_backend('qasm_simulator')
119 tqc = transpile(qc, backend)
120 counts = backend.run(tqc).result().get_counts()
121
122 print(counts)
123
124 .. parsed-literal::
125
126 {'1': 1024}
127
128 .. dropdown:: Quantum Circuit Properties
129 :animate: fade-in-slide-down
130
131 When constructing quantum circuits, there are several properties that help quantify
132 the "size" of the circuits, and their ability to be run on a noisy quantum device.
133 Some of these, like number of qubits, are straightforward to understand, while others
134 like depth and number of tensor components require a bit more explanation. Here we will
135 explain all of these properties, and, in preparation for understanding how circuits change
136 when run on actual devices, highlight the conditions under which they change.
137
138 Consider the following circuit:
139
140 .. plot::
141 :include-source:
142
143 from qiskit import QuantumCircuit
144 qc = QuantumCircuit(12)
145 for idx in range(5):
146 qc.h(idx)
147 qc.cx(idx, idx+5)
148
149 qc.cx(1, 7)
150 qc.x(8)
151 qc.cx(1, 9)
152 qc.x(7)
153 qc.cx(1, 11)
154 qc.swap(6, 11)
155 qc.swap(6, 9)
156 qc.swap(6, 10)
157 qc.x(6)
158 qc.draw('mpl')
159
160 From the plot, it is easy to see that this circuit has 12 qubits, and a collection of
161 Hadamard, CNOT, X, and SWAP gates. But how to quantify this programmatically? Because we
162 can do single-qubit gates on all the qubits simultaneously, the number of qubits in this
163 circuit is equal to the **width** of the circuit:
164
165 .. code-block::
166
167 qc.width()
168
169 .. parsed-literal::
170
171 12
172
173 We can also just get the number of qubits directly:
174
175 .. code-block::
176
177 qc.num_qubits
178
179 .. parsed-literal::
180
181 12
182
183 .. important::
184
185 For a quantum circuit composed from just qubits, the circuit width is equal
186 to the number of qubits. This is the definition used in quantum computing. However,
187 for more complicated circuits with classical registers, and classically controlled gates,
188 this equivalence breaks down. As such, from now on we will not refer to the number of
189 qubits in a quantum circuit as the width.
190
191
192 It is also straightforward to get the number and type of the gates in a circuit using
193 :meth:`QuantumCircuit.count_ops`:
194
195 .. code-block::
196
197 qc.count_ops()
198
199 .. parsed-literal::
200
201 OrderedDict([('cx', 8), ('h', 5), ('x', 3), ('swap', 3)])
202
203 We can also get just the raw count of operations by computing the circuits
204 :meth:`QuantumCircuit.size`:
205
206 .. code-block::
207
208 qc.size()
209
210 .. parsed-literal::
211
212 19
213
214 A particularly important circuit property is known as the circuit **depth**. The depth
215 of a quantum circuit is a measure of how many "layers" of quantum gates, executed in
216 parallel, it takes to complete the computation defined by the circuit. Because quantum
217 gates take time to implement, the depth of a circuit roughly corresponds to the amount of
218 time it takes the quantum computer to execute the circuit. Thus, the depth of a circuit
219 is one important quantity used to measure if a quantum circuit can be run on a device.
220
221 The depth of a quantum circuit has a mathematical definition as the longest path in a
222 directed acyclic graph (DAG). However, such a definition is a bit hard to grasp, even for
223 experts. Fortunately, the depth of a circuit can be easily understood by anyone familiar
224 with playing `Tetris <https://en.wikipedia.org/wiki/Tetris>`_. Lets see how to compute this
225 graphically:
226
227 .. image:: /source_images/depth.gif
228
229
230 .. raw:: html
231
232 <br><br>
233
234
235 We can verify our graphical result using :meth:`QuantumCircuit.depth`:
236
237 .. code-block::
238
239 qc.depth()
240
241 .. parsed-literal::
242
243 9
244
245 .. raw:: html
246
247 <br>
248
249 Quantum Circuit API
250 ===================
251
252 Quantum Circuit Construction
253 ----------------------------
254
255 .. autosummary::
256 :toctree: ../stubs/
257
258 QuantumCircuit
259 QuantumRegister
260 Qubit
261 ClassicalRegister
262 Clbit
263 AncillaRegister
264 AncillaQubit
265 CircuitInstruction
266 Register
267 Bit
268
269 Gates and Instructions
270 ----------------------
271
272 .. autosummary::
273 :toctree: ../stubs/
274
275 Gate
276 ControlledGate
277 Delay
278 Instruction
279 InstructionSet
280 Operation
281 EquivalenceLibrary
282
283 Control Flow Operations
284 -----------------------
285
286 .. autosummary::
287 :toctree: ../stubs/
288
289 ControlFlowOp
290 IfElseOp
291 WhileLoopOp
292 ForLoopOp
293 BreakLoopOp
294 ContinueLoopOp
295
296 Parametric Quantum Circuits
297 ---------------------------
298
299 .. autosummary::
300 :toctree: ../stubs/
301
302 Parameter
303 ParameterVector
304 ParameterExpression
305
306 Random Circuits
307 ---------------
308
309 .. autosummary::
310 :toctree: ../stubs/
311
312 random.random_circuit
313 """
314 from .quantumcircuit import QuantumCircuit
315 from .classicalregister import ClassicalRegister, Clbit
316 from .quantumregister import QuantumRegister, Qubit, AncillaRegister, AncillaQubit
317 from .gate import Gate
318
319 # pylint: disable=cyclic-import
320 from .controlledgate import ControlledGate
321 from .instruction import Instruction
322 from .instructionset import InstructionSet
323 from .operation import Operation
324 from .barrier import Barrier
325 from .delay import Delay
326 from .measure import Measure
327 from .reset import Reset
328 from .parameter import Parameter
329 from .parametervector import ParameterVector
330 from .parameterexpression import ParameterExpression
331 from .quantumcircuitdata import CircuitInstruction
332 from .equivalence import EquivalenceLibrary
333 from .bit import Bit
334 from .register import Register
335 from . import library
336 from .commutation_checker import CommutationChecker
337
338 from .controlflow import (
339 ControlFlowOp,
340 WhileLoopOp,
341 ForLoopOp,
342 IfElseOp,
343 BreakLoopOp,
344 ContinueLoopOp,
345 )
346
347
348 _DEPRECATED_NAMES = {
349 "Int1": "qiskit.circuit.classicalfunction.types",
350 "Int2": "qiskit.circuit.classicalfunction.types",
351 "classical_function": "qiskit.circuit.classicalfunction",
352 "BooleanExpression": "qiskit.circuit.classicalfunction",
353 }
354
355
356 def __getattr__(name):
357 if name in _DEPRECATED_NAMES:
358 import importlib
359 import warnings
360
361 module_name = _DEPRECATED_NAMES[name]
362 warnings.warn(
363 f"Accessing '{name}' from '{__name__}' is deprecated since Qiskit Terra 0.22 "
364 f"and will be removed in 0.23. Import from '{module_name}' instead. "
365 "This will require installing 'tweedledum' as an optional dependency from Terra 0.23.",
366 DeprecationWarning,
367 stacklevel=2,
368 )
369 return getattr(importlib.import_module(module_name), name)
370 raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
371
[end of qiskit/circuit/__init__.py]
[start of qiskit/dagcircuit/dagcircuit.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2021.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """
14 Object to represent a quantum circuit as a directed acyclic graph (DAG).
15
16 The nodes in the graph are either input/output nodes or operation nodes.
17 The edges correspond to qubits or bits in the circuit. A directed edge
18 from node A to node B means that the (qu)bit passes from the output of A
19 to the input of B. The object's methods allow circuits to be constructed,
20 composed, and modified. Some natural properties like depth can be computed
21 directly from the graph.
22 """
23 from collections import OrderedDict, defaultdict
24 import copy
25 import itertools
26 import math
27 from typing import Generator, Any, List
28
29 import numpy as np
30 import rustworkx as rx
31
32 from qiskit.circuit import ControlFlowOp, ForLoopOp, IfElseOp, WhileLoopOp
33 from qiskit.circuit.exceptions import CircuitError
34 from qiskit.circuit.quantumregister import QuantumRegister, Qubit
35 from qiskit.circuit.classicalregister import ClassicalRegister, Clbit
36 from qiskit.circuit.gate import Gate
37 from qiskit.circuit.instruction import Instruction
38 from qiskit.circuit.parameterexpression import ParameterExpression
39 from qiskit.dagcircuit.exceptions import DAGCircuitError
40 from qiskit.dagcircuit.dagnode import DAGNode, DAGOpNode, DAGInNode, DAGOutNode
41 from qiskit.utils.deprecation import deprecate_function
42
43
44 class DAGCircuit:
45 """
46 Quantum circuit as a directed acyclic graph.
47
48 There are 3 types of nodes in the graph: inputs, outputs, and operations.
49 The nodes are connected by directed edges that correspond to qubits and
50 bits.
51 """
52
53 # pylint: disable=invalid-name
54
55 def __init__(self):
56 """Create an empty circuit."""
57
58 # Circuit name. Generally, this corresponds to the name
59 # of the QuantumCircuit from which the DAG was generated.
60 self.name = None
61
62 # Circuit metadata
63 self.metadata = None
64
65 # Set of wires (Register,idx) in the dag
66 self._wires = set()
67
68 # Map from wire (Register,idx) to input nodes of the graph
69 self.input_map = OrderedDict()
70
71 # Map from wire (Register,idx) to output nodes of the graph
72 self.output_map = OrderedDict()
73
74 # Directed multigraph whose nodes are inputs, outputs, or operations.
75 # Operation nodes have equal in- and out-degrees and carry
76 # additional data about the operation, including the argument order
77 # and parameter values.
78 # Input nodes have out-degree 1 and output nodes have in-degree 1.
79 # Edges carry wire labels (reg,idx) and each operation has
80 # corresponding in- and out-edges with the same wire labels.
81 self._multi_graph = rx.PyDAG()
82
83 # Map of qreg/creg name to Register object.
84 self.qregs = OrderedDict()
85 self.cregs = OrderedDict()
86
87 # List of Qubit/Clbit wires that the DAG acts on.
88 self.qubits: List[Qubit] = []
89 self.clbits: List[Clbit] = []
90
91 self._global_phase = 0
92 self._calibrations = defaultdict(dict)
93
94 self._op_names = {}
95
96 self.duration = None
97 self.unit = "dt"
98
99 @property
100 def wires(self):
101 """Return a list of the wires in order."""
102 return self.qubits + self.clbits
103
104 @property
105 def node_counter(self):
106 """
107 Returns the number of nodes in the dag.
108 """
109 return len(self._multi_graph)
110
111 @property
112 def global_phase(self):
113 """Return the global phase of the circuit."""
114 return self._global_phase
115
116 @global_phase.setter
117 def global_phase(self, angle):
118 """Set the global phase of the circuit.
119
120 Args:
121 angle (float, ParameterExpression)
122 """
123 if isinstance(angle, ParameterExpression):
124 self._global_phase = angle
125 else:
126 # Set the phase to the [0, 2π) interval
127 angle = float(angle)
128 if not angle:
129 self._global_phase = 0
130 else:
131 self._global_phase = angle % (2 * math.pi)
132
133 @property
134 def calibrations(self):
135 """Return calibration dictionary.
136
137 The custom pulse definition of a given gate is of the form
138 {'gate_name': {(qubits, params): schedule}}
139 """
140 return dict(self._calibrations)
141
142 @calibrations.setter
143 def calibrations(self, calibrations):
144 """Set the circuit calibration data from a dictionary of calibration definition.
145
146 Args:
147 calibrations (dict): A dictionary of input in the format
148 {'gate_name': {(qubits, gate_params): schedule}}
149 """
150 self._calibrations = defaultdict(dict, calibrations)
151
152 def add_calibration(self, gate, qubits, schedule, params=None):
153 """Register a low-level, custom pulse definition for the given gate.
154
155 Args:
156 gate (Union[Gate, str]): Gate information.
157 qubits (Union[int, Tuple[int]]): List of qubits to be measured.
158 schedule (Schedule): Schedule information.
159 params (Optional[List[Union[float, Parameter]]]): A list of parameters.
160
161 Raises:
162 Exception: if the gate is of type string and params is None.
163 """
164
165 def _format(operand):
166 try:
167 # Using float/complex value as a dict key is not a good idea.
168 # This makes the mapping quite sensitive to the rounding error.
169 # However, the mechanism is already tied to the execution model (i.e. pulse gate)
170 # and we cannot easily update this rule.
171 # The same logic exists in QuantumCircuit.add_calibration.
172 evaluated = complex(operand)
173 if np.isreal(evaluated):
174 evaluated = float(evaluated.real)
175 if evaluated.is_integer():
176 evaluated = int(evaluated)
177 return evaluated
178 except TypeError:
179 # Unassigned parameter
180 return operand
181
182 if isinstance(gate, Gate):
183 params = gate.params
184 gate = gate.name
185 if params is not None:
186 params = tuple(map(_format, params))
187 else:
188 params = ()
189
190 self._calibrations[gate][(tuple(qubits), params)] = schedule
191
192 def has_calibration_for(self, node):
193 """Return True if the dag has a calibration defined for the node operation. In this
194 case, the operation does not need to be translated to the device basis.
195 """
196 if not self.calibrations or node.op.name not in self.calibrations:
197 return False
198 qubits = tuple(self.qubits.index(qubit) for qubit in node.qargs)
199 params = []
200 for p in node.op.params:
201 if isinstance(p, ParameterExpression) and not p.parameters:
202 params.append(float(p))
203 else:
204 params.append(p)
205 params = tuple(params)
206 return (qubits, params) in self.calibrations[node.op.name]
207
208 def remove_all_ops_named(self, opname):
209 """Remove all operation nodes with the given name."""
210 for n in self.named_nodes(opname):
211 self.remove_op_node(n)
212
213 def add_qubits(self, qubits):
214 """Add individual qubit wires."""
215 if any(not isinstance(qubit, Qubit) for qubit in qubits):
216 raise DAGCircuitError("not a Qubit instance.")
217
218 duplicate_qubits = set(self.qubits).intersection(qubits)
219 if duplicate_qubits:
220 raise DAGCircuitError("duplicate qubits %s" % duplicate_qubits)
221
222 self.qubits.extend(qubits)
223 for qubit in qubits:
224 self._add_wire(qubit)
225
226 def add_clbits(self, clbits):
227 """Add individual clbit wires."""
228 if any(not isinstance(clbit, Clbit) for clbit in clbits):
229 raise DAGCircuitError("not a Clbit instance.")
230
231 duplicate_clbits = set(self.clbits).intersection(clbits)
232 if duplicate_clbits:
233 raise DAGCircuitError("duplicate clbits %s" % duplicate_clbits)
234
235 self.clbits.extend(clbits)
236 for clbit in clbits:
237 self._add_wire(clbit)
238
239 def add_qreg(self, qreg):
240 """Add all wires in a quantum register."""
241 if not isinstance(qreg, QuantumRegister):
242 raise DAGCircuitError("not a QuantumRegister instance.")
243 if qreg.name in self.qregs:
244 raise DAGCircuitError("duplicate register %s" % qreg.name)
245 self.qregs[qreg.name] = qreg
246 existing_qubits = set(self.qubits)
247 for j in range(qreg.size):
248 if qreg[j] not in existing_qubits:
249 self.qubits.append(qreg[j])
250 self._add_wire(qreg[j])
251
252 def add_creg(self, creg):
253 """Add all wires in a classical register."""
254 if not isinstance(creg, ClassicalRegister):
255 raise DAGCircuitError("not a ClassicalRegister instance.")
256 if creg.name in self.cregs:
257 raise DAGCircuitError("duplicate register %s" % creg.name)
258 self.cregs[creg.name] = creg
259 existing_clbits = set(self.clbits)
260 for j in range(creg.size):
261 if creg[j] not in existing_clbits:
262 self.clbits.append(creg[j])
263 self._add_wire(creg[j])
264
265 def _add_wire(self, wire):
266 """Add a qubit or bit to the circuit.
267
268 Args:
269 wire (Bit): the wire to be added
270
271 This adds a pair of in and out nodes connected by an edge.
272
273 Raises:
274 DAGCircuitError: if trying to add duplicate wire
275 """
276 if wire not in self._wires:
277 self._wires.add(wire)
278
279 inp_node = DAGInNode(wire=wire)
280 outp_node = DAGOutNode(wire=wire)
281 input_map_id, output_map_id = self._multi_graph.add_nodes_from([inp_node, outp_node])
282 inp_node._node_id = input_map_id
283 outp_node._node_id = output_map_id
284 self.input_map[wire] = inp_node
285 self.output_map[wire] = outp_node
286 self._multi_graph.add_edge(inp_node._node_id, outp_node._node_id, wire)
287 else:
288 raise DAGCircuitError(f"duplicate wire {wire}")
289
290 def remove_clbits(self, *clbits):
291 """
292 Remove classical bits from the circuit. All bits MUST be idle.
293 Any registers with references to at least one of the specified bits will
294 also be removed.
295
296 Args:
297 clbits (List[Clbit]): The bits to remove.
298
299 Raises:
300 DAGCircuitError: a clbit is not a :obj:`.Clbit`, is not in the circuit,
301 or is not idle.
302 """
303 if any(not isinstance(clbit, Clbit) for clbit in clbits):
304 raise DAGCircuitError(
305 "clbits not of type Clbit: %s" % [b for b in clbits if not isinstance(b, Clbit)]
306 )
307
308 clbits = set(clbits)
309 unknown_clbits = clbits.difference(self.clbits)
310 if unknown_clbits:
311 raise DAGCircuitError("clbits not in circuit: %s" % unknown_clbits)
312
313 busy_clbits = {bit for bit in clbits if not self._is_wire_idle(bit)}
314 if busy_clbits:
315 raise DAGCircuitError("clbits not idle: %s" % busy_clbits)
316
317 # remove any references to bits
318 cregs_to_remove = {creg for creg in self.cregs.values() if not clbits.isdisjoint(creg)}
319 self.remove_cregs(*cregs_to_remove)
320
321 for clbit in clbits:
322 self._remove_idle_wire(clbit)
323 self.clbits.remove(clbit)
324
325 def remove_cregs(self, *cregs):
326 """
327 Remove classical registers from the circuit, leaving underlying bits
328 in place.
329
330 Raises:
331 DAGCircuitError: a creg is not a ClassicalRegister, or is not in
332 the circuit.
333 """
334 if any(not isinstance(creg, ClassicalRegister) for creg in cregs):
335 raise DAGCircuitError(
336 "cregs not of type ClassicalRegister: %s"
337 % [r for r in cregs if not isinstance(r, ClassicalRegister)]
338 )
339
340 unknown_cregs = set(cregs).difference(self.cregs.values())
341 if unknown_cregs:
342 raise DAGCircuitError("cregs not in circuit: %s" % unknown_cregs)
343
344 for creg in cregs:
345 del self.cregs[creg.name]
346
347 def remove_qubits(self, *qubits):
348 """
349 Remove quantum bits from the circuit. All bits MUST be idle.
350 Any registers with references to at least one of the specified bits will
351 also be removed.
352
353 Args:
354 qubits (List[Qubit]): The bits to remove.
355
356 Raises:
357 DAGCircuitError: a qubit is not a :obj:`.Qubit`, is not in the circuit,
358 or is not idle.
359 """
360 if any(not isinstance(qubit, Qubit) for qubit in qubits):
361 raise DAGCircuitError(
362 "qubits not of type Qubit: %s" % [b for b in qubits if not isinstance(b, Qubit)]
363 )
364
365 qubits = set(qubits)
366 unknown_qubits = qubits.difference(self.qubits)
367 if unknown_qubits:
368 raise DAGCircuitError("qubits not in circuit: %s" % unknown_qubits)
369
370 busy_qubits = {bit for bit in qubits if not self._is_wire_idle(bit)}
371 if busy_qubits:
372 raise DAGCircuitError("qubits not idle: %s" % busy_qubits)
373
374 # remove any references to bits
375 qregs_to_remove = {qreg for qreg in self.qregs.values() if not qubits.isdisjoint(qreg)}
376 self.remove_qregs(*qregs_to_remove)
377
378 for qubit in qubits:
379 self._remove_idle_wire(qubit)
380 self.qubits.remove(qubit)
381
382 def remove_qregs(self, *qregs):
383 """
384 Remove classical registers from the circuit, leaving underlying bits
385 in place.
386
387 Raises:
388 DAGCircuitError: a qreg is not a QuantumRegister, or is not in
389 the circuit.
390 """
391 if any(not isinstance(qreg, QuantumRegister) for qreg in qregs):
392 raise DAGCircuitError(
393 "qregs not of type QuantumRegister: %s"
394 % [r for r in qregs if not isinstance(r, QuantumRegister)]
395 )
396
397 unknown_qregs = set(qregs).difference(self.qregs.values())
398 if unknown_qregs:
399 raise DAGCircuitError("qregs not in circuit: %s" % unknown_qregs)
400
401 for qreg in qregs:
402 del self.qregs[qreg.name]
403
404 def _is_wire_idle(self, wire):
405 """Check if a wire is idle.
406
407 Args:
408 wire (Bit): a wire in the circuit.
409
410 Returns:
411 bool: true if the wire is idle, false otherwise.
412
413 Raises:
414 DAGCircuitError: the wire is not in the circuit.
415 """
416 if wire not in self._wires:
417 raise DAGCircuitError("wire %s not in circuit" % wire)
418
419 try:
420 child = next(self.successors(self.input_map[wire]))
421 except StopIteration as e:
422 raise DAGCircuitError(
423 "Invalid dagcircuit input node %s has no output" % self.input_map[wire]
424 ) from e
425 return child is self.output_map[wire]
426
427 def _remove_idle_wire(self, wire):
428 """Remove an idle qubit or bit from the circuit.
429
430 Args:
431 wire (Bit): the wire to be removed, which MUST be idle.
432 """
433 inp_node = self.input_map[wire]
434 oup_node = self.output_map[wire]
435
436 self._multi_graph.remove_node(inp_node._node_id)
437 self._multi_graph.remove_node(oup_node._node_id)
438 self._wires.remove(wire)
439 del self.input_map[wire]
440 del self.output_map[wire]
441
442 def _check_condition(self, name, condition):
443 """Verify that the condition is valid.
444
445 Args:
446 name (string): used for error reporting
447 condition (tuple or None): a condition tuple (ClassicalRegister, int) or (Clbit, bool)
448
449 Raises:
450 DAGCircuitError: if conditioning on an invalid register
451 """
452 if (
453 condition is not None
454 and condition[0] not in self.clbits
455 and condition[0].name not in self.cregs
456 ):
457 raise DAGCircuitError("invalid creg in condition for %s" % name)
458
459 def _check_bits(self, args, amap):
460 """Check the values of a list of (qu)bit arguments.
461
462 For each element of args, check that amap contains it.
463
464 Args:
465 args (list[Bit]): the elements to be checked
466 amap (dict): a dictionary keyed on Qubits/Clbits
467
468 Raises:
469 DAGCircuitError: if a qubit is not contained in amap
470 """
471 # Check for each wire
472 for wire in args:
473 if wire not in amap:
474 raise DAGCircuitError(f"(qu)bit {wire} not found in {amap}")
475
476 @staticmethod
477 def _bits_in_condition(cond):
478 """Return a list of bits in the given condition.
479
480 Args:
481 cond (tuple or None): optional condition (ClassicalRegister, int) or (Clbit, bool)
482
483 Returns:
484 list[Clbit]: list of classical bits
485
486 Raises:
487 CircuitError: if cond[0] is not ClassicalRegister or Clbit
488 """
489 if cond is None:
490 return []
491 elif isinstance(cond[0], ClassicalRegister):
492 # Returns a list of all the cbits in the given creg cond[0].
493 return cond[0][:]
494 elif isinstance(cond[0], Clbit):
495 # Returns a singleton list of the conditional cbit.
496 return [cond[0]]
497 else:
498 raise CircuitError("Condition must be used with ClassicalRegister or Clbit.")
499
500 def _increment_op(self, op):
501 if op.name in self._op_names:
502 self._op_names[op.name] += 1
503 else:
504 self._op_names[op.name] = 1
505
506 def _decrement_op(self, op):
507 if self._op_names[op.name] == 1:
508 del self._op_names[op.name]
509 else:
510 self._op_names[op.name] -= 1
511
512 def _add_op_node(self, op, qargs, cargs):
513 """Add a new operation node to the graph and assign properties.
514
515 Args:
516 op (qiskit.circuit.Operation): the operation associated with the DAG node
517 qargs (list[Qubit]): list of quantum wires to attach to.
518 cargs (list[Clbit]): list of classical wires to attach to.
519 Returns:
520 int: The integer node index for the new op node on the DAG
521 """
522 # Add a new operation node to the graph
523 new_node = DAGOpNode(op=op, qargs=qargs, cargs=cargs)
524 node_index = self._multi_graph.add_node(new_node)
525 new_node._node_id = node_index
526 self._increment_op(op)
527 return node_index
528
529 @deprecate_function(
530 "The DAGCircuit._copy_circuit_metadata method is deprecated as of 0.20.0. It will be "
531 "removed no earlier than 3 months after the release date. You should use the "
532 "DAGCircuit.copy_empty_like method instead, which acts identically.",
533 since="0.20.0",
534 )
535 def _copy_circuit_metadata(self):
536 """DEPRECATED"""
537 return self.copy_empty_like()
538
539 def copy_empty_like(self):
540 """Return a copy of self with the same structure but empty.
541
542 That structure includes:
543 * name and other metadata
544 * global phase
545 * duration
546 * all the qubits and clbits, including the registers.
547
548 Returns:
549 DAGCircuit: An empty copy of self.
550 """
551 target_dag = DAGCircuit()
552 target_dag.name = self.name
553 target_dag._global_phase = self._global_phase
554 target_dag.duration = self.duration
555 target_dag.unit = self.unit
556 target_dag.metadata = self.metadata
557
558 target_dag.add_qubits(self.qubits)
559 target_dag.add_clbits(self.clbits)
560
561 for qreg in self.qregs.values():
562 target_dag.add_qreg(qreg)
563 for creg in self.cregs.values():
564 target_dag.add_creg(creg)
565
566 return target_dag
567
568 def apply_operation_back(self, op, qargs=(), cargs=()):
569 """Apply an operation to the output of the circuit.
570
571 Args:
572 op (qiskit.circuit.Operation): the operation associated with the DAG node
573 qargs (tuple[Qubit]): qubits that op will be applied to
574 cargs (tuple[Clbit]): cbits that op will be applied to
575 Returns:
576 DAGOpNode: the node for the op that was added to the dag
577
578 Raises:
579 DAGCircuitError: if a leaf node is connected to multiple outputs
580
581 """
582 qargs = tuple(qargs) if qargs is not None else ()
583 cargs = tuple(cargs) if cargs is not None else ()
584
585 all_cbits = self._bits_in_condition(getattr(op, "condition", None))
586 all_cbits = set(all_cbits).union(cargs)
587
588 self._check_condition(op.name, getattr(op, "condition", None))
589 self._check_bits(qargs, self.output_map)
590 self._check_bits(all_cbits, self.output_map)
591
592 node_index = self._add_op_node(op, qargs, cargs)
593
594 # Add new in-edges from predecessors of the output nodes to the
595 # operation node while deleting the old in-edges of the output nodes
596 # and adding new edges from the operation node to each output node
597
598 al = [qargs, all_cbits]
599 self._multi_graph.insert_node_on_in_edges_multiple(
600 node_index, [self.output_map[q]._node_id for q in itertools.chain(*al)]
601 )
602 return self._multi_graph[node_index]
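
    # A minimal sketch of ``apply_operation_back`` (illustrative; the gate classes
    # come from ``qiskit.circuit.library``):
    #
    #     from qiskit.circuit import QuantumRegister
    #     from qiskit.circuit.library import HGate, CXGate
    #     q = QuantumRegister(2, "q")
    #     dag = DAGCircuit()
    #     dag.add_qreg(q)
    #     dag.apply_operation_back(HGate(), qargs=(q[0],))
    #     dag.apply_operation_back(CXGate(), qargs=(q[0], q[1]))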
603
604 def apply_operation_front(self, op, qargs=(), cargs=()):
605 """Apply an operation to the input of the circuit.
606
607 Args:
608 op (qiskit.circuit.Operation): the operation associated with the DAG node
609 qargs (tuple[Qubit]): qubits that op will be applied to
610 cargs (tuple[Clbit]): cbits that op will be applied to
611 Returns:
612 DAGOpNode: the node for the op that was added to the dag
613
614 Raises:
615 DAGCircuitError: if initial nodes connected to multiple out edges
616 """
617 all_cbits = self._bits_in_condition(getattr(op, "condition", None))
618 all_cbits.extend(cargs)
619
620 self._check_condition(op.name, getattr(op, "condition", None))
621 self._check_bits(qargs, self.input_map)
622 self._check_bits(all_cbits, self.input_map)
623 node_index = self._add_op_node(op, qargs, cargs)
624
625 # Add new out-edges to successors of the input nodes from the
626 # operation node while deleting the old out-edges of the input nodes
627 # and adding new edges to the operation node from each input node
628 al = [qargs, all_cbits]
629 self._multi_graph.insert_node_on_out_edges_multiple(
630 node_index, [self.input_map[q]._node_id for q in itertools.chain(*al)]
631 )
632 return self._multi_graph[node_index]
633
634 @staticmethod
635 def _map_condition(wire_map, condition, target_cregs):
636 """Use the wire_map dict to change the condition tuple's creg name.
637
638 Args:
639 wire_map (dict): a map from source wires to destination wires
640 condition (tuple or None): (ClassicalRegister,int)
641 target_cregs (list[ClassicalRegister]): List of all cregs in the
642 target circuit onto which the condition might possibly be mapped.
643 Returns:
644 tuple(ClassicalRegister,int): new condition
645 Raises:
646 DAGCircuitError: if condition register not in wire_map, or if
647 wire_map maps condition onto more than one creg, or if the
648 specified condition is not present in a classical register.
649 """
650
651 if condition is None:
652 new_condition = None
653 else:
654 # if there is a condition, map the condition bits to the
655 # composed cregs based on the wire_map
656 is_reg = False
657 if isinstance(condition[0], Clbit):
658 cond_creg = [condition[0]]
659 else:
660 cond_creg = condition[0]
661 is_reg = True
662 cond_val = condition[1]
663 new_cond_val = 0
664 new_creg = None
665 bits_in_condcreg = [bit for bit in wire_map if bit in cond_creg]
666 for bit in bits_in_condcreg:
667 if is_reg:
668 try:
669 candidate_creg = next(
670 creg for creg in target_cregs if wire_map[bit] in creg
671 )
672 except StopIteration as ex:
673 raise DAGCircuitError(
674 "Did not find creg containing mapped clbit in conditional."
675 ) from ex
676 else:
677 # If cond is on a single Clbit then the candidate_creg is
678                         # the target Clbit to which 'bit' is mapped.
679 candidate_creg = wire_map[bit]
680 if new_creg is None:
681 new_creg = candidate_creg
682 elif new_creg != candidate_creg:
683 # Raise if wire_map maps condition creg on to more than one
684 # creg in target DAG.
685 raise DAGCircuitError(
686 "wire_map maps conditional register onto more than one creg."
687 )
688
689 if not is_reg:
690 # If the cond is on a single Clbit then the new_cond_val is the
691 # same as the cond_val since the new_creg is also a single Clbit.
692 new_cond_val = cond_val
693 elif 2 ** (cond_creg[:].index(bit)) & cond_val:
694                 # If the conditional value of the Clbit 'bit' is 1 then the new_cond_val
695 # is updated such that the conditional value of the Clbit to which 'bit'
696 # is mapped to in new_creg is 1.
697 new_cond_val += 2 ** (new_creg[:].index(wire_map[bit]))
698 if new_creg is None:
699 raise DAGCircuitError("Condition registers not found in wire_map.")
700 new_condition = (new_creg, new_cond_val)
701 return new_condition
702
703 def _map_condition_with_import(self, op, wire_map, creg_map):
704 """Map the condition in ``op`` to its counterpart in ``self`` using ``wire_map`` and
705 ``creg_map`` as lookup caches. All single-bit conditions should have a cache hit in the
706 ``wire_map``, but registers may involve a full linear search the first time they are
707 encountered. ``creg_map`` is mutated by this function. ``wire_map`` is not; it is an error
708 if a wire is not in the map.
709
710 This is different to ``_map_condition`` because it always succeeds; since the mapping for
711 all wires in the condition is assumed to exist, there can be no fragmented registers. If
712 there is no matching register (has the same bits in the same order) in ``self``, a new
713 register alias is added to represent the condition. This does not change the bits available
714 to ``self``, it just adds a new aliased grouping of them."""
715 op_condition = getattr(op, "condition", None)
716 if op_condition is None:
717 return op
718 new_op = copy.copy(op)
719 target, value = op_condition
720 if isinstance(target, Clbit):
721 new_op.condition = (wire_map[target], value)
722 else:
723 if target.name not in creg_map:
724 mapped_bits = [wire_map[bit] for bit in target]
725 for our_creg in self.cregs.values():
726 if mapped_bits == list(our_creg):
727 new_target = our_creg
728 break
729 else:
730 new_target = ClassicalRegister(bits=[wire_map[bit] for bit in target])
731 self.add_creg(new_target)
732 creg_map[target.name] = new_target
733 new_op.condition = (creg_map[target.name], value)
734 return new_op
735
736 def compose(self, other, qubits=None, clbits=None, front=False, inplace=True):
737 """Compose the ``other`` circuit onto the output of this circuit.
738
739         A subset of the input wires of ``other`` is mapped
740         to a subset of the output wires of this circuit.
741
742 ``other`` can be narrower or of equal width to ``self``.
743
744 Args:
745 other (DAGCircuit): circuit to compose with self
746 qubits (list[Qubit|int]): qubits of self to compose onto.
747 clbits (list[Clbit|int]): clbits of self to compose onto.
748 front (bool): If True, front composition will be performed (not implemented yet)
749 inplace (bool): If True, modify the object. Otherwise return composed circuit.
750
751 Returns:
752 DAGCircuit: the composed dag (returns None if inplace==True).
753
754 Raises:
755 DAGCircuitError: if ``other`` is wider or there are duplicate edge mappings.
756 """
757 if front:
758 raise DAGCircuitError("Front composition not supported yet.")
759
760 if len(other.qubits) > len(self.qubits) or len(other.clbits) > len(self.clbits):
761 raise DAGCircuitError(
762 "Trying to compose with another DAGCircuit which has more 'in' edges."
763 )
764
765 # number of qubits and clbits must match number in circuit or None
766 identity_qubit_map = dict(zip(other.qubits, self.qubits))
767 identity_clbit_map = dict(zip(other.clbits, self.clbits))
768 if qubits is None:
769 qubit_map = identity_qubit_map
770 elif len(qubits) != len(other.qubits):
771 raise DAGCircuitError(
772 "Number of items in qubits parameter does not"
773 " match number of qubits in the circuit."
774 )
775 else:
776 qubit_map = {
777 other.qubits[i]: (self.qubits[q] if isinstance(q, int) else q)
778 for i, q in enumerate(qubits)
779 }
780 if clbits is None:
781 clbit_map = identity_clbit_map
782 elif len(clbits) != len(other.clbits):
783 raise DAGCircuitError(
784 "Number of items in clbits parameter does not"
785 " match number of clbits in the circuit."
786 )
787 else:
788 clbit_map = {
789 other.clbits[i]: (self.clbits[c] if isinstance(c, int) else c)
790 for i, c in enumerate(clbits)
791 }
792 edge_map = {**qubit_map, **clbit_map} or None
793
794 # if no edge_map, try to do a 1-1 mapping in order
795 if edge_map is None:
796 edge_map = {**identity_qubit_map, **identity_clbit_map}
797
798 # Check the edge_map for duplicate values
799 if len(set(edge_map.values())) != len(edge_map):
800 raise DAGCircuitError("duplicates in wire_map")
801
802 # Compose
803 if inplace:
804 dag = self
805 else:
806 dag = copy.deepcopy(self)
807 dag.global_phase += other.global_phase
808
809 for gate, cals in other.calibrations.items():
810 dag._calibrations[gate].update(cals)
811
812 for nd in other.topological_nodes():
813 if isinstance(nd, DAGInNode):
814 # if in edge_map, get new name, else use existing name
815 m_wire = edge_map.get(nd.wire, nd.wire)
816 # the mapped wire should already exist
817 if m_wire not in dag.output_map:
818 raise DAGCircuitError(
819 "wire %s[%d] not in self" % (m_wire.register.name, m_wire.index)
820 )
821 if nd.wire not in other._wires:
822 raise DAGCircuitError(
823 "inconsistent wire type for %s[%d] in other"
824 % (nd.register.name, nd.wire.index)
825 )
826 elif isinstance(nd, DAGOutNode):
827 # ignore output nodes
828 pass
829 elif isinstance(nd, DAGOpNode):
830 condition = dag._map_condition(
831 edge_map, getattr(nd.op, "condition", None), dag.cregs.values()
832 )
833 dag._check_condition(nd.op.name, condition)
834 m_qargs = [edge_map.get(x, x) for x in nd.qargs]
835 m_cargs = [edge_map.get(x, x) for x in nd.cargs]
836 op = nd.op.copy()
837 if condition and not isinstance(op, Instruction):
838 raise DAGCircuitError("Cannot add a condition on a generic Operation.")
839 op.condition = condition
840 dag.apply_operation_back(op, m_qargs, m_cargs)
841 else:
842 raise DAGCircuitError("bad node type %s" % type(nd))
843
844 if not inplace:
845 return dag
846 else:
847 return None
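
    # A minimal sketch of ``compose`` between two equal-width DAGs (illustrative;
    # ``circuit_to_dag`` lives in ``qiskit.converters``):
    #
    #     from qiskit import QuantumCircuit
    #     from qiskit.converters import circuit_to_dag
    #     first, second = QuantumCircuit(2), QuantumCircuit(2)
    #     first.h(0)
    #     second.cx(0, 1)
    #     dag = circuit_to_dag(first)
    #     dag.compose(circuit_to_dag(second), inplace=True)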
848
849 def reverse_ops(self):
850 """Reverse the operations in the ``self`` circuit.
851
852 Returns:
853 DAGCircuit: the reversed dag.
854 """
855 # TODO: speed up
856 # pylint: disable=cyclic-import
857 from qiskit.converters import dag_to_circuit, circuit_to_dag
858
859 qc = dag_to_circuit(self)
860 reversed_qc = qc.reverse_ops()
861 reversed_dag = circuit_to_dag(reversed_qc)
862 return reversed_dag
863
864 def idle_wires(self, ignore=None):
865 """Return idle wires.
866
867 Args:
868 ignore (list(str)): List of node names to ignore. Default: []
869
870 Yields:
871 Bit: Bit in idle wire.
872
873 Raises:
874 DAGCircuitError: If the DAG is invalid
875 """
876 if ignore is None:
877 ignore = set()
878 ignore_set = set(ignore)
879 for wire in self._wires:
880 if not ignore:
881 if self._is_wire_idle(wire):
882 yield wire
883 else:
884 for node in self.nodes_on_wire(wire, only_ops=True):
885 if node.op.name not in ignore_set:
886 # If we found an op node outside of ignore we can stop iterating over the wire
887 break
888 else:
889 yield wire
890
891 def size(self, *, recurse: bool = False):
892 """Return the number of operations. If there is control flow present, this count may only
893 be an estimate, as the complete control-flow path cannot be statically known.
894
895 Args:
896 recurse: if ``True``, then recurse into control-flow operations. For loops with
897 known-length iterators are counted unrolled. If-else blocks sum both of the two
898 branches. While loops are counted as if the loop body runs once only. Defaults to
899 ``False`` and raises :class:`.DAGCircuitError` if any control flow is present, to
900 avoid silently returning a mostly meaningless number.
901
902 Returns:
903 int: the circuit size
904
905 Raises:
906 DAGCircuitError: if an unknown :class:`.ControlFlowOp` is present in a call with
907 ``recurse=True``, or any control flow is present in a non-recursive call.
908 """
909 length = len(self._multi_graph) - 2 * len(self._wires)
910 if not recurse:
911 if any(x in self._op_names for x in ("for_loop", "while_loop", "if_else")):
912 raise DAGCircuitError(
913 "Size with control flow is ambiguous."
914 " You may use `recurse=True` to get a result,"
915 " but see this method's documentation for the meaning of this."
916 )
917 return length
918 # pylint: disable=cyclic-import
919 from qiskit.converters import circuit_to_dag
920
921 for node in self.op_nodes(ControlFlowOp):
922 if isinstance(node.op, ForLoopOp):
923 indexset = node.op.params[0]
924 inner = len(indexset) * circuit_to_dag(node.op.blocks[0]).size(recurse=True)
925 elif isinstance(node.op, WhileLoopOp):
926 inner = circuit_to_dag(node.op.blocks[0]).size(recurse=True)
927 elif isinstance(node.op, IfElseOp):
928 inner = sum(circuit_to_dag(block).size(recurse=True) for block in node.op.blocks)
929 else:
930 raise DAGCircuitError(f"unknown control-flow type: '{node.op.name}'")
931 # Replace the "1" for the node itself with the actual count.
932 length += inner - 1
933 return length
934
935 def depth(self, *, recurse: bool = False):
936 """Return the circuit depth. If there is control flow present, this count may only be an
937         estimate, as the complete control-flow path cannot be statically known.
938
939 Args:
940 recurse: if ``True``, then recurse into control-flow operations. For loops
941 with known-length iterators are counted as if the loop had been manually unrolled
942 (*i.e.* with each iteration of the loop body written out explicitly).
943 If-else blocks take the longer case of the two branches. While loops are counted as
944 if the loop body runs once only. Defaults to ``False`` and raises
945 :class:`.DAGCircuitError` if any control flow is present, to avoid silently
946 returning a nonsensical number.
947
948 Returns:
949 int: the circuit depth
950
951 Raises:
952 DAGCircuitError: if not a directed acyclic graph
953 DAGCircuitError: if unknown control flow is present in a recursive call, or any control
954 flow is present in a non-recursive call.
955 """
956 if recurse:
957 from qiskit.converters import circuit_to_dag # pylint: disable=cyclic-import
958
959 node_lookup = {}
960 for node in self.op_nodes(ControlFlowOp):
961 weight = len(node.op.params[0]) if isinstance(node.op, ForLoopOp) else 1
962 if weight == 0:
963 node_lookup[node._node_id] = 0
964 else:
965 node_lookup[node._node_id] = weight * max(
966 circuit_to_dag(block).depth(recurse=True) for block in node.op.blocks
967 )
968
969 def weight_fn(_source, target, _edge):
970 return node_lookup.get(target, 1)
971
972 else:
973 if any(x in self._op_names for x in ("for_loop", "while_loop", "if_else")):
974 raise DAGCircuitError(
975 "Depth with control flow is ambiguous."
976 " You may use `recurse=True` to get a result,"
977 " but see this method's documentation for the meaning of this."
978 )
979 weight_fn = None
980
981 try:
982 depth = rx.dag_longest_path_length(self._multi_graph, weight_fn) - 1
983 except rx.DAGHasCycle as ex:
984 raise DAGCircuitError("not a DAG") from ex
985 return depth if depth >= 0 else 0
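
    # A small worked example for ``size`` and ``depth`` on a circuit without
    # control flow (illustrative):
    #
    #     from qiskit import QuantumCircuit
    #     from qiskit.converters import circuit_to_dag
    #     qc = QuantumCircuit(2)
    #     qc.h(0)
    #     qc.cx(0, 1)
    #     dag = circuit_to_dag(qc)
    #     dag.size()   # -> 2  (two op nodes)
    #     dag.depth()  # -> 2  (h then cx on qubit 0)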
986
987 def width(self):
988 """Return the total number of qubits + clbits used by the circuit.
989 This function formerly returned the number of qubits by the calculation
990         ``len(self._wires) - self.num_clbits()``,
991         but was changed by issue #2564 to return the number of qubits + clbits,
992 with the new function DAGCircuit.num_qubits replacing the former
993 semantic of DAGCircuit.width().
994 """
995 return len(self._wires)
996
997 def num_qubits(self):
998 """Return the total number of qubits used by the circuit.
999 num_qubits() replaces former use of width().
1000 DAGCircuit.width() now returns qubits + clbits for
1001 consistency with Circuit.width() [qiskit-terra #2564].
1002 """
1003 return len(self.qubits)
1004
1005 def num_clbits(self):
1006 """Return the total number of classical bits used by the circuit."""
1007 return len(self.clbits)
1008
1009 def num_tensor_factors(self):
1010 """Compute how many components the circuit can decompose into."""
1011 return rx.number_weakly_connected_components(self._multi_graph)
1012
1013 def __eq__(self, other):
1014 # Try to convert to float, but in case of unbound ParameterExpressions
1015         # a TypeError will be raised, fall back to normal equality in those
1016 # cases
1017 try:
1018 self_phase = float(self.global_phase)
1019 other_phase = float(other.global_phase)
1020 if (
1021 abs((self_phase - other_phase + np.pi) % (2 * np.pi) - np.pi) > 1.0e-10
1022 ): # TODO: atol?
1023 return False
1024 except TypeError:
1025 if self.global_phase != other.global_phase:
1026 return False
1027 if self.calibrations != other.calibrations:
1028 return False
1029
1030 self_bit_indices = {bit: idx for idx, bit in enumerate(self.qubits + self.clbits)}
1031 other_bit_indices = {bit: idx for idx, bit in enumerate(other.qubits + other.clbits)}
1032
1033 self_qreg_indices = {
1034 regname: [self_bit_indices[bit] for bit in reg] for regname, reg in self.qregs.items()
1035 }
1036 self_creg_indices = {
1037 regname: [self_bit_indices[bit] for bit in reg] for regname, reg in self.cregs.items()
1038 }
1039
1040 other_qreg_indices = {
1041 regname: [other_bit_indices[bit] for bit in reg] for regname, reg in other.qregs.items()
1042 }
1043 other_creg_indices = {
1044 regname: [other_bit_indices[bit] for bit in reg] for regname, reg in other.cregs.items()
1045 }
1046 if self_qreg_indices != other_qreg_indices or self_creg_indices != other_creg_indices:
1047 return False
1048
1049 def node_eq(node_self, node_other):
1050 return DAGNode.semantic_eq(node_self, node_other, self_bit_indices, other_bit_indices)
1051
1052 return rx.is_isomorphic_node_match(self._multi_graph, other._multi_graph, node_eq)
1053
1054 def topological_nodes(self, key=None):
1055 """
1056 Yield nodes in topological order.
1057
1058 Args:
1059 key (Callable): A callable which will take a DAGNode object and
1060 return a string sort key. If not specified the
1061 :attr:`~qiskit.dagcircuit.DAGNode.sort_key` attribute will be
1062 used as the sort key for each node.
1063
1064 Returns:
1065 generator(DAGOpNode, DAGInNode, or DAGOutNode): node in topological order
1066 """
1067
1068 def _key(x):
1069 return x.sort_key
1070
1071 if key is None:
1072 key = _key
1073
1074 return iter(rx.lexicographical_topological_sort(self._multi_graph, key=key))
1075
1076 def topological_op_nodes(self, key=None) -> Generator[DAGOpNode, Any, Any]:
1077 """
1078 Yield op nodes in topological order.
1079
1080 Allowed to pass in specific key to break ties in top order
1081
1082 Args:
1083 key (Callable): A callable which will take a DAGNode object and
1084 return a string sort key. If not specified the
1085 :attr:`~qiskit.dagcircuit.DAGNode.sort_key` attribute will be
1086 used as the sort key for each node.
1087
1088 Returns:
1089 generator(DAGOpNode): op node in topological order
1090 """
1091 return (nd for nd in self.topological_nodes(key) if isinstance(nd, DAGOpNode))
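
    # For example, to list operation names in a deterministic topological order
    # (``dag`` being any populated DAGCircuit; illustrative):
    #
    #     names = [node.op.name for node in dag.topological_op_nodes()]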
1092
1093 def replace_block_with_op(self, node_block, op, wire_pos_map, cycle_check=True):
1094 """Replace a block of nodes with a single node.
1095
1096 This is used to consolidate a block of DAGOpNodes into a single
1097 operation. A typical example is a block of gates being consolidated
1098 into a single ``UnitaryGate`` representing the unitary matrix of the
1099 block.
1100
1101 Args:
1102 node_block (List[DAGNode]): A list of dag nodes that represents the
1103 node block to be replaced
1104 op (qiskit.circuit.Operation): The operation to replace the
1105 block with
1106 wire_pos_map (Dict[Qubit, int]): The dictionary mapping the qarg to
1107 the position. This is necessary to reconstruct the qarg order
1108 over multiple gates in the combined single op node.
1109             cycle_check (bool): When set to True this method will check whether
1110 replacing the provided ``node_block`` with a single node
1111 would introduce a cycle (which would invalidate the
1112 ``DAGCircuit``) and will raise a ``DAGCircuitError`` if a cycle
1113 would be introduced. This checking comes with a run time
1114 penalty. If you can guarantee that your input ``node_block`` is
1115 a contiguous block and won't introduce a cycle when it's
1116 contracted to a single node, this can be set to ``False`` to
1117 improve the runtime performance of this method.
1118
1119 Raises:
1120 DAGCircuitError: if ``cycle_check`` is set to ``True`` and replacing
1121 the specified block introduces a cycle or if ``node_block`` is
1122 empty.
1123
1124 Returns:
1125 DAGOpNode: The op node that replaces the block.
1126 """
1127 block_qargs = set()
1128 block_cargs = set()
1129 block_ids = [x._node_id for x in node_block]
1130
1131 # If node block is empty return early
1132 if not node_block:
1133 raise DAGCircuitError("Can't replace an empty node_block")
1134
1135 for nd in node_block:
1136 block_qargs |= set(nd.qargs)
1137 if isinstance(nd, DAGOpNode) and getattr(nd.op, "condition", None):
1138 block_cargs |= set(nd.cargs)
1139
1140 # Create replacement node
1141 new_node = DAGOpNode(
1142 op,
1143 sorted(block_qargs, key=lambda x: wire_pos_map[x]),
1144 sorted(block_cargs, key=lambda x: wire_pos_map[x]),
1145 )
1146
1147 try:
1148 new_node._node_id = self._multi_graph.contract_nodes(
1149 block_ids, new_node, check_cycle=cycle_check
1150 )
1151 except rx.DAGWouldCycle as ex:
1152 raise DAGCircuitError(
1153 "Replacing the specified node block would introduce a cycle"
1154 ) from ex
1155
1156 self._increment_op(op)
1157
1158 for nd in node_block:
1159 self._decrement_op(nd.op)
1160
1161 return new_node
1162
1163 def substitute_node_with_dag(self, node, input_dag, wires=None, propagate_condition=True):
1164 """Replace one node with dag.
1165
1166 Args:
1167 node (DAGOpNode): node to substitute
1168 input_dag (DAGCircuit): circuit that will substitute the node
1169 wires (list[Bit] | Dict[Bit, Bit]): gives an order for (qu)bits
1170 in the input circuit. If a list, then the bits refer to those in the ``input_dag``,
1171 and the order gets matched to the node wires by qargs first, then cargs, then
1172 conditions. If a dictionary, then a mapping of bits in the ``input_dag`` to those
1173 that the ``node`` acts on.
1174 propagate_condition (bool): If ``True`` (default), then any ``condition`` attribute on
1175 the operation within ``node`` is propagated to each node in the ``input_dag``. If
1176 ``False``, then the ``input_dag`` is assumed to faithfully implement suitable
1177 conditional logic already.
1178
1179 Returns:
1180 dict: maps node IDs from `input_dag` to their new node incarnations in `self`.
1181
1182 Raises:
1183 DAGCircuitError: if met with unexpected predecessor/successors
1184 """
1185 if not isinstance(node, DAGOpNode):
1186 raise DAGCircuitError(f"expected node DAGOpNode, got {type(node)}")
1187
1188 if isinstance(wires, dict):
1189 wire_map = wires
1190 else:
1191 wires = input_dag.wires if wires is None else wires
1192 node_cargs = set(node.cargs)
1193 node_wire_order = list(node.qargs) + list(node.cargs)
1194 # If we're not propagating it, the number of wires in the input DAG should include the
1195 # condition as well.
1196 if not propagate_condition:
1197 node_wire_order += [
1198 bit
1199 for bit in self._bits_in_condition(getattr(node.op, "condition", None))
1200 if bit not in node_cargs
1201 ]
1202 if len(wires) != len(node_wire_order):
1203 raise DAGCircuitError(
1204 f"bit mapping invalid: expected {len(node_wire_order)}, got {len(wires)}"
1205 )
1206 wire_map = dict(zip(wires, node_wire_order))
1207 if len(wire_map) != len(node_wire_order):
1208 raise DAGCircuitError("bit mapping invalid: some bits have duplicate entries")
1209 for input_dag_wire, our_wire in wire_map.items():
1210 if our_wire not in self.input_map:
1211 raise DAGCircuitError(f"bit mapping invalid: {our_wire} is not in this DAG")
1212 # Support mapping indiscriminately between Qubit and AncillaQubit, etc.
1213 check_type = Qubit if isinstance(our_wire, Qubit) else Clbit
1214 if not isinstance(input_dag_wire, check_type):
1215 raise DAGCircuitError(
1216 f"bit mapping invalid: {input_dag_wire} and {our_wire} are different bit types"
1217 )
1218
1219 reverse_wire_map = {b: a for a, b in wire_map.items()}
1220 creg_map = {}
1221 op_condition = getattr(node.op, "condition", None)
1222 if propagate_condition and op_condition is not None:
1223 in_dag = input_dag.copy_empty_like()
1224 target, value = op_condition
1225 if isinstance(target, Clbit):
1226 new_target = reverse_wire_map.get(target, Clbit())
1227 if new_target not in wire_map:
1228 in_dag.add_clbits([new_target])
1229 wire_map[new_target], reverse_wire_map[target] = target, new_target
1230 target_cargs = {new_target}
1231 else: # ClassicalRegister
1232 mapped_bits = [reverse_wire_map.get(bit, Clbit()) for bit in target]
1233 for ours, theirs in zip(target, mapped_bits):
1234 # Update to any new dummy bits we just created to the wire maps.
1235 wire_map[theirs], reverse_wire_map[ours] = ours, theirs
1236 new_target = ClassicalRegister(bits=mapped_bits)
1237 creg_map[new_target.name] = target
1238 in_dag.add_creg(new_target)
1239 target_cargs = set(new_target)
1240 new_condition = (new_target, value)
1241 for in_node in input_dag.topological_op_nodes():
1242 if getattr(in_node.op, "condition", None) is not None:
1243 raise DAGCircuitError(
1244 "cannot propagate a condition to an element that already has one"
1245 )
1246 if target_cargs.intersection(in_node.cargs):
1247 # This is for backwards compatibility with early versions of the method, as it is
1248 # a tested part of the API. In the newer model of a condition being an integral
1249 # part of the operation (not a separate property to be copied over), this error
1250 # is overzealous, because it forbids a custom instruction from implementing the
1251 # condition within its definition rather than at the top level.
1252 raise DAGCircuitError(
1253 "cannot propagate a condition to an element that acts on those bits"
1254 )
1255 new_op = copy.copy(in_node.op)
1256 new_op.condition = new_condition
1257 in_dag.apply_operation_back(new_op, in_node.qargs, in_node.cargs)
1258 else:
1259 in_dag = input_dag
1260
1261 if in_dag.global_phase:
1262 self.global_phase += in_dag.global_phase
1263
1264 # Add wire from pred to succ if no ops on mapped wire on ``in_dag``
1265 # rustworkx's substitute_node_with_subgraph lacks the DAGCircuit
1266 # context to know what to do in this case (the method won't even see
1267 # these nodes because they're filtered) so we manually retain the
1268 # edges prior to calling substitute_node_with_subgraph and set the
1269 # edge_map_fn callback kwarg to skip these edges when they're
1270 # encountered.
1271 for in_dag_wire, self_wire in wire_map.items():
1272 input_node = in_dag.input_map[in_dag_wire]
1273 output_node = in_dag.output_map[in_dag_wire]
1274 if in_dag._multi_graph.has_edge(input_node._node_id, output_node._node_id):
1275 pred = self._multi_graph.find_predecessors_by_edge(
1276 node._node_id, lambda edge, wire=self_wire: edge == wire
1277 )[0]
1278 succ = self._multi_graph.find_successors_by_edge(
1279 node._node_id, lambda edge, wire=self_wire: edge == wire
1280 )[0]
1281 self._multi_graph.add_edge(pred._node_id, succ._node_id, self_wire)
1282
1283         # Exclude any nodes from in_dag that are not a DAGOpNode or are on
1284 # bits outside the set specified by the wires kwarg
1285 def filter_fn(node):
1286 if not isinstance(node, DAGOpNode):
1287 return False
1288 for qarg in node.qargs:
1289 if qarg not in wire_map:
1290 return False
1291 return True
1292
1293 # Map edges into and out of node to the appropriate node from in_dag
1294 def edge_map_fn(source, _target, self_wire):
1295 wire = reverse_wire_map[self_wire]
1296 # successor edge
1297 if source == node._node_id:
1298 wire_output_id = in_dag.output_map[wire]._node_id
1299 out_index = in_dag._multi_graph.predecessor_indices(wire_output_id)[0]
1300             # Edges directly from input nodes to output nodes in in_dag are
1301 # already handled prior to calling rustworkx. Don't map these edges
1302 # in rustworkx.
1303 if not isinstance(in_dag._multi_graph[out_index], DAGOpNode):
1304 return None
1305 # predecessor edge
1306 else:
1307 wire_input_id = in_dag.input_map[wire]._node_id
1308 out_index = in_dag._multi_graph.successor_indices(wire_input_id)[0]
1309             # Edges directly from input nodes to output nodes in in_dag are
1310 # already handled prior to calling rustworkx. Don't map these edges
1311 # in rustworkx.
1312 if not isinstance(in_dag._multi_graph[out_index], DAGOpNode):
1313 return None
1314 return out_index
1315
1316 # Adjust edge weights from in_dag
1317 def edge_weight_map(wire):
1318 return wire_map[wire]
1319
1320 node_map = self._multi_graph.substitute_node_with_subgraph(
1321 node._node_id, in_dag._multi_graph, edge_map_fn, filter_fn, edge_weight_map
1322 )
1323 self._decrement_op(node.op)
1324
1325 # Iterate over nodes of input_circuit and update wires in node objects migrated
1326 # from in_dag
1327 for old_node_index, new_node_index in node_map.items():
1328 # update node attributes
1329 old_node = in_dag._multi_graph[old_node_index]
1330 m_op = self._map_condition_with_import(old_node.op, wire_map, creg_map)
1331 m_qargs = [wire_map[x] for x in old_node.qargs]
1332 m_cargs = [wire_map[x] for x in old_node.cargs]
1333 new_node = DAGOpNode(m_op, qargs=m_qargs, cargs=m_cargs)
1334 new_node._node_id = new_node_index
1335 self._multi_graph[new_node_index] = new_node
1336 self._increment_op(new_node.op)
1337
1338 return {k: self._multi_graph[v] for k, v in node_map.items()}
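
    # A sketch of a common use: replacing every ``cx`` node with an equivalent
    # sub-DAG (a CX with its direction reversed, conjugated by Hadamards). This is
    # illustrative only and assumes ``dag`` already contains some ``cx`` nodes:
    #
    #     from qiskit import QuantumCircuit
    #     from qiskit.converters import circuit_to_dag
    #     mini = QuantumCircuit(2)
    #     mini.h([0, 1])
    #     mini.cx(1, 0)
    #     mini.h([0, 1])
    #     mini_dag = circuit_to_dag(mini)
    #     for node in dag.op_nodes():
    #         if node.op.name == "cx":
    #             dag.substitute_node_with_dag(node, mini_dag)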
1339
1340 def substitute_node(self, node, op, inplace=False):
1341         """Replace a DAGOpNode with a single operation. qargs, cargs and
1342 conditions for the new operation will be inferred from the node to be
1343 replaced. The new operation will be checked to match the shape of the
1344 replaced operation.
1345
1346 Args:
1347 node (DAGOpNode): Node to be replaced
1348 op (qiskit.circuit.Operation): The :class:`qiskit.circuit.Operation`
1349 instance to be added to the DAG
1350 inplace (bool): Optional, default False. If True, existing DAG node
1351 will be modified to include op. Otherwise, a new DAG node will
1352 be used.
1353
1354 Returns:
1355 DAGOpNode: the new node containing the added operation.
1356
1357 Raises:
1358 DAGCircuitError: If replacement operation was incompatible with
1359 location of target node.
1360 """
1361
1362 if not isinstance(node, DAGOpNode):
1363 raise DAGCircuitError("Only DAGOpNodes can be replaced.")
1364
1365 if node.op.num_qubits != op.num_qubits or node.op.num_clbits != op.num_clbits:
1366 raise DAGCircuitError(
1367 "Cannot replace node of width ({} qubits, {} clbits) with "
1368 "operation of mismatched width ({} qubits, {} clbits).".format(
1369 node.op.num_qubits, node.op.num_clbits, op.num_qubits, op.num_clbits
1370 )
1371 )
1372
1373 if inplace:
1374 if op.name != node.op.name:
1375 self._increment_op(op)
1376 self._decrement_op(node.op)
1377 save_condition = getattr(node.op, "condition", None)
1378 node.op = op
1379 if save_condition and not isinstance(op, Instruction):
1380 raise DAGCircuitError("Cannot add a condition on a generic Operation.")
1381 node.op.condition = save_condition
1382 return node
1383
1384 new_node = copy.copy(node)
1385 save_condition = getattr(new_node.op, "condition", None)
1386 new_node.op = op
1387 if save_condition and not isinstance(new_node.op, Instruction):
1388 raise DAGCircuitError("Cannot add a condition on a generic Operation.")
1389 new_node.op.condition = save_condition
1390 self._multi_graph[node._node_id] = new_node
1391 if op.name != node.op.name:
1392 self._increment_op(op)
1393 self._decrement_op(node.op)
1394 return new_node
1395
1396 def node(self, node_id):
1397 """Get the node in the dag.
1398
1399 Args:
1400 node_id(int): Node identifier.
1401
1402 Returns:
1403 node: the node.
1404 """
1405 return self._multi_graph[node_id]
1406
1407 def nodes(self):
1408 """Iterator for node values.
1409
1410 Yield:
1411 node: the node.
1412 """
1413 yield from self._multi_graph.nodes()
1414
1415 def edges(self, nodes=None):
1416 """Iterator for edge values and source and dest node
1417
1418 This works by returning the output edges from the specified nodes. If
1419 no nodes are specified all edges from the graph are returned.
1420
1421 Args:
1422             nodes(DAGOpNode, DAGInNode, or DAGOutNode|list(DAGOpNode, DAGInNode, or DAGOutNode)):
1423 Either a list of nodes or a single input node. If none is specified,
1424 all edges are returned from the graph.
1425
1426 Yield:
1427 edge: the edge in the same format as out_edges the tuple
1428 (source node, destination node, edge data)
1429 """
1430 if nodes is None:
1431 nodes = self._multi_graph.nodes()
1432
1433 elif isinstance(nodes, (DAGOpNode, DAGInNode, DAGOutNode)):
1434 nodes = [nodes]
1435 for node in nodes:
1436 raw_nodes = self._multi_graph.out_edges(node._node_id)
1437 for source, dest, edge in raw_nodes:
1438 yield (self._multi_graph[source], self._multi_graph[dest], edge)
1439
1440 def op_nodes(self, op=None, include_directives=True):
1441 """Get the list of "op" nodes in the dag.
1442
1443 Args:
1444 op (Type): :class:`qiskit.circuit.Operation` subclass op nodes to
1445 return. If None, return all op nodes.
1446 include_directives (bool): include `barrier`, `snapshot` etc.
1447
1448 Returns:
1449 list[DAGOpNode]: the list of node ids containing the given op.
1450 """
1451 nodes = []
1452 for node in self._multi_graph.nodes():
1453 if isinstance(node, DAGOpNode):
1454 if not include_directives and getattr(node.op, "_directive", False):
1455 continue
1456 if op is None or isinstance(node.op, op):
1457 nodes.append(node)
1458 return nodes
1459
1460 def gate_nodes(self):
1461 """Get the list of gate nodes in the dag.
1462
1463 Returns:
1464 list[DAGOpNode]: the list of DAGOpNodes that represent gates.
1465 """
1466 nodes = []
1467 for node in self.op_nodes():
1468 if isinstance(node.op, Gate):
1469 nodes.append(node)
1470 return nodes
1471
1472 def named_nodes(self, *names):
1473 """Get the set of "op" nodes with the given name."""
1474 named_nodes = []
1475 for node in self._multi_graph.nodes():
1476 if isinstance(node, DAGOpNode) and node.op.name in names:
1477 named_nodes.append(node)
1478 return named_nodes
1479
1480 def two_qubit_ops(self):
1481 """Get list of 2 qubit operations. Ignore directives like snapshot and barrier."""
1482 ops = []
1483 for node in self.op_nodes(include_directives=False):
1484 if len(node.qargs) == 2:
1485 ops.append(node)
1486 return ops
1487
1488 def multi_qubit_ops(self):
1489 """Get list of 3+ qubit operations. Ignore directives like snapshot and barrier."""
1490 ops = []
1491 for node in self.op_nodes(include_directives=False):
1492 if len(node.qargs) >= 3:
1493 ops.append(node)
1494 return ops
1495
1496 def longest_path(self):
1497 """Returns the longest path in the dag as a list of DAGOpNodes, DAGInNodes, and DAGOutNodes."""
1498 return [self._multi_graph[x] for x in rx.dag_longest_path(self._multi_graph)]
1499
1500 def successors(self, node):
1501 """Returns iterator of the successors of a node as DAGOpNodes and DAGOutNodes."""
1502 return iter(self._multi_graph.successors(node._node_id))
1503
1504 def predecessors(self, node):
1505 """Returns iterator of the predecessors of a node as DAGOpNodes and DAGInNodes."""
1506 return iter(self._multi_graph.predecessors(node._node_id))
1507
1508 def is_successor(self, node, node_succ):
1509 """Checks if a second node is in the successors of node."""
1510 return self._multi_graph.has_edge(node._node_id, node_succ._node_id)
1511
1512 def is_predecessor(self, node, node_pred):
1513 """Checks if a second node is in the predecessors of node."""
1514 return self._multi_graph.has_edge(node_pred._node_id, node._node_id)
1515
1516 def quantum_predecessors(self, node):
1517 """Returns iterator of the predecessors of a node that are
1518 connected by a quantum edge as DAGOpNodes and DAGInNodes."""
1519 return iter(
1520 self._multi_graph.find_predecessors_by_edge(
1521 node._node_id, lambda edge_data: isinstance(edge_data, Qubit)
1522 )
1523 )
1524
1525 def ancestors(self, node):
1526 """Returns set of the ancestors of a node as DAGOpNodes and DAGInNodes."""
1527 return {self._multi_graph[x] for x in rx.ancestors(self._multi_graph, node._node_id)}
1528
1529 def descendants(self, node):
1530 """Returns set of the descendants of a node as DAGOpNodes and DAGOutNodes."""
1531 return {self._multi_graph[x] for x in rx.descendants(self._multi_graph, node._node_id)}
1532
1533 def bfs_successors(self, node):
1534 """
1535 Returns an iterator of tuples of (DAGNode, [DAGNodes]) where the DAGNode is the current node
1536 and [DAGNode] is its successors in BFS order.
1537 """
1538 return iter(rx.bfs_successors(self._multi_graph, node._node_id))
1539
1540 def quantum_successors(self, node):
1541 """Returns iterator of the successors of a node that are
1542         connected by a quantum edge as DAGOpNodes and DAGOutNodes."""
1543 return iter(
1544 self._multi_graph.find_successors_by_edge(
1545 node._node_id, lambda edge_data: isinstance(edge_data, Qubit)
1546 )
1547 )
1548
1549 def remove_op_node(self, node):
1550 """Remove an operation node n.
1551
1552 Add edges from predecessors to successors.
1553 """
1554 if not isinstance(node, DAGOpNode):
1555 raise DAGCircuitError(
1556 'The method remove_op_node only works on DAGOpNodes. A "%s" '
1557 "node type was wrongly provided." % type(node)
1558 )
1559
1560 self._multi_graph.remove_node_retain_edges(
1561 node._node_id, use_outgoing=False, condition=lambda edge1, edge2: edge1 == edge2
1562 )
1563 self._decrement_op(node.op)
1564
1565 def remove_ancestors_of(self, node):
1566 """Remove all of the ancestor operation nodes of node."""
1567 anc = rx.ancestors(self._multi_graph, node)
1568 # TODO: probably better to do all at once using
1569 # multi_graph.remove_nodes_from; same for related functions ...
1570
1571 for anc_node in anc:
1572 if isinstance(anc_node, DAGOpNode):
1573 self.remove_op_node(anc_node)
1574
1575 def remove_descendants_of(self, node):
1576 """Remove all of the descendant operation nodes of node."""
1577 desc = rx.descendants(self._multi_graph, node)
1578 for desc_node in desc:
1579 if isinstance(desc_node, DAGOpNode):
1580 self.remove_op_node(desc_node)
1581
1582 def remove_nonancestors_of(self, node):
1583 """Remove all of the non-ancestors operation nodes of node."""
1584 anc = rx.ancestors(self._multi_graph, node)
1585 comp = list(set(self._multi_graph.nodes()) - set(anc))
1586 for n in comp:
1587 if isinstance(n, DAGOpNode):
1588 self.remove_op_node(n)
1589
1590 def remove_nondescendants_of(self, node):
1591 """Remove all of the non-descendants operation nodes of node."""
1592 dec = rx.descendants(self._multi_graph, node)
1593 comp = list(set(self._multi_graph.nodes()) - set(dec))
1594 for n in comp:
1595 if isinstance(n, DAGOpNode):
1596 self.remove_op_node(n)
1597
1598 def front_layer(self):
1599 """Return a list of op nodes in the first layer of this dag."""
1600 graph_layers = self.multigraph_layers()
1601 try:
1602 next(graph_layers) # Remove input nodes
1603 except StopIteration:
1604 return []
1605
1606 op_nodes = [node for node in next(graph_layers) if isinstance(node, DAGOpNode)]
1607
1608 return op_nodes
1609
1610 def layers(self):
1611 """Yield a shallow view on a layer of this DAGCircuit for all d layers of this circuit.
1612
1613 A layer is a circuit whose gates act on disjoint qubits, i.e.,
1614 a layer has depth 1. The total number of layers equals the
1615 circuit depth d. The layers are indexed from 0 to d-1 with the
1616 earliest layer at index 0. The layers are constructed using a
1617 greedy algorithm. Each returned layer is a dict containing
1618 {"graph": circuit graph, "partition": list of qubit lists}.
1619
1620 The returned layer contains new (but semantically equivalent) DAGOpNodes, DAGInNodes,
1621 and DAGOutNodes. These are not the same as nodes of the original dag, but are equivalent
1622 via DAGNode.semantic_eq(node1, node2).
1623
1624 TODO: Gates that use the same cbits will end up in different
1625 layers as this is currently implemented. This may not be
1626 the desired behavior.
1627 """
1628 graph_layers = self.multigraph_layers()
1629 try:
1630 next(graph_layers) # Remove input nodes
1631 except StopIteration:
1632 return
1633
1634 for graph_layer in graph_layers:
1635
1636 # Get the op nodes from the layer, removing any input and output nodes.
1637 op_nodes = [node for node in graph_layer if isinstance(node, DAGOpNode)]
1638
1639 # Sort to make sure they are in the order they were added to the original DAG
1640 # It has to be done by node_id as graph_layer is just a list of nodes
1641 # with no implied topology
1642 # Drawing tools rely on _node_id to infer order of node creation
1643 # so we need this to be preserved by layers()
1644 op_nodes.sort(key=lambda nd: nd._node_id)
1645
1646 # Stop yielding once there are no more op_nodes in a layer.
1647 if not op_nodes:
1648 return
1649
1650 # Construct a shallow copy of self
1651 new_layer = self.copy_empty_like()
1652
1653 for node in op_nodes:
1654 # this creates new DAGOpNodes in the new_layer
1655 new_layer.apply_operation_back(node.op, node.qargs, node.cargs)
1656
1657 # The quantum registers that have an operation in this layer.
1658 support_list = [
1659 op_node.qargs
1660 for op_node in new_layer.op_nodes()
1661 if not getattr(op_node.op, "_directive", False)
1662 ]
1663
1664 yield {"graph": new_layer, "partition": support_list}
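
    # For example, to render each layer as its own circuit (illustrative;
    # ``dag_to_circuit`` lives in ``qiskit.converters``):
    #
    #     from qiskit.converters import dag_to_circuit
    #     for layer in dag.layers():
    #         print(dag_to_circuit(layer["graph"]))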
1665
1666 def serial_layers(self):
1667 """Yield a layer for all gates of this circuit.
1668
1669 A serial layer is a circuit with one gate. The layers have the
1670 same structure as in layers().
1671 """
1672 for next_node in self.topological_op_nodes():
1673 new_layer = self.copy_empty_like()
1674
1675 # Save the support of the operation we add to the layer
1676 support_list = []
1677 # Operation data
1678 op = copy.copy(next_node.op)
1679 qargs = copy.copy(next_node.qargs)
1680 cargs = copy.copy(next_node.cargs)
1681 condition = copy.copy(getattr(next_node.op, "condition", None))
1682 _ = self._bits_in_condition(condition)
1683
1684 # Add node to new_layer
1685 new_layer.apply_operation_back(op, qargs, cargs)
1686 # Add operation to partition
1687 if not getattr(next_node.op, "_directive", False):
1688 support_list.append(list(qargs))
1689 l_dict = {"graph": new_layer, "partition": support_list}
1690 yield l_dict
1691
1692 def multigraph_layers(self):
1693 """Yield layers of the multigraph."""
1694 first_layer = [x._node_id for x in self.input_map.values()]
1695 return iter(rx.layers(self._multi_graph, first_layer))
1696
1697 def collect_runs(self, namelist):
1698 """Return a set of non-conditional runs of "op" nodes with the given names.
1699
1700 For example, "... h q[0]; cx q[0],q[1]; cx q[0],q[1]; h q[1]; .."
1701 would produce the tuple of cx nodes as an element of the set returned
1702 from a call to collect_runs(["cx"]). If instead the cx nodes were
1703 "cx q[0],q[1]; cx q[1],q[0];", the method would still return the
1704 pair in a tuple. The namelist can contain names that are not
1705 in the circuit's basis.
1706
1707 Nodes must have only one successor to continue the run.
1708 """
1709
1710 def filter_fn(node):
1711 return (
1712 isinstance(node, DAGOpNode)
1713 and node.op.name in namelist
1714 and getattr(node.op, "condition", None) is None
1715 )
1716
1717 group_list = rx.collect_runs(self._multi_graph, filter_fn)
1718 return {tuple(x) for x in group_list}
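
    # For example, to inspect maximal runs of ``h`` and ``cx`` gates (illustrative):
    #
    #     for run in dag.collect_runs(["h", "cx"]):
    #         print([node.op.name for node in run])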
1719
1720 def collect_1q_runs(self):
1721 """Return a set of non-conditional runs of 1q "op" nodes."""
1722
1723 def filter_fn(node):
1724 return (
1725 isinstance(node, DAGOpNode)
1726 and len(node.qargs) == 1
1727 and len(node.cargs) == 0
1728 and getattr(node.op, "condition", None) is None
1729 and not node.op.is_parameterized()
1730 and isinstance(node.op, Gate)
1731 and hasattr(node.op, "__array__")
1732 )
1733
1734 return rx.collect_runs(self._multi_graph, filter_fn)
1735
1736 def collect_2q_runs(self):
1737 """Return a set of non-conditional runs of 2q "op" nodes."""
1738
1739 to_qid = {}
1740 for i, qubit in enumerate(self.qubits):
1741 to_qid[qubit] = i
1742
1743 def filter_fn(node):
1744 if isinstance(node, DAGOpNode):
1745 return (
1746 isinstance(node.op, Gate)
1747 and len(node.qargs) <= 2
1748 and not getattr(node.op, "condition", None)
1749 and not node.op.is_parameterized()
1750 )
1751 else:
1752 return None
1753
1754 def color_fn(edge):
1755 if isinstance(edge, Qubit):
1756 return to_qid[edge]
1757 else:
1758 return None
1759
1760 return rx.collect_bicolor_runs(self._multi_graph, filter_fn, color_fn)
1761
1762 def nodes_on_wire(self, wire, only_ops=False):
1763 """
1764 Iterator for nodes that affect a given wire.
1765
1766 Args:
1767 wire (Bit): the wire to be looked at.
1768             only_ops (bool): True if only the op nodes are wanted;
1769 otherwise, all nodes are returned.
1770 Yield:
1771 Iterator: the successive nodes on the given wire
1772
1773 Raises:
1774 DAGCircuitError: if the given wire doesn't exist in the DAG
1775 """
1776 current_node = self.input_map.get(wire, None)
1777
1778 if not current_node:
1779 raise DAGCircuitError("The given wire %s is not present in the circuit" % str(wire))
1780
1781 more_nodes = True
1782 while more_nodes:
1783 more_nodes = False
1784 # allow user to just get ops on the wire - not the input/output nodes
1785 if isinstance(current_node, DAGOpNode) or not only_ops:
1786 yield current_node
1787
1788 try:
1789 current_node = self._multi_graph.find_adjacent_node_by_edge(
1790 current_node._node_id, lambda x: wire == x
1791 )
1792 more_nodes = True
1793 except rx.NoSuitableNeighbors:
1794 pass
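
    # For example, to collect only the operations acting on the first qubit
    # (illustrative):
    #
    #     ops_on_q0 = list(dag.nodes_on_wire(dag.qubits[0], only_ops=True))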
1795
1796 def count_ops(self, *, recurse: bool = True):
1797 """Count the occurrences of operation names.
1798
1799 Args:
1800 recurse: if ``True`` (default), then recurse into control-flow operations. In all
1801 cases, this counts only the number of times the operation appears in any possible
1802 block; both branches of if-elses are counted, and for- and while-loop blocks are
1803 only counted once.
1804
1805 Returns:
1806             Mapping[str, int]: a mapping of operation names to the number of times each appears.
1807 """
1808 if not recurse:
1809 return self._op_names.copy()
1810
1811 # pylint: disable=cyclic-import
1812 from qiskit.converters import circuit_to_dag
1813
1814 def inner(dag, counts):
1815 for name, count in dag._op_names.items():
1816 counts[name] += count
1817 for node in dag.op_nodes(ControlFlowOp):
1818 for block in node.op.blocks:
1819 counts = inner(circuit_to_dag(block), counts)
1820 return counts
1821
1822 return dict(inner(self, defaultdict(int)))
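
    # For example, on the two-gate DAG used in the ``size``/``depth`` sketch above,
    # one would expect (illustrative):
    #
    #     dag.count_ops()  # -> {'h': 1, 'cx': 1}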
1823
1824 def count_ops_longest_path(self):
1825 """Count the occurrences of operation names on the longest path.
1826
1827 Returns a dictionary of counts keyed on the operation name.
1828 """
1829 op_dict = {}
1830 path = self.longest_path()
1831         path = path[1:-1]  # remove the in/out nodes at beginning and end of path
1832 for node in path:
1833 name = node.op.name
1834 if name not in op_dict:
1835 op_dict[name] = 1
1836 else:
1837 op_dict[name] += 1
1838 return op_dict
1839
1840 def properties(self):
1841 """Return a dictionary of circuit properties."""
1842 summary = {
1843 "size": self.size(),
1844 "depth": self.depth(),
1845 "width": self.width(),
1846 "qubits": self.num_qubits(),
1847 "bits": self.num_clbits(),
1848 "factors": self.num_tensor_factors(),
1849 "operations": self.count_ops(),
1850 }
1851 return summary
1852
1853 def draw(self, scale=0.7, filename=None, style="color"):
1854 """
1855 Draws the dag circuit.
1856
1857 This function needs `pydot <https://github.com/erocarrera/pydot>`_, which in turn needs
1858 `Graphviz <https://www.graphviz.org/>`_ to be installed.
1859
1860 Args:
1861 scale (float): scaling factor
1862 filename (str): file path to save image to (format inferred from name)
1863 style (str):
1864 'plain': B&W graph;
1865 'color' (default): color input/output/op nodes
1866
1867 Returns:
1868 Ipython.display.Image: if in Jupyter notebook and not saving to file,
1869 otherwise None.
1870 """
1871 from qiskit.visualization.dag_visualization import dag_drawer
1872
1873 return dag_drawer(dag=self, scale=scale, filename=filename, style=style)
1874
[end of qiskit/dagcircuit/dagcircuit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| Qiskit/qiskit | 3ab57152c1d7e0eb572eb298f6fa922299492586 | DAGCircuitError: 'bit mapping invalid
### Information
- **Qiskit: 0.39.2**:
- **Python: 3.10.9**:
- **Mac**:
### What is the current behavior?
I'm implementing a quantum half adder in a Jupyter Notebook.
When I try running my circuit on the "qasm_simulator" backend, Jupyter reports
DAGCircuitError: 'bit mapping invalid: expected 4, got 8'
Here is the code I've written; the error occurs on the last line of the third code block.
```
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, Aer
#SUM
X = QuantumRegister(1, "in |X⟩")
Y = QuantumRegister(1, "in |Y⟩")
sum_out = QuantumRegister(1, "out SUM |0⟩")
SUM = QuantumCircuit(X, Y, sum_out, name='SUM')
SUM.cx(1, 2)
SUM.cx(0, 2)
fig = SUM.draw('mpl', True)
SUM = SUM.to_instruction()
fig
```
```
#half_adder
cout = QuantumRegister(1, 'out Carry |0⟩')
c = ClassicalRegister(4)
hadder = QuantumCircuit(X,Y,sum_out,cout,c)
hadder.ccx(X,Y,cout)
hadder.append(SUM,[0,1,2])
show = hadder.draw("mpl",True)
hadder = hadder.to_instruction()
show
```
```
#testing half_adder
qu = QuantumRegister(4)
cl = ClassicalRegister(4)
circ = QuantumCircuit(qu,cl)
circ.x(qu[0])
circ.x(qu[1])
circ.append(hadder,[0,1,2,3])
for i in range(0,4):
circ.measure(qu[i],cl[i])
circ.draw("mpl",True)
print(execute(circ,Aer.get_backend('qasm_simulator'), shots = 1).result().get_counts())
```
### What is the expected behavior?
I don't fully understand the error; I'd like to troubleshoot it so I can see the result.
### Suggested solutions
| Your immediate problem is that the line
```python
circ.append(hadder, [0, 1, 2, 3])
```
doesn't include any classical arguments to apply `hadder` to, but it expects 4 (though they're not used). Perhaps you either meant not to have the `ClassicalRegister` `c` in `hadder`, or you meant to write the above line as
```python
circ.append(hadder, [0, 1, 2, 3], [0, 1, 2, 3])
```
On our side, the `append` call I pulled out should have raised an error. I'm not certain why it didn't, but it definitely looks like a bug that it didn't. | 2023-01-18T12:43:42Z | <patch>
diff --git a/qiskit/circuit/instruction.py b/qiskit/circuit/instruction.py
--- a/qiskit/circuit/instruction.py
+++ b/qiskit/circuit/instruction.py
@@ -481,6 +481,11 @@ def broadcast_arguments(self, qargs, cargs):
f"The amount of qubit arguments {len(qargs)} does not match"
f" the instruction expectation ({self.num_qubits})."
)
+ if len(cargs) != self.num_clbits:
+ raise CircuitError(
+ f"The amount of clbit arguments {len(cargs)} does not match"
+ f" the instruction expectation ({self.num_clbits})."
+ )
# [[q[0], q[1]], [c[0], c[1]]] -> [q[0], c[0]], [q[1], c[1]]
flat_qargs = [qarg for sublist in qargs for qarg in sublist]
</patch> | [] | [] | |||
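A minimal sketch, assuming Qiskit Terra with the patch above applied, of how the new clbit check in `broadcast_arguments` surfaces the original mistake at `append` time rather than as a `DAGCircuitError` deep in the transpiler (the register sizes mirror the issue; everything else is illustrative):
```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit.circuit.exceptions import CircuitError

# A 4-qubit / 4-clbit instruction, standing in for the user's `hadder`.
inner = QuantumCircuit(QuantumRegister(4), ClassicalRegister(4), name="hadder")
hadder = inner.to_instruction()

circ = QuantumCircuit(4, 4)
try:
    circ.append(hadder, [0, 1, 2, 3])            # no clbit arguments: now rejected here
except CircuitError as err:
    print(err)                                   # "...clbit arguments 0 does not match..."

circ.append(hadder, [0, 1, 2, 3], [0, 1, 2, 3])  # the corrected call succeeds
```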
docker__compose-3056 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pyinstaller has issues with signals
There's a bunch of history in #1040 and #2055.
We've tried multiple implementations of signal handlers; each has its own set of issues, but **ONLY** when run from the frozen binary created by pyinstaller.
It looks like there is a very old issue in pyinstaller around this: https://github.com/pyinstaller/pyinstaller/issues/208
These problems can manifest in three ways:
- a `thread.error` when a signal interrupts a thread lock
- the signal handlers being completely ignored and a `KeyboardInterrupt` being raised instead
- the signal handlers being registered but the try/except meant to handle the exception being skipped (this could be caused by the signal firing multiple times for a single `ctrl-c`, but I can't really verify that is what is happening); see the sketch below for a minimal way to probe these paths
</issue>
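The failure modes above are easiest to compare with a small standalone script. The sketch below is illustrative only (it is not Compose code): it registers the same kind of shutdown handler Compose uses and blocks on a queue the way `Multiplexer.loop()` does, so running it directly and then as a PyInstaller-frozen binary shows which `except` branch actually fires on Ctrl-C:
```python
# probe_signals.py -- hypothetical reproduction script, not part of Compose.
import signal
import sys

try:
    from queue import Queue, Empty   # Python 3
except ImportError:
    from Queue import Queue, Empty   # Python 2


class ShutdownException(Exception):
    pass


def shutdown(signum, frame):
    raise ShutdownException()


def main():
    signal.signal(signal.SIGINT, shutdown)
    signal.signal(signal.SIGTERM, shutdown)
    queue = Queue()
    try:
        while True:
            try:
                # Block with a timeout, as Multiplexer.loop() does.
                queue.get(timeout=0.1)
            except Empty:
                pass
    except ShutdownException:
        print("handler fired: clean shutdown path")
        sys.exit(0)
    except KeyboardInterrupt:
        # Reaching this branch means the registered handler was bypassed
        # (the second failure mode listed in the issue).
        print("handler ignored: default KeyboardInterrupt path")
        sys.exit(1)


if __name__ == '__main__':
    main()
```
Comparing the behaviour of `python probe_signals.py` with the same script bundled by `pyinstaller --onefile` is one way to narrow down which of the three behaviours a given build exhibits.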
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 ![Docker Compose](logo.png?raw=true "Docker Compose Logo")
4
5 Compose is a tool for defining and running multi-container Docker applications.
6 With Compose, you use a Compose file to configure your application's services.
7 Then, using a single command, you create and start all the services
8 from your configuration. To learn more about all the features of Compose
9 see [the list of features](https://github.com/docker/compose/blob/release/docs/overview.md#features).
10
11 Compose is great for development, testing, and staging environments, as well as
12 CI workflows. You can learn more about each case in
13 [Common Use Cases](https://github.com/docker/compose/blob/release/docs/overview.md#common-use-cases).
14
15 Using Compose is basically a three-step process.
16
17 1. Define your app's environment with a `Dockerfile` so it can be
18 reproduced anywhere.
19 2. Define the services that make up your app in `docker-compose.yml` so
20    they can be run together in an isolated environment.
21 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
22
23 A `docker-compose.yml` looks like this:
24
25 web:
26 build: .
27 ports:
28 - "5000:5000"
29 volumes:
30 - .:/code
31 links:
32 - redis
33 redis:
34 image: redis
35
36 For more information about the Compose file, see the
37 [Compose file reference](https://github.com/docker/compose/blob/release/docs/compose-file.md)
38
39 Compose has commands for managing the whole lifecycle of your application:
40
41 * Start, stop and rebuild services
42 * View the status of running services
43 * Stream the log output of running services
44 * Run a one-off command on a service
45
46 Installation and documentation
47 ------------------------------
48
49 - Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
50 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
51 - Code repository for Compose is on [Github](https://github.com/docker/compose)
52 - If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new)
53
54 Contributing
55 ------------
56
57 [![Build Status](http://jenkins.dockerproject.org/buildStatus/icon?job=Compose%20Master)](http://jenkins.dockerproject.org/job/Compose%20Master/)
58
59 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
60
61 Releasing
62 ---------
63
64 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md).
65
[end of README.md]
[start of compose/cli/docopt_command.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import sys
5 from inspect import getdoc
6
7 from docopt import docopt
8 from docopt import DocoptExit
9
10
11 def docopt_full_help(docstring, *args, **kwargs):
12 try:
13 return docopt(docstring, *args, **kwargs)
14 except DocoptExit:
15 raise SystemExit(docstring)
16
17
18 class DocoptCommand(object):
19 def docopt_options(self):
20 return {'options_first': True}
21
22 def sys_dispatch(self):
23 self.dispatch(sys.argv[1:], None)
24
25 def dispatch(self, argv, global_options):
26 self.perform_command(*self.parse(argv, global_options))
27
28 def parse(self, argv, global_options):
29 options = docopt_full_help(getdoc(self), argv, **self.docopt_options())
30 command = options['COMMAND']
31
32 if command is None:
33 raise SystemExit(getdoc(self))
34
35 handler = self.get_handler(command)
36 docstring = getdoc(handler)
37
38 if docstring is None:
39 raise NoSuchCommand(command, self)
40
41 command_options = docopt_full_help(docstring, options['ARGS'], options_first=True)
42 return options, handler, command_options
43
44 def get_handler(self, command):
45 command = command.replace('-', '_')
46 # we certainly want to have "exec" command, since that's what docker client has
47 # but in python exec is a keyword
48 if command == "exec":
49 command = "exec_command"
50
51 if not hasattr(self, command):
52 raise NoSuchCommand(command, self)
53
54 return getattr(self, command)
55
56
57 class NoSuchCommand(Exception):
58 def __init__(self, command, supercommand):
59 super(NoSuchCommand, self).__init__("No such command: %s" % command)
60
61 self.command = command
62 self.supercommand = supercommand
63
[end of compose/cli/docopt_command.py]
[start of compose/cli/main.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import contextlib
6 import json
7 import logging
8 import re
9 import sys
10 from inspect import getdoc
11 from operator import attrgetter
12
13 from docker.errors import APIError
14 from requests.exceptions import ReadTimeout
15
16 from . import signals
17 from .. import __version__
18 from ..config import config
19 from ..config import ConfigurationError
20 from ..config import parse_environment
21 from ..config.serialize import serialize_config
22 from ..const import API_VERSION_TO_ENGINE_VERSION
23 from ..const import DEFAULT_TIMEOUT
24 from ..const import HTTP_TIMEOUT
25 from ..const import IS_WINDOWS_PLATFORM
26 from ..progress_stream import StreamOutputError
27 from ..project import NoSuchService
28 from ..service import BuildError
29 from ..service import ConvergenceStrategy
30 from ..service import ImageType
31 from ..service import NeedsBuildError
32 from .command import friendly_error_message
33 from .command import get_config_path_from_options
34 from .command import project_from_options
35 from .docopt_command import DocoptCommand
36 from .docopt_command import NoSuchCommand
37 from .errors import UserError
38 from .formatter import ConsoleWarningFormatter
39 from .formatter import Formatter
40 from .log_printer import LogPrinter
41 from .utils import get_version_info
42 from .utils import yesno
43
44
45 if not IS_WINDOWS_PLATFORM:
46 from dockerpty.pty import PseudoTerminal, RunOperation, ExecOperation
47
48 log = logging.getLogger(__name__)
49 console_handler = logging.StreamHandler(sys.stderr)
50
51
52 def main():
53 setup_logging()
54 try:
55 command = TopLevelCommand()
56 command.sys_dispatch()
57 except KeyboardInterrupt:
58 log.error("Aborting.")
59 sys.exit(1)
60 except (UserError, NoSuchService, ConfigurationError) as e:
61 log.error(e.msg)
62 sys.exit(1)
63 except NoSuchCommand as e:
64 commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))
65 log.error("No such command: %s\n\n%s", e.command, commands)
66 sys.exit(1)
67 except APIError as e:
68 log_api_error(e)
69 sys.exit(1)
70 except BuildError as e:
71 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
72 sys.exit(1)
73 except StreamOutputError as e:
74 log.error(e)
75 sys.exit(1)
76 except NeedsBuildError as e:
77 log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name)
78 sys.exit(1)
79 except ReadTimeout as e:
80 log.error(
81 "An HTTP request took too long to complete. Retry with --verbose to "
82 "obtain debug information.\n"
83 "If you encounter this issue regularly because of slow network "
84 "conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher "
85 "value (current value: %s)." % HTTP_TIMEOUT
86 )
87 sys.exit(1)
88
89
90 def log_api_error(e):
91 if 'client is newer than server' in e.explanation:
92 # we need JSON formatted errors. In the meantime...
93 # TODO: fix this by refactoring project dispatch
94 # http://github.com/docker/compose/pull/2832#commitcomment-15923800
95 client_version = e.explanation.split('client API version: ')[1].split(',')[0]
96 log.error(
97 "The engine version is lesser than the minimum required by "
98 "compose. Your current project requires a Docker Engine of "
99 "version {version} or superior.".format(
100 version=API_VERSION_TO_ENGINE_VERSION[client_version]
101 ))
102 else:
103 log.error(e.explanation)
104
105
106 def setup_logging():
107 root_logger = logging.getLogger()
108 root_logger.addHandler(console_handler)
109 root_logger.setLevel(logging.DEBUG)
110
111 # Disable requests logging
112 logging.getLogger("requests").propagate = False
113
114
115 def setup_console_handler(handler, verbose):
116 if handler.stream.isatty():
117 format_class = ConsoleWarningFormatter
118 else:
119 format_class = logging.Formatter
120
121 if verbose:
122 handler.setFormatter(format_class('%(name)s.%(funcName)s: %(message)s'))
123 handler.setLevel(logging.DEBUG)
124 else:
125 handler.setFormatter(format_class())
126 handler.setLevel(logging.INFO)
127
128
129 # stolen from docopt master
130 def parse_doc_section(name, source):
131 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
132 re.IGNORECASE | re.MULTILINE)
133 return [s.strip() for s in pattern.findall(source)]
134
135
136 class TopLevelCommand(DocoptCommand):
137 """Define and run multi-container applications with Docker.
138
139 Usage:
140 docker-compose [-f=<arg>...] [options] [COMMAND] [ARGS...]
141 docker-compose -h|--help
142
143 Options:
144 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
145 -p, --project-name NAME Specify an alternate project name (default: directory name)
146 --verbose Show more output
147 -v, --version Print version and exit
148
149 Commands:
150 build Build or rebuild services
151 config Validate and view the compose file
152 create Create services
153 down Stop and remove containers, networks, images, and volumes
154 events Receive real time events from containers
155 exec Execute a command in a running container
156 help Get help on a command
157 kill Kill containers
158 logs View output from containers
159 pause Pause services
160 port Print the public port for a port binding
161 ps List containers
162 pull Pulls service images
163 restart Restart services
164 rm Remove stopped containers
165 run Run a one-off command
166 scale Set number of containers for a service
167 start Start services
168 stop Stop services
169 unpause Unpause services
170 up Create and start containers
171 version Show the Docker-Compose version information
172 """
173 base_dir = '.'
174
175 def docopt_options(self):
176 options = super(TopLevelCommand, self).docopt_options()
177 options['version'] = get_version_info('compose')
178 return options
179
180 def perform_command(self, options, handler, command_options):
181 setup_console_handler(console_handler, options.get('--verbose'))
182
183 if options['COMMAND'] in ('help', 'version'):
184 # Skip looking up the compose file.
185 handler(None, command_options)
186 return
187
188 if options['COMMAND'] == 'config':
189 handler(options, command_options)
190 return
191
192 project = project_from_options(self.base_dir, options)
193 with friendly_error_message():
194 handler(project, command_options)
195
196 def build(self, project, options):
197 """
198 Build or rebuild services.
199
200 Services are built once and then tagged as `project_service`,
201 e.g. `composetest_db`. If you change a service's `Dockerfile` or the
202 contents of its build directory, you can run `docker-compose build` to rebuild it.
203
204 Usage: build [options] [SERVICE...]
205
206 Options:
207 --force-rm Always remove intermediate containers.
208 --no-cache Do not use cache when building the image.
209 --pull Always attempt to pull a newer version of the image.
210 """
211 project.build(
212 service_names=options['SERVICE'],
213 no_cache=bool(options.get('--no-cache', False)),
214 pull=bool(options.get('--pull', False)),
215 force_rm=bool(options.get('--force-rm', False)))
216
217 def config(self, config_options, options):
218 """
219 Validate and view the compose file.
220
221 Usage: config [options]
222
223 Options:
224 -q, --quiet Only validate the configuration, don't print
225 anything.
226 --services Print the service names, one per line.
227
228 """
229 config_path = get_config_path_from_options(config_options)
230 compose_config = config.load(config.find(self.base_dir, config_path))
231
232 if options['--quiet']:
233 return
234
235 if options['--services']:
236 print('\n'.join(service['name'] for service in compose_config.services))
237 return
238
239 print(serialize_config(compose_config))
240
241 def create(self, project, options):
242 """
243 Creates containers for a service.
244
245 Usage: create [options] [SERVICE...]
246
247 Options:
248 --force-recreate Recreate containers even if their configuration and
249 image haven't changed. Incompatible with --no-recreate.
250 --no-recreate If containers already exist, don't recreate them.
251 Incompatible with --force-recreate.
252 --no-build Don't build an image, even if it's missing
253 """
254 service_names = options['SERVICE']
255
256 project.create(
257 service_names=service_names,
258 strategy=convergence_strategy_from_opts(options),
259 do_build=not options['--no-build']
260 )
261
262 def down(self, project, options):
263 """
264 Stop containers and remove containers, networks, volumes, and images
265 created by `up`. Only containers and networks are removed by default.
266
267 Usage: down [options]
268
269 Options:
270 --rmi type Remove images, type may be one of: 'all' to remove
271 all images, or 'local' to remove only images that
272                             don't have a custom name set by the `image` field
273 -v, --volumes Remove data volumes
274 """
275 image_type = image_type_from_opt('--rmi', options['--rmi'])
276 project.down(image_type, options['--volumes'])
277
278 def events(self, project, options):
279 """
280 Receive real time events from containers.
281
282 Usage: events [options] [SERVICE...]
283
284 Options:
285 --json Output events as a stream of json objects
286 """
287 def format_event(event):
288 attributes = ["%s=%s" % item for item in event['attributes'].items()]
289 return ("{time} {type} {action} {id} ({attrs})").format(
290 attrs=", ".join(sorted(attributes)),
291 **event)
292
293 def json_format_event(event):
294 event['time'] = event['time'].isoformat()
295 return json.dumps(event)
296
297 for event in project.events():
298 formatter = json_format_event if options['--json'] else format_event
299 print(formatter(event))
300 sys.stdout.flush()
301
302 def exec_command(self, project, options):
303 """
304 Execute a command in a running container
305
306 Usage: exec [options] SERVICE COMMAND [ARGS...]
307
308 Options:
309 -d Detached mode: Run command in the background.
310 --privileged Give extended privileges to the process.
311 --user USER Run the command as this user.
312 -T Disable pseudo-tty allocation. By default `docker-compose exec`
313 allocates a TTY.
314 --index=index index of the container if there are multiple
315 instances of a service [default: 1]
316 """
317 index = int(options.get('--index'))
318 service = project.get_service(options['SERVICE'])
319 try:
320 container = service.get_container(number=index)
321 except ValueError as e:
322 raise UserError(str(e))
323 command = [options['COMMAND']] + options['ARGS']
324 tty = not options["-T"]
325
326 create_exec_options = {
327 "privileged": options["--privileged"],
328 "user": options["--user"],
329 "tty": tty,
330 "stdin": tty,
331 }
332
333 exec_id = container.create_exec(command, **create_exec_options)
334
335 if options['-d']:
336 container.start_exec(exec_id, tty=tty)
337 return
338
339 signals.set_signal_handler_to_shutdown()
340 try:
341 operation = ExecOperation(
342 project.client,
343 exec_id,
344 interactive=tty,
345 )
346 pty = PseudoTerminal(project.client, operation)
347 pty.start()
348 except signals.ShutdownException:
349 log.info("received shutdown exception: closing")
350 exit_code = project.client.exec_inspect(exec_id).get("ExitCode")
351 sys.exit(exit_code)
352
353 def help(self, project, options):
354 """
355 Get help on a command.
356
357 Usage: help COMMAND
358 """
359 handler = self.get_handler(options['COMMAND'])
360 raise SystemExit(getdoc(handler))
361
362 def kill(self, project, options):
363 """
364 Force stop service containers.
365
366 Usage: kill [options] [SERVICE...]
367
368 Options:
369 -s SIGNAL SIGNAL to send to the container.
370 Default signal is SIGKILL.
371 """
372 signal = options.get('-s', 'SIGKILL')
373
374 project.kill(service_names=options['SERVICE'], signal=signal)
375
376 def logs(self, project, options):
377 """
378 View output from containers.
379
380 Usage: logs [options] [SERVICE...]
381
382 Options:
383 --no-color Produce monochrome output.
384 """
385 containers = project.containers(service_names=options['SERVICE'], stopped=True)
386
387 monochrome = options['--no-color']
388 print("Attaching to", list_containers(containers))
389 LogPrinter(containers, monochrome=monochrome).run()
390
391 def pause(self, project, options):
392 """
393 Pause services.
394
395 Usage: pause [SERVICE...]
396 """
397 containers = project.pause(service_names=options['SERVICE'])
398 exit_if(not containers, 'No containers to pause', 1)
399
400 def port(self, project, options):
401 """
402 Print the public port for a port binding.
403
404 Usage: port [options] SERVICE PRIVATE_PORT
405
406 Options:
407 --protocol=proto tcp or udp [default: tcp]
408 --index=index index of the container if there are multiple
409 instances of a service [default: 1]
410 """
411 index = int(options.get('--index'))
412 service = project.get_service(options['SERVICE'])
413 try:
414 container = service.get_container(number=index)
415 except ValueError as e:
416 raise UserError(str(e))
417 print(container.get_local_port(
418 options['PRIVATE_PORT'],
419 protocol=options.get('--protocol') or 'tcp') or '')
420
421 def ps(self, project, options):
422 """
423 List containers.
424
425 Usage: ps [options] [SERVICE...]
426
427 Options:
428 -q Only display IDs
429 """
430 containers = sorted(
431 project.containers(service_names=options['SERVICE'], stopped=True) +
432 project.containers(service_names=options['SERVICE'], one_off=True),
433 key=attrgetter('name'))
434
435 if options['-q']:
436 for container in containers:
437 print(container.id)
438 else:
439 headers = [
440 'Name',
441 'Command',
442 'State',
443 'Ports',
444 ]
445 rows = []
446 for container in containers:
447 command = container.human_readable_command
448 if len(command) > 30:
449 command = '%s ...' % command[:26]
450 rows.append([
451 container.name,
452 command,
453 container.human_readable_state,
454 container.human_readable_ports,
455 ])
456 print(Formatter().table(headers, rows))
457
458 def pull(self, project, options):
459 """
460 Pulls images for services.
461
462 Usage: pull [options] [SERVICE...]
463
464 Options:
465 --ignore-pull-failures Pull what it can and ignores images with pull failures.
466 """
467 project.pull(
468 service_names=options['SERVICE'],
469 ignore_pull_failures=options.get('--ignore-pull-failures')
470 )
471
472 def rm(self, project, options):
473 """
474 Remove stopped service containers.
475
476 By default, volumes attached to containers will not be removed. You can see all
477 volumes with `docker volume ls`.
478
479 Any data which is not in a volume will be lost.
480
481 Usage: rm [options] [SERVICE...]
482
483 Options:
484 -f, --force Don't ask to confirm removal
485 -v Remove volumes associated with containers
486 """
487 all_containers = project.containers(service_names=options['SERVICE'], stopped=True)
488 stopped_containers = [c for c in all_containers if not c.is_running]
489
490 if len(stopped_containers) > 0:
491 print("Going to remove", list_containers(stopped_containers))
492 if options.get('--force') \
493 or yesno("Are you sure? [yN] ", default=False):
494 project.remove_stopped(
495 service_names=options['SERVICE'],
496 v=options.get('-v', False)
497 )
498 else:
499 print("No stopped containers")
500
501 def run(self, project, options):
502 """
503 Run a one-off command on a service.
504
505 For example:
506
507 $ docker-compose run web python manage.py shell
508
509 By default, linked services will be started, unless they are already
510 running. If you do not want to start linked services, use
511 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`.
512
513 Usage: run [options] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
514
515 Options:
516 -d Detached mode: Run container in the background, print
517 new container name.
518 --name NAME Assign a name to the container
519 --entrypoint CMD Override the entrypoint of the image.
520 -e KEY=VAL Set an environment variable (can be used multiple times)
521 -u, --user="" Run as specified username or uid
522 --no-deps Don't start linked services.
523 --rm Remove container after run. Ignored in detached mode.
524 -p, --publish=[] Publish a container's port(s) to the host
525 --service-ports Run command with the service's ports enabled and mapped
526 to the host.
527 -T Disable pseudo-tty allocation. By default `docker-compose run`
528 allocates a TTY.
529 """
530 service = project.get_service(options['SERVICE'])
531 detach = options['-d']
532
533 if IS_WINDOWS_PLATFORM and not detach:
534 raise UserError(
535 "Interactive mode is not yet supported on Windows.\n"
536 "Please pass the -d flag when using `docker-compose run`."
537 )
538
539 if options['COMMAND']:
540 command = [options['COMMAND']] + options['ARGS']
541 else:
542 command = service.options.get('command')
543
544 container_options = {
545 'command': command,
546 'tty': not (detach or options['-T'] or not sys.stdin.isatty()),
547 'stdin_open': not detach,
548 'detach': detach,
549 }
550
551 if options['-e']:
552 container_options['environment'] = parse_environment(options['-e'])
553
554 if options['--entrypoint']:
555 container_options['entrypoint'] = options.get('--entrypoint')
556
557 if options['--rm']:
558 container_options['restart'] = None
559
560 if options['--user']:
561 container_options['user'] = options.get('--user')
562
563 if not options['--service-ports']:
564 container_options['ports'] = []
565
566 if options['--publish']:
567 container_options['ports'] = options.get('--publish')
568
569 if options['--publish'] and options['--service-ports']:
570 raise UserError(
571 'Service port mapping and manual port mapping '
572                 'cannot be used together'
573 )
574
575 if options['--name']:
576 container_options['name'] = options['--name']
577
578 run_one_off_container(container_options, project, service, options)
579
580 def scale(self, project, options):
581 """
582 Set number of containers to run for a service.
583
584 Numbers are specified in the form `service=num` as arguments.
585 For example:
586
587 $ docker-compose scale web=2 worker=3
588
589 Usage: scale [options] [SERVICE=NUM...]
590
591 Options:
592 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
593 (default: 10)
594 """
595 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
596
597 for s in options['SERVICE=NUM']:
598 if '=' not in s:
599 raise UserError('Arguments to scale should be in the form service=num')
600 service_name, num = s.split('=', 1)
601 try:
602 num = int(num)
603 except ValueError:
604 raise UserError('Number of containers for service "%s" is not a '
605 'number' % service_name)
606 project.get_service(service_name).scale(num, timeout=timeout)
607
608 def start(self, project, options):
609 """
610 Start existing containers.
611
612 Usage: start [SERVICE...]
613 """
614 containers = project.start(service_names=options['SERVICE'])
615 exit_if(not containers, 'No containers to start', 1)
616
617 def stop(self, project, options):
618 """
619 Stop running containers without removing them.
620
621 They can be started again with `docker-compose start`.
622
623 Usage: stop [options] [SERVICE...]
624
625 Options:
626 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
627 (default: 10)
628 """
629 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
630 project.stop(service_names=options['SERVICE'], timeout=timeout)
631
632 def restart(self, project, options):
633 """
634 Restart running containers.
635
636 Usage: restart [options] [SERVICE...]
637
638 Options:
639 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
640 (default: 10)
641 """
642 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
643 containers = project.restart(service_names=options['SERVICE'], timeout=timeout)
644 exit_if(not containers, 'No containers to restart', 1)
645
646 def unpause(self, project, options):
647 """
648 Unpause services.
649
650 Usage: unpause [SERVICE...]
651 """
652 containers = project.unpause(service_names=options['SERVICE'])
653 exit_if(not containers, 'No containers to unpause', 1)
654
655 def up(self, project, options):
656 """
657 Builds, (re)creates, starts, and attaches to containers for a service.
658
659 Unless they are already running, this command also starts any linked services.
660
661 The `docker-compose up` command aggregates the output of each container. When
662 the command exits, all containers are stopped. Running `docker-compose up -d`
663 starts the containers in the background and leaves them running.
664
665 If there are existing containers for a service, and the service's configuration
666 or image was changed after the container's creation, `docker-compose up` picks
667 up the changes by stopping and recreating the containers (preserving mounted
668 volumes). To prevent Compose from picking up changes, use the `--no-recreate`
669 flag.
670
671 If you want to force Compose to stop and recreate all containers, use the
672 `--force-recreate` flag.
673
674 Usage: up [options] [SERVICE...]
675
676 Options:
677 -d Detached mode: Run containers in the background,
678 print new container names.
679 Incompatible with --abort-on-container-exit.
680 --no-color Produce monochrome output.
681 --no-deps Don't start linked services.
682 --force-recreate Recreate containers even if their configuration
683 and image haven't changed.
684 Incompatible with --no-recreate.
685 --no-recreate If containers already exist, don't recreate them.
686 Incompatible with --force-recreate.
687 --no-build Don't build an image, even if it's missing
688 --abort-on-container-exit Stops all containers if any container was stopped.
689 Incompatible with -d.
690 -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown
691 when attached or when containers are already
692 running. (default: 10)
693 """
694 monochrome = options['--no-color']
695 start_deps = not options['--no-deps']
696 cascade_stop = options['--abort-on-container-exit']
697 service_names = options['SERVICE']
698 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
699 detached = options.get('-d')
700
701 if detached and cascade_stop:
702 raise UserError("--abort-on-container-exit and -d cannot be combined.")
703
704 with up_shutdown_context(project, service_names, timeout, detached):
705 to_attach = project.up(
706 service_names=service_names,
707 start_deps=start_deps,
708 strategy=convergence_strategy_from_opts(options),
709 do_build=not options['--no-build'],
710 timeout=timeout,
711 detached=detached)
712
713 if detached:
714 return
715 log_printer = build_log_printer(to_attach, service_names, monochrome, cascade_stop)
716 print("Attaching to", list_containers(log_printer.containers))
717 log_printer.run()
718
719 if cascade_stop:
720 print("Aborting on container exit...")
721 project.stop(service_names=service_names, timeout=timeout)
722
723 def version(self, project, options):
724 """
725         Show version information
726
727 Usage: version [--short]
728
729 Options:
730 --short Shows only Compose's version number.
731 """
732 if options['--short']:
733 print(__version__)
734 else:
735 print(get_version_info('full'))
736
737
738 def convergence_strategy_from_opts(options):
739 no_recreate = options['--no-recreate']
740 force_recreate = options['--force-recreate']
741 if force_recreate and no_recreate:
742 raise UserError("--force-recreate and --no-recreate cannot be combined.")
743
744 if force_recreate:
745 return ConvergenceStrategy.always
746
747 if no_recreate:
748 return ConvergenceStrategy.never
749
750 return ConvergenceStrategy.changed
751
752
753 def image_type_from_opt(flag, value):
754 if not value:
755 return ImageType.none
756 try:
757 return ImageType[value]
758 except KeyError:
759 raise UserError("%s flag must be one of: all, local" % flag)
760
761
762 def run_one_off_container(container_options, project, service, options):
763 if not options['--no-deps']:
764 deps = service.get_dependency_names()
765 if deps:
766 project.up(
767 service_names=deps,
768 start_deps=True,
769 strategy=ConvergenceStrategy.never)
770
771 project.initialize()
772
773 container = service.create_container(
774 quiet=True,
775 one_off=True,
776 **container_options)
777
778 if options['-d']:
779 service.start_container(container)
780 print(container.name)
781 return
782
783 def remove_container(force=False):
784 if options['--rm']:
785 project.client.remove_container(container.id, force=True)
786
787 signals.set_signal_handler_to_shutdown()
788 try:
789 try:
790 operation = RunOperation(
791 project.client,
792 container.id,
793 interactive=not options['-T'],
794 logs=False,
795 )
796 pty = PseudoTerminal(project.client, operation)
797 sockets = pty.sockets()
798 service.start_container(container)
799 pty.start(sockets)
800 exit_code = container.wait()
801 except signals.ShutdownException:
802 project.client.stop(container.id)
803 exit_code = 1
804 except signals.ShutdownException:
805 project.client.kill(container.id)
806 remove_container(force=True)
807 sys.exit(2)
808
809 remove_container()
810 sys.exit(exit_code)
811
812
813 def build_log_printer(containers, service_names, monochrome, cascade_stop):
814 if service_names:
815 containers = [
816 container
817 for container in containers if container.service in service_names
818 ]
819 return LogPrinter(containers, monochrome=monochrome, cascade_stop=cascade_stop)
820
821
822 @contextlib.contextmanager
823 def up_shutdown_context(project, service_names, timeout, detached):
824 if detached:
825 yield
826 return
827
828 signals.set_signal_handler_to_shutdown()
829 try:
830 try:
831 yield
832 except signals.ShutdownException:
833 print("Gracefully stopping... (press Ctrl+C again to force)")
834 project.stop(service_names=service_names, timeout=timeout)
835 except signals.ShutdownException:
836 project.kill(service_names=service_names)
837 sys.exit(2)
838
839
840 def list_containers(containers):
841 return ", ".join(c.name for c in containers)
842
843
844 def exit_if(condition, message, exit_code):
845 if condition:
846 log.error(message)
847 raise SystemExit(exit_code)
848
[end of compose/cli/main.py]
[start of compose/cli/multiplexer.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 from threading import Thread
5
6 from six.moves import _thread as thread
7
8 try:
9 from Queue import Queue, Empty
10 except ImportError:
11 from queue import Queue, Empty # Python 3.x
12
13
14 STOP = object()
15
16
17 class Multiplexer(object):
18 """
19 Create a single iterator from several iterators by running all of them in
20 parallel and yielding results as they come in.
21 """
22
23 def __init__(self, iterators, cascade_stop=False):
24 self.iterators = iterators
25 self.cascade_stop = cascade_stop
26 self._num_running = len(iterators)
27 self.queue = Queue()
28
29 def loop(self):
30 self._init_readers()
31
32 while self._num_running > 0:
33 try:
34 item, exception = self.queue.get(timeout=0.1)
35
36 if exception:
37 raise exception
38
39 if item is STOP:
40 if self.cascade_stop is True:
41 break
42 else:
43 self._num_running -= 1
44 else:
45 yield item
46 except Empty:
47 pass
48 # See https://github.com/docker/compose/issues/189
49 except thread.error:
50 raise KeyboardInterrupt()
51
52 def _init_readers(self):
53 for iterator in self.iterators:
54 t = Thread(target=_enqueue_output, args=(iterator, self.queue))
55 t.daemon = True
56 t.start()
57
58
59 def _enqueue_output(iterator, queue):
60 try:
61 for item in iterator:
62 queue.put((item, None))
63 queue.put((STOP, None))
64 except Exception as e:
65 queue.put((None, e))
66
[end of compose/cli/multiplexer.py]
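For context, a hypothetical usage sketch (not taken from the repository) of how `Multiplexer` merges several generators into a single stream; the `thread.error` to `KeyboardInterrupt` translation in `loop()` above is one of the code paths the signal issue touches:
```python
# Hypothetical example: interleave output from two slow generators.
import time

from compose.cli.multiplexer import Multiplexer


def ticker(name, delay, count):
    for i in range(count):
        time.sleep(delay)
        yield '%s %d' % (name, i)


mux = Multiplexer([ticker('fast', 0.1, 5), ticker('slow', 0.3, 2)])
for line in mux.loop():
    print(line)
```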
[start of compose/cli/signals.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import signal
5
6
7 class ShutdownException(Exception):
8 pass
9
10
11 def shutdown(signal, frame):
12 raise ShutdownException()
13
14
15 def set_signal_handler(handler):
16 signal.signal(signal.SIGINT, handler)
17 signal.signal(signal.SIGTERM, handler)
18
19
20 def set_signal_handler_to_shutdown():
21 set_signal_handler(shutdown)
22
[end of compose/cli/signals.py]
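A minimal sketch (illustrative only, not repository code) of the two-stage Ctrl-C pattern these helpers support, mirroring `up_shutdown_context()` in `main.py` above: the first signal triggers a graceful stop, and a second signal during that stop forces a kill:
```python
import sys
import time

from compose.cli import signals


def graceful_stop():
    print("stopping containers...")
    time.sleep(5)                    # stand-in for project.stop()


def force_kill():
    print("killing containers")      # stand-in for project.kill()


signals.set_signal_handler_to_shutdown()
try:
    try:
        while True:
            time.sleep(0.1)          # stand-in for attaching to container logs
    except signals.ShutdownException:
        print("Gracefully stopping... (press Ctrl+C again to force)")
        graceful_stop()
except signals.ShutdownException:
    force_kill()
    sys.exit(2)
```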
[start of compose/config/config.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import codecs
5 import functools
6 import logging
7 import operator
8 import os
9 import string
10 import sys
11 from collections import namedtuple
12
13 import six
14 import yaml
15 from cached_property import cached_property
16
17 from ..const import COMPOSEFILE_V1 as V1
18 from ..const import COMPOSEFILE_V2_0 as V2_0
19 from ..utils import build_string_dict
20 from .errors import CircularReference
21 from .errors import ComposeFileNotFound
22 from .errors import ConfigurationError
23 from .errors import VERSION_EXPLANATION
24 from .interpolation import interpolate_environment_variables
25 from .sort_services import get_container_name_from_network_mode
26 from .sort_services import get_service_name_from_network_mode
27 from .sort_services import sort_service_dicts
28 from .types import parse_extra_hosts
29 from .types import parse_restart_spec
30 from .types import ServiceLink
31 from .types import VolumeFromSpec
32 from .types import VolumeSpec
33 from .validation import match_named_volumes
34 from .validation import validate_against_config_schema
35 from .validation import validate_config_section
36 from .validation import validate_depends_on
37 from .validation import validate_extends_file_path
38 from .validation import validate_network_mode
39 from .validation import validate_service_constraints
40 from .validation import validate_top_level_object
41 from .validation import validate_ulimits
42
43
44 DOCKER_CONFIG_KEYS = [
45 'cap_add',
46 'cap_drop',
47 'cgroup_parent',
48 'command',
49 'cpu_quota',
50 'cpu_shares',
51 'cpuset',
52 'detach',
53 'devices',
54 'dns',
55 'dns_search',
56 'domainname',
57 'entrypoint',
58 'env_file',
59 'environment',
60 'extra_hosts',
61 'hostname',
62 'image',
63 'ipc',
64 'labels',
65 'links',
66 'mac_address',
67 'mem_limit',
68 'memswap_limit',
69 'net',
70 'pid',
71 'ports',
72 'privileged',
73 'read_only',
74 'restart',
75 'security_opt',
76 'shm_size',
77 'stdin_open',
78 'stop_signal',
79 'tty',
80 'user',
81 'volume_driver',
82 'volumes',
83 'volumes_from',
84 'working_dir',
85 ]
86
87 ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
88 'build',
89 'container_name',
90 'dockerfile',
91 'logging',
92 'network_mode',
93 ]
94
95 DOCKER_VALID_URL_PREFIXES = (
96 'http://',
97 'https://',
98 'git://',
99 'github.com/',
100 'git@',
101 )
102
103 SUPPORTED_FILENAMES = [
104 'docker-compose.yml',
105 'docker-compose.yaml',
106 ]
107
108 DEFAULT_OVERRIDE_FILENAME = 'docker-compose.override.yml'
109
110
111 log = logging.getLogger(__name__)
112
113
114 class ConfigDetails(namedtuple('_ConfigDetails', 'working_dir config_files')):
115 """
116 :param working_dir: the directory to use for relative paths in the config
117 :type working_dir: string
118 :param config_files: list of configuration files to load
119 :type config_files: list of :class:`ConfigFile`
120 """
121
122
123 class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
124 """
125 :param filename: filename of the config file
126 :type filename: string
127 :param config: contents of the config file
128 :type config: :class:`dict`
129 """
130
131 @classmethod
132 def from_filename(cls, filename):
133 return cls(filename, load_yaml(filename))
134
135 @cached_property
136 def version(self):
137 if 'version' not in self.config:
138 return V1
139
140 version = self.config['version']
141
142 if isinstance(version, dict):
143 log.warn('Unexpected type for "version" key in "{}". Assuming '
144 '"version" is the name of a service, and defaulting to '
145 'Compose file version 1.'.format(self.filename))
146 return V1
147
148 if not isinstance(version, six.string_types):
149 raise ConfigurationError(
150 'Version in "{}" is invalid - it should be a string.'
151 .format(self.filename))
152
153 if version == '1':
154 raise ConfigurationError(
155 'Version in "{}" is invalid. {}'
156 .format(self.filename, VERSION_EXPLANATION))
157
158 if version == '2':
159 version = V2_0
160
161 if version != V2_0:
162 raise ConfigurationError(
163 'Version in "{}" is unsupported. {}'
164 .format(self.filename, VERSION_EXPLANATION))
165
166 return version
167
168 def get_service(self, name):
169 return self.get_service_dicts()[name]
170
171 def get_service_dicts(self):
172 return self.config if self.version == V1 else self.config.get('services', {})
173
174 def get_volumes(self):
175 return {} if self.version == V1 else self.config.get('volumes', {})
176
177 def get_networks(self):
178 return {} if self.version == V1 else self.config.get('networks', {})
179
180
181 class Config(namedtuple('_Config', 'version services volumes networks')):
182 """
183 :param version: configuration version
184 :type version: int
185 :param services: List of service description dictionaries
186 :type services: :class:`list`
187 :param volumes: Dictionary mapping volume names to description dictionaries
188 :type volumes: :class:`dict`
189 :param networks: Dictionary mapping network names to description dictionaries
190 :type networks: :class:`dict`
191 """
192
193
194 class ServiceConfig(namedtuple('_ServiceConfig', 'working_dir filename name config')):
195
196 @classmethod
197 def with_abs_paths(cls, working_dir, filename, name, config):
198 if not working_dir:
199 raise ValueError("No working_dir for ServiceConfig.")
200
201 return cls(
202 os.path.abspath(working_dir),
203 os.path.abspath(filename) if filename else filename,
204 name,
205 config)
206
207
208 def find(base_dir, filenames):
209 if filenames == ['-']:
210 return ConfigDetails(
211 os.getcwd(),
212 [ConfigFile(None, yaml.safe_load(sys.stdin))])
213
214 if filenames:
215 filenames = [os.path.join(base_dir, f) for f in filenames]
216 else:
217 filenames = get_default_config_files(base_dir)
218
219 log.debug("Using configuration files: {}".format(",".join(filenames)))
220 return ConfigDetails(
221 os.path.dirname(filenames[0]),
222 [ConfigFile.from_filename(f) for f in filenames])
223
224
225 def validate_config_version(config_files):
226 main_file = config_files[0]
227 validate_top_level_object(main_file)
228 for next_file in config_files[1:]:
229 validate_top_level_object(next_file)
230
231 if main_file.version != next_file.version:
232 raise ConfigurationError(
233 "Version mismatch: file {0} specifies version {1} but "
234 "extension file {2} uses version {3}".format(
235 main_file.filename,
236 main_file.version,
237 next_file.filename,
238 next_file.version))
239
240
241 def get_default_config_files(base_dir):
242 (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir)
243
244 if not candidates:
245 raise ComposeFileNotFound(SUPPORTED_FILENAMES)
246
247 winner = candidates[0]
248
249 if len(candidates) > 1:
250 log.warn("Found multiple config files with supported names: %s", ", ".join(candidates))
251 log.warn("Using %s\n", winner)
252
253 return [os.path.join(path, winner)] + get_default_override_file(path)
254
255
256 def get_default_override_file(path):
257 override_filename = os.path.join(path, DEFAULT_OVERRIDE_FILENAME)
258 return [override_filename] if os.path.exists(override_filename) else []
259
260
261 def find_candidates_in_parent_dirs(filenames, path):
262 """
263 Given a directory path to start, looks for filenames in the
264 directory, and then each parent directory successively,
265 until found.
266
267 Returns tuple (candidates, path).
268 """
269 candidates = [filename for filename in filenames
270 if os.path.exists(os.path.join(path, filename))]
271
272 if not candidates:
273 parent_dir = os.path.join(path, '..')
274 if os.path.abspath(parent_dir) != os.path.abspath(path):
275 return find_candidates_in_parent_dirs(filenames, parent_dir)
276
277 return (candidates, path)
278
279
280 def load(config_details):
281 """Load the configuration from a working directory and a list of
282 configuration files. Files are loaded in order, and merged on top
283 of each other to create the final configuration.
284
285 Return a fully interpolated, extended and validated configuration.
286 """
287 validate_config_version(config_details.config_files)
288
289 processed_files = [
290 process_config_file(config_file)
291 for config_file in config_details.config_files
292 ]
293 config_details = config_details._replace(config_files=processed_files)
294
295 main_file = config_details.config_files[0]
296 volumes = load_mapping(
297 config_details.config_files, 'get_volumes', 'Volume'
298 )
299 networks = load_mapping(
300 config_details.config_files, 'get_networks', 'Network'
301 )
302 service_dicts = load_services(
303 config_details.working_dir,
304 main_file,
305 [file.get_service_dicts() for file in config_details.config_files])
306
307 if main_file.version != V1:
308 for service_dict in service_dicts:
309 match_named_volumes(service_dict, volumes)
310
311 return Config(main_file.version, service_dicts, volumes, networks)
312
313
314 def load_mapping(config_files, get_func, entity_type):
315 mapping = {}
316
317 for config_file in config_files:
318 for name, config in getattr(config_file, get_func)().items():
319 mapping[name] = config or {}
320 if not config:
321 continue
322
323 external = config.get('external')
324 if external:
325 if len(config.keys()) > 1:
326 raise ConfigurationError(
327 '{} {} declared as external but specifies'
328 ' additional attributes ({}). '.format(
329 entity_type,
330 name,
331 ', '.join([k for k in config.keys() if k != 'external'])
332 )
333 )
334 if isinstance(external, dict):
335 config['external_name'] = external.get('name')
336 else:
337 config['external_name'] = name
338
339 mapping[name] = config
340
341 if 'driver_opts' in config:
342 config['driver_opts'] = build_string_dict(
343 config['driver_opts']
344 )
345
346 return mapping
347
348
349 def load_services(working_dir, config_file, service_configs):
350 def build_service(service_name, service_dict, service_names):
351 service_config = ServiceConfig.with_abs_paths(
352 working_dir,
353 config_file.filename,
354 service_name,
355 service_dict)
356 resolver = ServiceExtendsResolver(service_config, config_file)
357 service_dict = process_service(resolver.run())
358
359 service_config = service_config._replace(config=service_dict)
360 validate_service(service_config, service_names, config_file.version)
361 service_dict = finalize_service(
362 service_config,
363 service_names,
364 config_file.version)
365 return service_dict
366
367 def build_services(service_config):
368 service_names = service_config.keys()
369 return sort_service_dicts([
370 build_service(name, service_dict, service_names)
371 for name, service_dict in service_config.items()
372 ])
373
374 def merge_services(base, override):
375 all_service_names = set(base) | set(override)
376 return {
377 name: merge_service_dicts_from_files(
378 base.get(name, {}),
379 override.get(name, {}),
380 config_file.version)
381 for name in all_service_names
382 }
383
384 service_config = service_configs[0]
385 for next_config in service_configs[1:]:
386 service_config = merge_services(service_config, next_config)
387
388 return build_services(service_config)
389
390
391 def interpolate_config_section(filename, config, section):
392 validate_config_section(filename, config, section)
393 return interpolate_environment_variables(config, section)
394
395
396 def process_config_file(config_file, service_name=None):
397 services = interpolate_config_section(
398 config_file.filename,
399 config_file.get_service_dicts(),
400 'service')
401
402 if config_file.version == V2_0:
403 processed_config = dict(config_file.config)
404 processed_config['services'] = services
405 processed_config['volumes'] = interpolate_config_section(
406 config_file.filename,
407 config_file.get_volumes(),
408 'volume')
409 processed_config['networks'] = interpolate_config_section(
410 config_file.filename,
411 config_file.get_networks(),
412 'network')
413
414 if config_file.version == V1:
415 processed_config = services
416
417 config_file = config_file._replace(config=processed_config)
418 validate_against_config_schema(config_file)
419
420 if service_name and service_name not in services:
421 raise ConfigurationError(
422 "Cannot extend service '{}' in {}: Service not found".format(
423 service_name, config_file.filename))
424
425 return config_file
426
427
428 class ServiceExtendsResolver(object):
429 def __init__(self, service_config, config_file, already_seen=None):
430 self.service_config = service_config
431 self.working_dir = service_config.working_dir
432 self.already_seen = already_seen or []
433 self.config_file = config_file
434
435 @property
436 def signature(self):
437 return self.service_config.filename, self.service_config.name
438
439 def detect_cycle(self):
440 if self.signature in self.already_seen:
441 raise CircularReference(self.already_seen + [self.signature])
442
443 def run(self):
444 self.detect_cycle()
445
446 if 'extends' in self.service_config.config:
447 service_dict = self.resolve_extends(*self.validate_and_construct_extends())
448 return self.service_config._replace(config=service_dict)
449
450 return self.service_config
451
452 def validate_and_construct_extends(self):
453 extends = self.service_config.config['extends']
454 if not isinstance(extends, dict):
455 extends = {'service': extends}
456
457 config_path = self.get_extended_config_path(extends)
458 service_name = extends['service']
459
460 extends_file = ConfigFile.from_filename(config_path)
461 validate_config_version([self.config_file, extends_file])
462 extended_file = process_config_file(
463 extends_file,
464 service_name=service_name)
465 service_config = extended_file.get_service(service_name)
466
467 return config_path, service_config, service_name
468
469 def resolve_extends(self, extended_config_path, service_dict, service_name):
470 resolver = ServiceExtendsResolver(
471 ServiceConfig.with_abs_paths(
472 os.path.dirname(extended_config_path),
473 extended_config_path,
474 service_name,
475 service_dict),
476 self.config_file,
477 already_seen=self.already_seen + [self.signature])
478
479 service_config = resolver.run()
480 other_service_dict = process_service(service_config)
481 validate_extended_service_dict(
482 other_service_dict,
483 extended_config_path,
484 service_name)
485
486 return merge_service_dicts(
487 other_service_dict,
488 self.service_config.config,
489 self.config_file.version)
490
491 def get_extended_config_path(self, extends_options):
492 """Service we are extending either has a value for 'file' set, which we
493         need to obtain a full path to, or we are extending from a service
494 defined in our own file.
495 """
496 filename = self.service_config.filename
497 validate_extends_file_path(
498 self.service_config.name,
499 extends_options,
500 filename)
501 if 'file' in extends_options:
502 return expand_path(self.working_dir, extends_options['file'])
503 return filename
504
505
506 def resolve_environment(service_dict):
507 """Unpack any environment variables from an env_file, if set.
508 Interpolate environment values if set.
509 """
510 env = {}
511 for env_file in service_dict.get('env_file', []):
512 env.update(env_vars_from_file(env_file))
513
514 env.update(parse_environment(service_dict.get('environment')))
515 return dict(resolve_env_var(k, v) for k, v in six.iteritems(env))
516
517
518 def resolve_build_args(build):
519 args = parse_build_arguments(build.get('args'))
520 return dict(resolve_env_var(k, v) for k, v in six.iteritems(args))
521
522
523 def validate_extended_service_dict(service_dict, filename, service):
524 error_prefix = "Cannot extend service '%s' in %s:" % (service, filename)
525
526 if 'links' in service_dict:
527 raise ConfigurationError(
528 "%s services with 'links' cannot be extended" % error_prefix)
529
530 if 'volumes_from' in service_dict:
531 raise ConfigurationError(
532 "%s services with 'volumes_from' cannot be extended" % error_prefix)
533
534 if 'net' in service_dict:
535 if get_container_name_from_network_mode(service_dict['net']):
536 raise ConfigurationError(
537 "%s services with 'net: container' cannot be extended" % error_prefix)
538
539 if 'network_mode' in service_dict:
540 if get_service_name_from_network_mode(service_dict['network_mode']):
541 raise ConfigurationError(
542 "%s services with 'network_mode: service' cannot be extended" % error_prefix)
543
544 if 'depends_on' in service_dict:
545 raise ConfigurationError(
546 "%s services with 'depends_on' cannot be extended" % error_prefix)
547
548
549 def validate_service(service_config, service_names, version):
550 service_dict, service_name = service_config.config, service_config.name
551 validate_service_constraints(service_dict, service_name, version)
552 validate_paths(service_dict)
553
554 validate_ulimits(service_config)
555 validate_network_mode(service_config, service_names)
556 validate_depends_on(service_config, service_names)
557
558 if not service_dict.get('image') and has_uppercase(service_name):
559 raise ConfigurationError(
560 "Service '{name}' contains uppercase characters which are not valid "
561 "as part of an image name. Either use a lowercase service name or "
562 "use the `image` field to set a custom name for the service image."
563 .format(name=service_name))
564
565
566 def process_service(service_config):
567 working_dir = service_config.working_dir
568 service_dict = dict(service_config.config)
569
570 if 'env_file' in service_dict:
571 service_dict['env_file'] = [
572 expand_path(working_dir, path)
573 for path in to_list(service_dict['env_file'])
574 ]
575
576 if 'build' in service_dict:
577 if isinstance(service_dict['build'], six.string_types):
578 service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])
579 elif isinstance(service_dict['build'], dict) and 'context' in service_dict['build']:
580 path = service_dict['build']['context']
581 service_dict['build']['context'] = resolve_build_path(working_dir, path)
582
583 if 'volumes' in service_dict and service_dict.get('volume_driver') is None:
584 service_dict['volumes'] = resolve_volume_paths(working_dir, service_dict)
585
586 if 'labels' in service_dict:
587 service_dict['labels'] = parse_labels(service_dict['labels'])
588
589 if 'extra_hosts' in service_dict:
590 service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])
591
592 for field in ['dns', 'dns_search']:
593 if field in service_dict:
594 service_dict[field] = to_list(service_dict[field])
595
596 return service_dict
597
598
599 def finalize_service(service_config, service_names, version):
600 service_dict = dict(service_config.config)
601
602 if 'environment' in service_dict or 'env_file' in service_dict:
603 service_dict['environment'] = resolve_environment(service_dict)
604 service_dict.pop('env_file', None)
605
606 if 'volumes_from' in service_dict:
607 service_dict['volumes_from'] = [
608 VolumeFromSpec.parse(vf, service_names, version)
609 for vf in service_dict['volumes_from']
610 ]
611
612 if 'volumes' in service_dict:
613 service_dict['volumes'] = [
614 VolumeSpec.parse(v) for v in service_dict['volumes']]
615
616 if 'net' in service_dict:
617 network_mode = service_dict.pop('net')
618 container_name = get_container_name_from_network_mode(network_mode)
619 if container_name and container_name in service_names:
620 service_dict['network_mode'] = 'service:{}'.format(container_name)
621 else:
622 service_dict['network_mode'] = network_mode
623
624 if 'networks' in service_dict:
625 service_dict['networks'] = parse_networks(service_dict['networks'])
626
627 if 'restart' in service_dict:
628 service_dict['restart'] = parse_restart_spec(service_dict['restart'])
629
630 normalize_build(service_dict, service_config.working_dir)
631
632 service_dict['name'] = service_config.name
633 return normalize_v1_service_format(service_dict)
634
635
636 def normalize_v1_service_format(service_dict):
637 if 'log_driver' in service_dict or 'log_opt' in service_dict:
638 if 'logging' not in service_dict:
639 service_dict['logging'] = {}
640 if 'log_driver' in service_dict:
641 service_dict['logging']['driver'] = service_dict['log_driver']
642 del service_dict['log_driver']
643 if 'log_opt' in service_dict:
644 service_dict['logging']['options'] = service_dict['log_opt']
645 del service_dict['log_opt']
646
647 if 'dockerfile' in service_dict:
648 service_dict['build'] = service_dict.get('build', {})
649 service_dict['build'].update({
650 'dockerfile': service_dict.pop('dockerfile')
651 })
652
653 return service_dict
654
655
656 def merge_service_dicts_from_files(base, override, version):
657 """When merging services from multiple files we need to merge the `extends`
658 field. This is not handled by `merge_service_dicts()` which is used to
659 perform the `extends`.
660 """
661 new_service = merge_service_dicts(base, override, version)
662 if 'extends' in override:
663 new_service['extends'] = override['extends']
664 elif 'extends' in base:
665 new_service['extends'] = base['extends']
666 return new_service
667
668
669 class MergeDict(dict):
670 """A dict-like object responsible for merging two dicts into one."""
671
672 def __init__(self, base, override):
673 self.base = base
674 self.override = override
675
676 def needs_merge(self, field):
677 return field in self.base or field in self.override
678
679 def merge_field(self, field, merge_func, default=None):
680 if not self.needs_merge(field):
681 return
682
683 self[field] = merge_func(
684 self.base.get(field, default),
685 self.override.get(field, default))
686
687 def merge_mapping(self, field, parse_func):
688 if not self.needs_merge(field):
689 return
690
691 self[field] = parse_func(self.base.get(field))
692 self[field].update(parse_func(self.override.get(field)))
693
694 def merge_sequence(self, field, parse_func):
695 def parse_sequence_func(seq):
696 return to_mapping((parse_func(item) for item in seq), 'merge_field')
697
698 if not self.needs_merge(field):
699 return
700
701 merged = parse_sequence_func(self.base.get(field, []))
702 merged.update(parse_sequence_func(self.override.get(field, [])))
703 self[field] = [item.repr() for item in merged.values()]
704
705 def merge_scalar(self, field):
706 if self.needs_merge(field):
707 self[field] = self.override.get(field, self.base.get(field))
708
709
710 def merge_service_dicts(base, override, version):
711 md = MergeDict(base, override)
712
713 md.merge_mapping('environment', parse_environment)
714 md.merge_mapping('labels', parse_labels)
715 md.merge_mapping('ulimits', parse_ulimits)
716 md.merge_mapping('networks', parse_networks)
717 md.merge_sequence('links', ServiceLink.parse)
718
719 for field in ['volumes', 'devices']:
720 md.merge_field(field, merge_path_mappings)
721
722 for field in [
723 'depends_on',
724 'expose',
725 'external_links',
726 'ports',
727 'volumes_from',
728 ]:
729 md.merge_field(field, operator.add, default=[])
730
731 for field in ['dns', 'dns_search', 'env_file']:
732 md.merge_field(field, merge_list_or_string)
733
734 for field in set(ALLOWED_KEYS) - set(md):
735 md.merge_scalar(field)
736
737 if version == V1:
738 legacy_v1_merge_image_or_build(md, base, override)
739 elif md.needs_merge('build'):
740 md['build'] = merge_build(md, base, override)
741
742 return dict(md)
743
744
745 def merge_build(output, base, override):
746 def to_dict(service):
747 build_config = service.get('build', {})
748 if isinstance(build_config, six.string_types):
749 return {'context': build_config}
750 return build_config
751
752 md = MergeDict(to_dict(base), to_dict(override))
753 md.merge_scalar('context')
754 md.merge_scalar('dockerfile')
755 md.merge_mapping('args', parse_build_arguments)
756 return dict(md)
757
758
759 def legacy_v1_merge_image_or_build(output, base, override):
760 output.pop('image', None)
761 output.pop('build', None)
762 if 'image' in override:
763 output['image'] = override['image']
764 elif 'build' in override:
765 output['build'] = override['build']
766 elif 'image' in base:
767 output['image'] = base['image']
768 elif 'build' in base:
769 output['build'] = base['build']
770
771
772 def merge_environment(base, override):
773 env = parse_environment(base)
774 env.update(parse_environment(override))
775 return env
776
777
778 def split_env(env):
779 if isinstance(env, six.binary_type):
780 env = env.decode('utf-8', 'replace')
781 if '=' in env:
782 return env.split('=', 1)
783 else:
784 return env, None
785
786
787 def split_label(label):
788 if '=' in label:
789 return label.split('=', 1)
790 else:
791 return label, ''
792
793
794 def parse_dict_or_list(split_func, type_name, arguments):
795 if not arguments:
796 return {}
797
798 if isinstance(arguments, list):
799 return dict(split_func(e) for e in arguments)
800
801 if isinstance(arguments, dict):
802 return dict(arguments)
803
804 raise ConfigurationError(
805 "%s \"%s\" must be a list or mapping," %
806 (type_name, arguments)
807 )
808
809
810 parse_build_arguments = functools.partial(parse_dict_or_list, split_env, 'build arguments')
811 parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')
812 parse_labels = functools.partial(parse_dict_or_list, split_label, 'labels')
813 parse_networks = functools.partial(parse_dict_or_list, lambda k: (k, None), 'networks')
814
815
816 def parse_ulimits(ulimits):
817 if not ulimits:
818 return {}
819
820 if isinstance(ulimits, dict):
821 return dict(ulimits)
822
823
824 def resolve_env_var(key, val):
825 if val is not None:
826 return key, val
827 elif key in os.environ:
828 return key, os.environ[key]
829 else:
830 return key, None
831
832
833 def env_vars_from_file(filename):
834 """
835 Read in a line delimited file of environment variables.
836 """
837 if not os.path.exists(filename):
838 raise ConfigurationError("Couldn't find env file: %s" % filename)
839 env = {}
840 for line in codecs.open(filename, 'r', 'utf-8'):
841 line = line.strip()
842 if line and not line.startswith('#'):
843 k, v = split_env(line)
844 env[k] = v
845 return env
846
847
848 def resolve_volume_paths(working_dir, service_dict):
849 return [
850 resolve_volume_path(working_dir, volume)
851 for volume in service_dict['volumes']
852 ]
853
854
855 def resolve_volume_path(working_dir, volume):
856 container_path, host_path = split_path_mapping(volume)
857
858 if host_path is not None:
859 if host_path.startswith('.'):
860 host_path = expand_path(working_dir, host_path)
861 host_path = os.path.expanduser(host_path)
862 return u"{}:{}".format(host_path, container_path)
863 else:
864 return container_path
865
866
867 def normalize_build(service_dict, working_dir):
868
869 if 'build' in service_dict:
870 build = {}
871 # Shortcut where specifying a string is treated as the build context
872 if isinstance(service_dict['build'], six.string_types):
873 build['context'] = service_dict.pop('build')
874 else:
875 build.update(service_dict['build'])
876 if 'args' in build:
877 build['args'] = build_string_dict(resolve_build_args(build))
878
879 service_dict['build'] = build
880
881
882 def resolve_build_path(working_dir, build_path):
883 if is_url(build_path):
884 return build_path
885 return expand_path(working_dir, build_path)
886
887
888 def is_url(build_path):
889 return build_path.startswith(DOCKER_VALID_URL_PREFIXES)
890
891
892 def validate_paths(service_dict):
893 if 'build' in service_dict:
894 build = service_dict.get('build', {})
895
896 if isinstance(build, six.string_types):
897 build_path = build
898 elif isinstance(build, dict) and 'context' in build:
899 build_path = build['context']
900 else:
901 # We have a build section but no context, so nothing to validate
902 return
903
904 if (
905 not is_url(build_path) and
906 (not os.path.exists(build_path) or not os.access(build_path, os.R_OK))
907 ):
908 raise ConfigurationError(
909 "build path %s either does not exist, is not accessible, "
910 "or is not a valid URL." % build_path)
911
912
913 def merge_path_mappings(base, override):
914 d = dict_from_path_mappings(base)
915 d.update(dict_from_path_mappings(override))
916 return path_mappings_from_dict(d)
917
918
919 def dict_from_path_mappings(path_mappings):
920 if path_mappings:
921 return dict(split_path_mapping(v) for v in path_mappings)
922 else:
923 return {}
924
925
926 def path_mappings_from_dict(d):
927 return [join_path_mapping(v) for v in d.items()]
928
929
930 def split_path_mapping(volume_path):
931 """
932 Ascertain if the volume_path contains a host path as well as a container
933 path. Using splitdrive so windows absolute paths won't cause issues with
934 splitting on ':'.
935 """
936 # splitdrive has limitations when it comes to relative paths, so when it's
937 # relative, handle special case to set the drive to ''
938 if volume_path.startswith('.') or volume_path.startswith('~'):
939 drive, volume_config = '', volume_path
940 else:
941 drive, volume_config = os.path.splitdrive(volume_path)
942
943 if ':' in volume_config:
944 (host, container) = volume_config.split(':', 1)
945 return (container, drive + host)
946 else:
947 return (volume_path, None)
948
949
950 def join_path_mapping(pair):
951 (container, host) = pair
952 if host is None:
953 return container
954 else:
955 return ":".join((host, container))
956
957
958 def expand_path(working_dir, path):
959 return os.path.abspath(os.path.join(working_dir, os.path.expanduser(path)))
960
961
962 def merge_list_or_string(base, override):
963 return to_list(base) + to_list(override)
964
965
966 def to_list(value):
967 if value is None:
968 return []
969 elif isinstance(value, six.string_types):
970 return [value]
971 else:
972 return value
973
974
975 def to_mapping(sequence, key_field):
976 return {getattr(item, key_field): item for item in sequence}
977
978
979 def has_uppercase(name):
980 return any(char in string.ascii_uppercase for char in name)
981
982
983 def load_yaml(filename):
984 try:
985 with open(filename, 'r') as fh:
986 return yaml.safe_load(fh)
987 except (IOError, yaml.YAMLError) as e:
988 error_name = getattr(e, '__module__', '') + '.' + e.__class__.__name__
989 raise ConfigurationError(u"{}: {}".format(error_name, e))
990
[end of compose/config/config.py]
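As a quick, illustrative exercise of the volume-path helpers defined above (not part of the repository; assumes a POSIX host, so `os.path.splitdrive` returns an empty drive):

```python
from compose.config.config import resolve_volume_path, split_path_mapping

split_path_mapping('./data:/var/lib/data')   # ('/var/lib/data', './data')
split_path_mapping('/var/lib/data')          # ('/var/lib/data', None)

# Relative host paths are expanded against the project directory:
resolve_volume_path('/project', './data:/var/lib/data')
# '/project/data:/var/lib/data'
```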
[start of compose/parallel.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import operator
5 import sys
6 from threading import Thread
7
8 from docker.errors import APIError
9 from six.moves.queue import Empty
10 from six.moves.queue import Queue
11
12 from compose.utils import get_output_stream
13
14
15 def perform_operation(func, arg, callback, index):
16 try:
17 callback((index, func(arg)))
18 except Exception as e:
19 callback((index, e))
20
21
22 def parallel_execute(objects, func, index_func, msg):
23     """For a given list of objects, call `func` on each object in its own
24     thread, and report progress as the operations complete.
25 """
26 objects = list(objects)
27 stream = get_output_stream(sys.stderr)
28 writer = ParallelStreamWriter(stream, msg)
29
30 for obj in objects:
31 writer.initialize(index_func(obj))
32
33 q = Queue()
34
35 # TODO: limit the number of threads #1828
36 for obj in objects:
37 t = Thread(
38 target=perform_operation,
39 args=(func, obj, q.put, index_func(obj)))
40 t.daemon = True
41 t.start()
42
43 done = 0
44 errors = {}
45
46 while done < len(objects):
47 try:
48 msg_index, result = q.get(timeout=1)
49 except Empty:
50 continue
51
52 if isinstance(result, APIError):
53 errors[msg_index] = "error", result.explanation
54 writer.write(msg_index, 'error')
55 elif isinstance(result, Exception):
56 errors[msg_index] = "unexpected_exception", result
57 else:
58 writer.write(msg_index, 'done')
59 done += 1
60
61 if not errors:
62 return
63
64 stream.write("\n")
65 for msg_index, (result, error) in errors.items():
66 stream.write("ERROR: for {} {} \n".format(msg_index, error))
67 if result == 'unexpected_exception':
68 raise error
69
70
71 class ParallelStreamWriter(object):
72 """Write out messages for operations happening in parallel.
73
74     Each operation has its own line, and ANSI code characters are used
75 to jump to the correct line, and write over the line.
76 """
77
78 def __init__(self, stream, msg):
79 self.stream = stream
80 self.msg = msg
81 self.lines = []
82
83 def initialize(self, obj_index):
84 self.lines.append(obj_index)
85 self.stream.write("{} {} ... \r\n".format(self.msg, obj_index))
86 self.stream.flush()
87
88 def write(self, obj_index, status):
89 position = self.lines.index(obj_index)
90 diff = len(self.lines) - position
91 # move up
92 self.stream.write("%c[%dA" % (27, diff))
93 # erase
94 self.stream.write("%c[2K\r" % 27)
95 self.stream.write("{} {} ... {}\r".format(self.msg, obj_index, status))
96 # move back down
97 self.stream.write("%c[%dB" % (27, diff))
98 self.stream.flush()
99
100
101 def parallel_operation(containers, operation, options, message):
102 parallel_execute(
103 containers,
104 operator.methodcaller(operation, **options),
105 operator.attrgetter('name'),
106 message)
107
108
109 def parallel_remove(containers, options):
110 stopped_containers = [c for c in containers if not c.is_running]
111 parallel_operation(stopped_containers, 'remove', options, 'Removing')
112
113
114 def parallel_stop(containers, options):
115 parallel_operation(containers, 'stop', options, 'Stopping')
116
117
118 def parallel_start(containers, options):
119 parallel_operation(containers, 'start', options, 'Starting')
120
121
122 def parallel_pause(containers, options):
123 parallel_operation(containers, 'pause', options, 'Pausing')
124
125
126 def parallel_unpause(containers, options):
127 parallel_operation(containers, 'unpause', options, 'Unpausing')
128
129
130 def parallel_kill(containers, options):
131 parallel_operation(containers, 'kill', options, 'Killing')
132
133
134 def parallel_restart(containers, options):
135 parallel_operation(containers, 'restart', options, 'Restarting')
136
[end of compose/parallel.py]
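To make the `ParallelStreamWriter` docstring above concrete, this is roughly how it is driven (illustrative snippet, not part of the repository; the container names are made up):

```python
import sys

from compose.parallel import ParallelStreamWriter

writer = ParallelStreamWriter(sys.stderr, 'Stopping')
for name in ['web_1', 'web_2', 'db_1']:
    writer.initialize(name)      # prints "Stopping web_1 ... ", one line per object

# When an operation finishes, the writer jumps back to that object's line and
# overwrites it in place using ANSI escape codes:
writer.write('web_2', 'done')    # cursor up 2 lines, erase, rewrite, cursor down 2
```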
[start of compose/service.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import logging
5 import re
6 import sys
7 from collections import namedtuple
8 from operator import attrgetter
9
10 import enum
11 import six
12 from docker.errors import APIError
13 from docker.utils import LogConfig
14 from docker.utils.ports import build_port_bindings
15 from docker.utils.ports import split_port
16
17 from . import __version__
18 from .config import DOCKER_CONFIG_KEYS
19 from .config import merge_environment
20 from .config.types import VolumeSpec
21 from .const import DEFAULT_TIMEOUT
22 from .const import LABEL_CONFIG_HASH
23 from .const import LABEL_CONTAINER_NUMBER
24 from .const import LABEL_ONE_OFF
25 from .const import LABEL_PROJECT
26 from .const import LABEL_SERVICE
27 from .const import LABEL_VERSION
28 from .container import Container
29 from .parallel import parallel_execute
30 from .parallel import parallel_start
31 from .progress_stream import stream_output
32 from .progress_stream import StreamOutputError
33 from .utils import json_hash
34
35
36 log = logging.getLogger(__name__)
37
38
39 DOCKER_START_KEYS = [
40 'cap_add',
41 'cap_drop',
42 'cgroup_parent',
43 'cpu_quota',
44 'devices',
45 'dns',
46 'dns_search',
47 'env_file',
48 'extra_hosts',
49 'ipc',
50 'read_only',
51 'log_driver',
52 'log_opt',
53 'mem_limit',
54 'memswap_limit',
55 'pid',
56 'privileged',
57 'restart',
58 'security_opt',
59 'shm_size',
60 'volumes_from',
61 ]
62
63
64 class BuildError(Exception):
65 def __init__(self, service, reason):
66 self.service = service
67 self.reason = reason
68
69
70 class NeedsBuildError(Exception):
71 def __init__(self, service):
72 self.service = service
73
74
75 class NoSuchImageError(Exception):
76 pass
77
78
79 ServiceName = namedtuple('ServiceName', 'project service number')
80
81
82 ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')
83
84
85 @enum.unique
86 class ConvergenceStrategy(enum.Enum):
87 """Enumeration for all possible convergence strategies. Values refer to
88 when containers should be recreated.
89 """
90 changed = 1
91 always = 2
92 never = 3
93
94 @property
95 def allows_recreate(self):
96 return self is not type(self).never
97
98
99 @enum.unique
100 class ImageType(enum.Enum):
101 """Enumeration for the types of images known to compose."""
102 none = 0
103 local = 1
104 all = 2
105
106
107 class Service(object):
108 def __init__(
109 self,
110 name,
111 client=None,
112 project='default',
113 use_networking=False,
114 links=None,
115 volumes_from=None,
116 network_mode=None,
117 networks=None,
118 **options
119 ):
120 self.name = name
121 self.client = client
122 self.project = project
123 self.use_networking = use_networking
124 self.links = links or []
125 self.volumes_from = volumes_from or []
126 self.network_mode = network_mode or NetworkMode(None)
127 self.networks = networks or {}
128 self.options = options
129
130 def containers(self, stopped=False, one_off=False, filters={}):
131 filters.update({'label': self.labels(one_off=one_off)})
132
133 return list(filter(None, [
134 Container.from_ps(self.client, container)
135 for container in self.client.containers(
136 all=stopped,
137 filters=filters)]))
138
139 def get_container(self, number=1):
140 """Return a :class:`compose.container.Container` for this service. The
141 container must be active, and match `number`.
142 """
143 labels = self.labels() + ['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)]
144 for container in self.client.containers(filters={'label': labels}):
145 return Container.from_ps(self.client, container)
146
147 raise ValueError("No container found for %s_%s" % (self.name, number))
148
149 def start(self, **options):
150 containers = self.containers(stopped=True)
151 for c in containers:
152 self.start_container_if_stopped(c, **options)
153 return containers
154
155 def scale(self, desired_num, timeout=DEFAULT_TIMEOUT):
156 """
157 Adjusts the number of containers to the specified number and ensures
158 they are running.
159
160 - creates containers until there are at least `desired_num`
161 - stops containers until there are at most `desired_num` running
162 - starts containers until there are at least `desired_num` running
163 - removes all stopped containers
164 """
165 if self.custom_container_name and desired_num > 1:
166 log.warn('The "%s" service is using the custom container name "%s". '
167 'Docker requires each container to have a unique name. '
168 'Remove the custom name to scale the service.'
169 % (self.name, self.custom_container_name))
170
171 if self.specifies_host_port():
172 log.warn('The "%s" service specifies a port on the host. If multiple containers '
173 'for this service are created on a single host, the port will clash.'
174 % self.name)
175
176 def create_and_start(service, number):
177 container = service.create_container(number=number, quiet=True)
178 service.start_container(container)
179 return container
180
181 def stop_and_remove(container):
182 container.stop(timeout=timeout)
183 container.remove()
184
185 running_containers = self.containers(stopped=False)
186 num_running = len(running_containers)
187
188 if desired_num == num_running:
189 # do nothing as we already have the desired number
190 log.info('Desired container number already achieved')
191 return
192
193 if desired_num > num_running:
194 # we need to start/create until we have desired_num
195 all_containers = self.containers(stopped=True)
196
197 if num_running != len(all_containers):
198 # we have some stopped containers, let's start them up again
199 stopped_containers = sorted(
200 (c for c in all_containers if not c.is_running),
201 key=attrgetter('number'))
202
203 num_stopped = len(stopped_containers)
204
205 if num_stopped + num_running > desired_num:
206 num_to_start = desired_num - num_running
207 containers_to_start = stopped_containers[:num_to_start]
208 else:
209 containers_to_start = stopped_containers
210
211 parallel_start(containers_to_start, {})
212
213 num_running += len(containers_to_start)
214
215 num_to_create = desired_num - num_running
216 next_number = self._next_container_number()
217 container_numbers = [
218 number for number in range(
219 next_number, next_number + num_to_create
220 )
221 ]
222
223 parallel_execute(
224 container_numbers,
225 lambda n: create_and_start(service=self, number=n),
226 lambda n: self.get_container_name(n),
227 "Creating and starting"
228 )
229
230 if desired_num < num_running:
231 num_to_stop = num_running - desired_num
232
233 sorted_running_containers = sorted(
234 running_containers,
235 key=attrgetter('number'))
236
237 parallel_execute(
238 sorted_running_containers[-num_to_stop:],
239 stop_and_remove,
240 lambda c: c.name,
241 "Stopping and removing",
242 )
243
244 def create_container(self,
245 one_off=False,
246 do_build=True,
247 previous_container=None,
248 number=None,
249 quiet=False,
250 **override_options):
251 """
252 Create a container for this service. If the image doesn't exist, attempt to pull
253 it.
254 """
255 self.ensure_image_exists(do_build=do_build)
256
257 container_options = self._get_container_create_options(
258 override_options,
259 number or self._next_container_number(one_off=one_off),
260 one_off=one_off,
261 previous_container=previous_container,
262 )
263
264 if 'name' in container_options and not quiet:
265 log.info("Creating %s" % container_options['name'])
266
267 return Container.create(self.client, **container_options)
268
269 def ensure_image_exists(self, do_build=True):
270 try:
271 self.image()
272 return
273 except NoSuchImageError:
274 pass
275
276 if self.can_be_built():
277 if do_build:
278 self.build()
279 else:
280 raise NeedsBuildError(self)
281 else:
282 self.pull()
283
284 def image(self):
285 try:
286 return self.client.inspect_image(self.image_name)
287 except APIError as e:
288 if e.response.status_code == 404 and e.explanation and 'No such image' in str(e.explanation):
289 raise NoSuchImageError("Image '{}' not found".format(self.image_name))
290 else:
291 raise
292
293 @property
294 def image_name(self):
295 return self.options.get('image', '{s.project}_{s.name}'.format(s=self))
296
297 def convergence_plan(self, strategy=ConvergenceStrategy.changed):
298 containers = self.containers(stopped=True)
299
300 if not containers:
301 return ConvergencePlan('create', [])
302
303 if strategy is ConvergenceStrategy.never:
304 return ConvergencePlan('start', containers)
305
306 if (
307 strategy is ConvergenceStrategy.always or
308 self._containers_have_diverged(containers)
309 ):
310 return ConvergencePlan('recreate', containers)
311
312 stopped = [c for c in containers if not c.is_running]
313
314 if stopped:
315 return ConvergencePlan('start', stopped)
316
317 return ConvergencePlan('noop', containers)
318
319 def _containers_have_diverged(self, containers):
320 config_hash = None
321
322 try:
323 config_hash = self.config_hash
324 except NoSuchImageError as e:
325 log.debug(
326 'Service %s has diverged: %s',
327 self.name, six.text_type(e),
328 )
329 return True
330
331 has_diverged = False
332
333 for c in containers:
334 container_config_hash = c.labels.get(LABEL_CONFIG_HASH, None)
335 if container_config_hash != config_hash:
336 log.debug(
337 '%s has diverged: %s != %s',
338 c.name, container_config_hash, config_hash,
339 )
340 has_diverged = True
341
342 return has_diverged
343
344 def execute_convergence_plan(self,
345 plan,
346 do_build=True,
347 timeout=DEFAULT_TIMEOUT,
348 detached=False,
349 start=True):
350 (action, containers) = plan
351 should_attach_logs = not detached
352
353 if action == 'create':
354 container = self.create_container(do_build=do_build)
355
356 if should_attach_logs:
357 container.attach_log_stream()
358
359 if start:
360 self.start_container(container)
361
362 return [container]
363
364 elif action == 'recreate':
365 return [
366 self.recreate_container(
367 container,
368 do_build=do_build,
369 timeout=timeout,
370 attach_logs=should_attach_logs,
371 start_new_container=start
372 )
373 for container in containers
374 ]
375
376 elif action == 'start':
377 if start:
378 for container in containers:
379 self.start_container_if_stopped(container, attach_logs=should_attach_logs)
380
381 return containers
382
383 elif action == 'noop':
384 for c in containers:
385 log.info("%s is up-to-date" % c.name)
386
387 return containers
388
389 else:
390 raise Exception("Invalid action: {}".format(action))
391
392 def recreate_container(
393 self,
394 container,
395 do_build=False,
396 timeout=DEFAULT_TIMEOUT,
397 attach_logs=False,
398 start_new_container=True):
399 """Recreate a container.
400
401 The original container is renamed to a temporary name so that data
402 volumes can be copied to the new container, before the original
403 container is removed.
404 """
405 log.info("Recreating %s" % container.name)
406
407 container.stop(timeout=timeout)
408 container.rename_to_tmp_name()
409 new_container = self.create_container(
410 do_build=do_build,
411 previous_container=container,
412 number=container.labels.get(LABEL_CONTAINER_NUMBER),
413 quiet=True,
414 )
415 if attach_logs:
416 new_container.attach_log_stream()
417 if start_new_container:
418 self.start_container(new_container)
419 container.remove()
420 return new_container
421
422 def start_container_if_stopped(self, container, attach_logs=False):
423 if not container.is_running:
424 log.info("Starting %s" % container.name)
425 if attach_logs:
426 container.attach_log_stream()
427 return self.start_container(container)
428
429 def start_container(self, container):
430 self.connect_container_to_networks(container)
431 container.start()
432 return container
433
434 def connect_container_to_networks(self, container):
435 connected_networks = container.get('NetworkSettings.Networks')
436
437 for network, aliases in self.networks.items():
438 if network in connected_networks:
439 self.client.disconnect_container_from_network(
440 container.id, network)
441
442 self.client.connect_container_to_network(
443 container.id, network,
444 aliases=list(self._get_aliases(container).union(aliases)),
445 links=self._get_links(False),
446 )
447
448 def remove_duplicate_containers(self, timeout=DEFAULT_TIMEOUT):
449 for c in self.duplicate_containers():
450 log.info('Removing %s' % c.name)
451 c.stop(timeout=timeout)
452 c.remove()
453
454 def duplicate_containers(self):
455 containers = sorted(
456 self.containers(stopped=True),
457 key=lambda c: c.get('Created'),
458 )
459
460 numbers = set()
461
462 for c in containers:
463 if c.number in numbers:
464 yield c
465 else:
466 numbers.add(c.number)
467
468 @property
469 def config_hash(self):
470 return json_hash(self.config_dict())
471
472 def config_dict(self):
473 return {
474 'options': self.options,
475 'image_id': self.image()['Id'],
476 'links': self.get_link_names(),
477 'net': self.network_mode.id,
478 'networks': list(self.networks.keys()),
479 'volumes_from': [
480 (v.source.name, v.mode)
481 for v in self.volumes_from if isinstance(v.source, Service)
482 ],
483 }
484
485 def get_dependency_names(self):
486 net_name = self.network_mode.service_name
487 return (self.get_linked_service_names() +
488 self.get_volumes_from_names() +
489 ([net_name] if net_name else []) +
490 self.options.get('depends_on', []))
491
492 def get_linked_service_names(self):
493 return [service.name for (service, _) in self.links]
494
495 def get_link_names(self):
496 return [(service.name, alias) for service, alias in self.links]
497
498 def get_volumes_from_names(self):
499 return [s.source.name for s in self.volumes_from if isinstance(s.source, Service)]
500
501 # TODO: this would benefit from github.com/docker/docker/pull/14699
502 # to remove the need to inspect every container
503 def _next_container_number(self, one_off=False):
504 containers = filter(None, [
505 Container.from_ps(self.client, container)
506 for container in self.client.containers(
507 all=True,
508 filters={'label': self.labels(one_off=one_off)})
509 ])
510 numbers = [c.number for c in containers]
511 return 1 if not numbers else max(numbers) + 1
512
513 def _get_aliases(self, container):
514 if container.labels.get(LABEL_ONE_OFF) == "True":
515 return set()
516
517 return {self.name, container.short_id}
518
519 def _get_links(self, link_to_self):
520 links = {}
521
522 for service, link_name in self.links:
523 for container in service.containers():
524 links[link_name or service.name] = container.name
525 links[container.name] = container.name
526 links[container.name_without_project] = container.name
527
528 if link_to_self:
529 for container in self.containers():
530 links[self.name] = container.name
531 links[container.name] = container.name
532 links[container.name_without_project] = container.name
533
534 for external_link in self.options.get('external_links') or []:
535 if ':' not in external_link:
536 link_name = external_link
537 else:
538 external_link, link_name = external_link.split(':')
539 links[link_name] = external_link
540
541 return [
542 (alias, container_name)
543 for (container_name, alias) in links.items()
544 ]
545
546 def _get_volumes_from(self):
547 return [build_volume_from(spec) for spec in self.volumes_from]
548
549 def _get_container_create_options(
550 self,
551 override_options,
552 number,
553 one_off=False,
554 previous_container=None):
555 add_config_hash = (not one_off and not override_options)
556
557 container_options = dict(
558 (k, self.options[k])
559 for k in DOCKER_CONFIG_KEYS if k in self.options)
560 container_options.update(override_options)
561
562 if not container_options.get('name'):
563 container_options['name'] = self.get_container_name(number, one_off)
564
565 container_options.setdefault('detach', True)
566
567 # If a qualified hostname was given, split it into an
568 # unqualified hostname and a domainname unless domainname
569 # was also given explicitly. This matches the behavior of
570 # the official Docker CLI in that scenario.
571 if ('hostname' in container_options and
572 'domainname' not in container_options and
573 '.' in container_options['hostname']):
574 parts = container_options['hostname'].partition('.')
575 container_options['hostname'] = parts[0]
576 container_options['domainname'] = parts[2]
577
578 if 'ports' in container_options or 'expose' in self.options:
579 container_options['ports'] = build_container_ports(
580 container_options,
581 self.options)
582
583 container_options['environment'] = merge_environment(
584 self.options.get('environment'),
585 override_options.get('environment'))
586
587 binds, affinity = merge_volume_bindings(
588 container_options.get('volumes') or [],
589 previous_container)
590 override_options['binds'] = binds
591 container_options['environment'].update(affinity)
592
593 if 'volumes' in container_options:
594 container_options['volumes'] = dict(
595 (v.internal, {}) for v in container_options['volumes'])
596
597 container_options['image'] = self.image_name
598
599 container_options['labels'] = build_container_labels(
600 container_options.get('labels', {}),
601 self.labels(one_off=one_off),
602 number,
603 self.config_hash if add_config_hash else None)
604
605 # Delete options which are only used when starting
606 for key in DOCKER_START_KEYS:
607 container_options.pop(key, None)
608
609 container_options['host_config'] = self._get_container_host_config(
610 override_options,
611 one_off=one_off)
612
613 container_options['environment'] = format_environment(
614 container_options['environment'])
615 return container_options
616
617 def _get_container_host_config(self, override_options, one_off=False):
618 options = dict(self.options, **override_options)
619
620 logging_dict = options.get('logging', None)
621 log_config = get_log_config(logging_dict)
622
623 return self.client.create_host_config(
624 links=self._get_links(link_to_self=one_off),
625 port_bindings=build_port_bindings(options.get('ports') or []),
626 binds=options.get('binds'),
627 volumes_from=self._get_volumes_from(),
628 privileged=options.get('privileged', False),
629 network_mode=self.network_mode.mode,
630 devices=options.get('devices'),
631 dns=options.get('dns'),
632 dns_search=options.get('dns_search'),
633 restart_policy=options.get('restart'),
634 cap_add=options.get('cap_add'),
635 cap_drop=options.get('cap_drop'),
636 mem_limit=options.get('mem_limit'),
637 memswap_limit=options.get('memswap_limit'),
638 ulimits=build_ulimits(options.get('ulimits')),
639 log_config=log_config,
640 extra_hosts=options.get('extra_hosts'),
641 read_only=options.get('read_only'),
642 pid_mode=options.get('pid'),
643 security_opt=options.get('security_opt'),
644 ipc_mode=options.get('ipc'),
645 cgroup_parent=options.get('cgroup_parent'),
646 cpu_quota=options.get('cpu_quota'),
647 shm_size=options.get('shm_size'),
648 )
649
650 def build(self, no_cache=False, pull=False, force_rm=False):
651 log.info('Building %s' % self.name)
652
653 build_opts = self.options.get('build', {})
654 path = build_opts.get('context')
655 # python2 os.path() doesn't support unicode, so we need to encode it to
656 # a byte string
657 if not six.PY3:
658 path = path.encode('utf8')
659
660 build_output = self.client.build(
661 path=path,
662 tag=self.image_name,
663 stream=True,
664 rm=True,
665 forcerm=force_rm,
666 pull=pull,
667 nocache=no_cache,
668 dockerfile=build_opts.get('dockerfile', None),
669 buildargs=build_opts.get('args', None),
670 )
671
672 try:
673 all_events = stream_output(build_output, sys.stdout)
674 except StreamOutputError as e:
675 raise BuildError(self, six.text_type(e))
676
677 # Ensure the HTTP connection is not reused for another
678 # streaming command, as the Docker daemon can sometimes
679 # complain about it
680 self.client.close()
681
682 image_id = None
683
684 for event in all_events:
685 if 'stream' in event:
686 match = re.search(r'Successfully built ([0-9a-f]+)', event.get('stream', ''))
687 if match:
688 image_id = match.group(1)
689
690 if image_id is None:
691 raise BuildError(self, event if all_events else 'Unknown')
692
693 return image_id
694
695 def can_be_built(self):
696 return 'build' in self.options
697
698 def labels(self, one_off=False):
699 return [
700 '{0}={1}'.format(LABEL_PROJECT, self.project),
701 '{0}={1}'.format(LABEL_SERVICE, self.name),
702 '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False")
703 ]
704
705 @property
706 def custom_container_name(self):
707 return self.options.get('container_name')
708
709 def get_container_name(self, number, one_off=False):
710 if self.custom_container_name and not one_off:
711 return self.custom_container_name
712
713 return build_container_name(self.project, self.name, number, one_off)
714
715 def remove_image(self, image_type):
716 if not image_type or image_type == ImageType.none:
717 return False
718 if image_type == ImageType.local and self.options.get('image'):
719 return False
720
721 log.info("Removing image %s", self.image_name)
722 try:
723 self.client.remove_image(self.image_name)
724 return True
725 except APIError as e:
726 log.error("Failed to remove image for service %s: %s", self.name, e)
727 return False
728
729 def specifies_host_port(self):
730 def has_host_port(binding):
731 _, external_bindings = split_port(binding)
732
733 # there are no external bindings
734 if external_bindings is None:
735 return False
736
737 # we only need to check the first binding from the range
738 external_binding = external_bindings[0]
739
740 # non-tuple binding means there is a host port specified
741 if not isinstance(external_binding, tuple):
742 return True
743
744 # extract actual host port from tuple of (host_ip, host_port)
745 _, host_port = external_binding
746 if host_port is not None:
747 return True
748
749 return False
750
751 return any(has_host_port(binding) for binding in self.options.get('ports', []))
752
753 def pull(self, ignore_pull_failures=False):
754 if 'image' not in self.options:
755 return
756
757 repo, tag, separator = parse_repository_tag(self.options['image'])
758 tag = tag or 'latest'
759 log.info('Pulling %s (%s%s%s)...' % (self.name, repo, separator, tag))
760 output = self.client.pull(
761 repo,
762 tag=tag,
763 stream=True,
764 )
765
766 try:
767 stream_output(output, sys.stdout)
768 except StreamOutputError as e:
769 if not ignore_pull_failures:
770 raise
771 else:
772 log.error(six.text_type(e))
773
774
775 class NetworkMode(object):
776 """A `standard` network mode (ex: host, bridge)"""
777
778 service_name = None
779
780 def __init__(self, network_mode):
781 self.network_mode = network_mode
782
783 @property
784 def id(self):
785 return self.network_mode
786
787 mode = id
788
789
790 class ContainerNetworkMode(object):
791 """A network mode that uses a container's network stack."""
792
793 service_name = None
794
795 def __init__(self, container):
796 self.container = container
797
798 @property
799 def id(self):
800 return self.container.id
801
802 @property
803 def mode(self):
804 return 'container:' + self.container.id
805
806
807 class ServiceNetworkMode(object):
808 """A network mode that uses a service's network stack."""
809
810 def __init__(self, service):
811 self.service = service
812
813 @property
814 def id(self):
815 return self.service.name
816
817 service_name = id
818
819 @property
820 def mode(self):
821 containers = self.service.containers()
822 if containers:
823 return 'container:' + containers[0].id
824
825         log.warn("Service %s is trying to reuse the network stack "
826 "of another service that is not running." % (self.id))
827 return None
828
829
830 # Names
831
832
833 def build_container_name(project, service, number, one_off=False):
834 bits = [project, service]
835 if one_off:
836 bits.append('run')
837 return '_'.join(bits + [str(number)])
838
839
840 # Images
841
842 def parse_repository_tag(repo_path):
843 """Splits image identification into base image path, tag/digest
844 and it's separator.
845
846 Example:
847
848 >>> parse_repository_tag('user/repo@sha256:digest')
849 ('user/repo', 'sha256:digest', '@')
850 >>> parse_repository_tag('user/repo:v1')
851 ('user/repo', 'v1', ':')
852 """
853 tag_separator = ":"
854 digest_separator = "@"
855
856 if digest_separator in repo_path:
857 repo, tag = repo_path.rsplit(digest_separator, 1)
858 return repo, tag, digest_separator
859
860 repo, tag = repo_path, ""
861 if tag_separator in repo_path:
862 repo, tag = repo_path.rsplit(tag_separator, 1)
863 if "/" in tag:
864 repo, tag = repo_path, ""
865
866 return repo, tag, tag_separator
867
868
869 # Volumes
870
871
872 def merge_volume_bindings(volumes, previous_container):
873 """Return a list of volume bindings for a container. Container data volumes
874 are replaced by those from the previous container.
875 """
876 affinity = {}
877
878 volume_bindings = dict(
879 build_volume_binding(volume)
880 for volume in volumes
881 if volume.external)
882
883 if previous_container:
884 old_volumes = get_container_data_volumes(previous_container, volumes)
885 warn_on_masked_volume(volumes, old_volumes, previous_container.service)
886 volume_bindings.update(
887 build_volume_binding(volume) for volume in old_volumes)
888
889 if old_volumes:
890 affinity = {'affinity:container': '=' + previous_container.id}
891
892 return list(volume_bindings.values()), affinity
893
894
895 def get_container_data_volumes(container, volumes_option):
896 """Find the container data volumes that are in `volumes_option`, and return
897 a mapping of volume bindings for those volumes.
898 """
899 volumes = []
900 volumes_option = volumes_option or []
901
902 container_mounts = dict(
903 (mount['Destination'], mount)
904 for mount in container.get('Mounts') or {}
905 )
906
907 image_volumes = [
908 VolumeSpec.parse(volume)
909 for volume in
910 container.image_config['ContainerConfig'].get('Volumes') or {}
911 ]
912
913 for volume in set(volumes_option + image_volumes):
914 # No need to preserve host volumes
915 if volume.external:
916 continue
917
918 mount = container_mounts.get(volume.internal)
919
920 # New volume, doesn't exist in the old container
921 if not mount:
922 continue
923
924 # Volume was previously a host volume, now it's a container volume
925 if not mount.get('Name'):
926 continue
927
928 # Copy existing volume from old container
929 volume = volume._replace(external=mount['Name'])
930 volumes.append(volume)
931
932 return volumes
933
934
935 def warn_on_masked_volume(volumes_option, container_volumes, service):
936 container_volumes = dict(
937 (volume.internal, volume.external)
938 for volume in container_volumes)
939
940 for volume in volumes_option:
941 if (
942 volume.external and
943 volume.internal in container_volumes and
944 container_volumes.get(volume.internal) != volume.external
945 ):
946 log.warn((
947 "Service \"{service}\" is using volume \"{volume}\" from the "
948 "previous container. Host mapping \"{host_path}\" has no effect. "
949 "Remove the existing containers (with `docker-compose rm {service}`) "
950 "to use the host volume mapping."
951 ).format(
952 service=service,
953 volume=volume.internal,
954 host_path=volume.external))
955
956
957 def build_volume_binding(volume_spec):
958 return volume_spec.internal, volume_spec.repr()
959
960
961 def build_volume_from(volume_from_spec):
962 """
963 volume_from can be either a service or a container. We want to return the
964 container.id and format it into a string complete with the mode.
965 """
966 if isinstance(volume_from_spec.source, Service):
967 containers = volume_from_spec.source.containers(stopped=True)
968 if not containers:
969 return "{}:{}".format(
970 volume_from_spec.source.create_container().id,
971 volume_from_spec.mode)
972
973 container = containers[0]
974 return "{}:{}".format(container.id, volume_from_spec.mode)
975 elif isinstance(volume_from_spec.source, Container):
976 return "{}:{}".format(volume_from_spec.source.id, volume_from_spec.mode)
977
978
979 # Labels
980
981
982 def build_container_labels(label_options, service_labels, number, config_hash):
983 labels = dict(label_options or {})
984 labels.update(label.split('=', 1) for label in service_labels)
985 labels[LABEL_CONTAINER_NUMBER] = str(number)
986 labels[LABEL_VERSION] = __version__
987
988 if config_hash:
989 log.debug("Added config hash: %s" % config_hash)
990 labels[LABEL_CONFIG_HASH] = config_hash
991
992 return labels
993
994
995 # Ulimits
996
997
998 def build_ulimits(ulimit_config):
999 if not ulimit_config:
1000 return None
1001 ulimits = []
1002 for limit_name, soft_hard_values in six.iteritems(ulimit_config):
1003 if isinstance(soft_hard_values, six.integer_types):
1004 ulimits.append({'name': limit_name, 'soft': soft_hard_values, 'hard': soft_hard_values})
1005 elif isinstance(soft_hard_values, dict):
1006 ulimit_dict = {'name': limit_name}
1007 ulimit_dict.update(soft_hard_values)
1008 ulimits.append(ulimit_dict)
1009
1010 return ulimits
1011
1012
1013 def get_log_config(logging_dict):
1014 log_driver = logging_dict.get('driver', "") if logging_dict else ""
1015 log_options = logging_dict.get('options', None) if logging_dict else None
1016 return LogConfig(
1017 type=log_driver,
1018 config=log_options
1019 )
1020
1021
1022 # TODO: remove once fix is available in docker-py
1023 def format_environment(environment):
1024 def format_env(key, value):
1025 if value is None:
1026 return key
1027 return '{key}={value}'.format(key=key, value=value)
1028 return [format_env(*item) for item in environment.items()]
1029
1030 # Ports
1031
1032
1033 def build_container_ports(container_options, options):
1034 ports = []
1035 all_ports = container_options.get('ports', []) + options.get('expose', [])
1036 for port_range in all_ports:
1037 internal_range, _ = split_port(port_range)
1038 for port in internal_range:
1039 port = str(port)
1040 if '/' in port:
1041 port = tuple(port.split('/'))
1042 ports.append(port)
1043 return ports
1044
[end of compose/service.py]
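A couple of the small, pure helpers above can be exercised directly, which makes the naming and formatting conventions easy to see (illustrative only, not part of the repository):

```python
from compose.service import build_container_name, format_environment

build_container_name('composetest', 'web', 1)                 # 'composetest_web_1'
build_container_name('composetest', 'web', 1, one_off=True)   # 'composetest_web_run_1'

# A `None` value is rendered as a bare variable name:
sorted(format_environment({'FOO': '1', 'BAR': None}))         # ['BAR', 'FOO=1']
```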
[start of compose/utils.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import codecs
5 import hashlib
6 import json
7 import json.decoder
8
9 import six
10
11
12 json_decoder = json.JSONDecoder()
13
14
15 def get_output_stream(stream):
16 if six.PY3:
17 return stream
18 return codecs.getwriter('utf-8')(stream)
19
20
21 def stream_as_text(stream):
22 """Given a stream of bytes or text, if any of the items in the stream
23 are bytes convert them to text.
24
25 This function can be removed once docker-py returns text streams instead
26 of byte streams.
27 """
28 for data in stream:
29 if not isinstance(data, six.text_type):
30 data = data.decode('utf-8', 'replace')
31 yield data
32
33
34 def line_splitter(buffer, separator=u'\n'):
35 index = buffer.find(six.text_type(separator))
36 if index == -1:
37 return None
38 return buffer[:index + 1], buffer[index + 1:]
39
40
41 def split_buffer(stream, splitter=None, decoder=lambda a: a):
42 """Given a generator which yields strings and a splitter function,
43 joins all input, splits on the separator and yields each chunk.
44
45 Unlike string.split(), each chunk includes the trailing
46 separator, except for the last one if none was found on the end
47 of the input.
48 """
49 splitter = splitter or line_splitter
50 buffered = six.text_type('')
51
52 for data in stream_as_text(stream):
53 buffered += data
54 while True:
55 buffer_split = splitter(buffered)
56 if buffer_split is None:
57 break
58
59 item, buffered = buffer_split
60 yield item
61
62 if buffered:
63 yield decoder(buffered)
64
65
66 def json_splitter(buffer):
67 """Attempt to parse a json object from a buffer. If there is at least one
68 object, return it and the rest of the buffer, otherwise return None.
69 """
70 try:
71 obj, index = json_decoder.raw_decode(buffer)
72 rest = buffer[json.decoder.WHITESPACE.match(buffer, index).end():]
73 return obj, rest
74 except ValueError:
75 return None
76
77
78 def json_stream(stream):
79 """Given a stream of text, return a stream of json objects.
80 This handles streams which are inconsistently buffered (some entries may
81 be newline delimited, and others are not).
82 """
83 return split_buffer(stream, json_splitter, json_decoder.decode)
84
85
86 def json_hash(obj):
87 dump = json.dumps(obj, sort_keys=True, separators=(',', ':'))
88 h = hashlib.sha256()
89 h.update(dump.encode('utf8'))
90 return h.hexdigest()
91
92
93 def microseconds_from_time_nano(time_nano):
94 return int(time_nano % 1000000000 / 1000)
95
96
97 def build_string_dict(source_dict):
98 return dict((k, str(v)) for k, v in source_dict.items())
99
[end of compose/utils.py]
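The buffering helpers above are easiest to understand with a tiny example of a byte stream that gets split mid-object (illustrative only, not part of the repository):

```python
from compose.utils import json_stream

# Two JSON progress events, chunked awkwardly across a message boundary:
chunks = [b'{"stream": "Step 1/2"}\n{"stream"', b': "Step 2/2"}\n']

print(list(json_stream(iter(chunks))))
# [{'stream': 'Step 1/2'}, {'stream': 'Step 2/2'}]
```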
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
 def euclidean(a, b):
-    while b:
-        a, b = b, a % b
-    return a
+    if b == 0:
+        return a
+    return euclidean(b, a % b)
 
 
 def bresenham(x0, y0, x1, y1):
     points = []
     dx = abs(x1 - x0)
     dy = abs(y1 - y0)
-    sx = 1 if x0 < x1 else -1
-    sy = 1 if y0 < y1 else -1
-    err = dx - dy
+    x, y = x0, y0
+    sx = -1 if x0 > x1 else 1
+    sy = -1 if y0 > y1 else 1
 
-    while True:
-        points.append((x0, y0))
-        if x0 == x1 and y0 == y1:
-            break
-        e2 = 2 * err
-        if e2 > -dy:
+    if dx > dy:
+        err = dx / 2.0
+        while x != x1:
+            points.append((x, y))
             err -= dy
-            x0 += sx
-        if e2 < dx:
-            err += dx
-            y0 += sy
+            if err < 0:
+                y += sy
+                err += dx
+            x += sx
+    else:
+        err = dy / 2.0
+        while y != y1:
+            points.append((x, y))
+            err -= dx
+            if err < 0:
+                x += sx
+                err += dy
+            y += sy
+    points.append((x, y))
     return points
</patch>
| docker/compose | 768460483089f2f712f32eb859c95d1ba30fdc0e | Pyinstaller has issues with signals
There's a bunch of history in #1040 and #2055.
We've tried multiple implementations of signal handlers, but each has its own set of issues, which appear **ONLY** when run from the frozen binary created by pyinstaller.
It looks like there is a very old issue in pyinstaller around this: https://github.com/pyinstaller/pyinstaller/issues/208
These problems can manifest in three ways:
- a `thread.error` when a signal interrupts a thread lock
- the signal handlers being completely ignored and a `KeyboardInterrupt` being raised instead
- the signal handlers being registered but the try/except that should handle the exception is skipped (this could be caused by the signal firing multiple times for a single `ctrl-c`, but I can't really verify that's what is happening); see the sketch below for roughly how these handlers are registered
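For context, these handlers are installed with the standard library `signal` module. A minimal sketch of the registration pattern involved (the `ShutdownException` name comes from `compose.cli.signals`; the rest is an approximation, not a verbatim copy of that file):

```python
import signal


class ShutdownException(Exception):
    pass


def shutdown(sig, frame):
    # In theory every SIGINT/SIGTERM lands here and is converted into an
    # exception that the CLI catches to shut containers down cleanly.
    # Under the PyInstaller bootloader this is where things go wrong.
    raise ShutdownException()


signal.signal(signal.SIGINT, shutdown)
signal.signal(signal.SIGTERM, shutdown)
```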
| https://github.com/pyinstaller/pyinstaller/pull/1822 seems to fix it!
We could run my patched version to build the binaries if they don't want to accept the patch upstream. I'll prepare a PR so it can be tested on OSX.
It looks like the windows branch uses a completely different function, so there should be no impact on windows.
Having just upgraded to 1.6.1, I'm now hitting this most of the time. It's an irregular behaviour: sometimes CTRL-C stops the container, sometimes it aborts. Quite an annoying bug, leaving containers running in the background when I wasn't aware of it!
| 2016-03-01T21:46:06Z | <patch>
diff --git a/compose/cli/main.py b/compose/cli/main.py
--- a/compose/cli/main.py
+++ b/compose/cli/main.py
@@ -54,7 +54,7 @@ def main():
try:
command = TopLevelCommand()
command.sys_dispatch()
- except KeyboardInterrupt:
+ except (KeyboardInterrupt, signals.ShutdownException):
log.error("Aborting.")
sys.exit(1)
except (UserError, NoSuchService, ConfigurationError) as e:
diff --git a/compose/cli/multiplexer.py b/compose/cli/multiplexer.py
--- a/compose/cli/multiplexer.py
+++ b/compose/cli/multiplexer.py
@@ -10,6 +10,7 @@
except ImportError:
from queue import Queue, Empty # Python 3.x
+from compose.cli.signals import ShutdownException
STOP = object()
@@ -47,7 +48,7 @@ def loop(self):
pass
# See https://github.com/docker/compose/issues/189
except thread.error:
- raise KeyboardInterrupt()
+ raise ShutdownException()
def _init_readers(self):
for iterator in self.iterators:
diff --git a/compose/parallel.py b/compose/parallel.py
--- a/compose/parallel.py
+++ b/compose/parallel.py
@@ -6,9 +6,11 @@
from threading import Thread
from docker.errors import APIError
+from six.moves import _thread as thread
from six.moves.queue import Empty
from six.moves.queue import Queue
+from compose.cli.signals import ShutdownException
from compose.utils import get_output_stream
@@ -26,19 +28,7 @@ def parallel_execute(objects, func, index_func, msg):
objects = list(objects)
stream = get_output_stream(sys.stderr)
writer = ParallelStreamWriter(stream, msg)
-
- for obj in objects:
- writer.initialize(index_func(obj))
-
- q = Queue()
-
- # TODO: limit the number of threads #1828
- for obj in objects:
- t = Thread(
- target=perform_operation,
- args=(func, obj, q.put, index_func(obj)))
- t.daemon = True
- t.start()
+ q = setup_queue(writer, objects, func, index_func)
done = 0
errors = {}
@@ -48,6 +38,9 @@ def parallel_execute(objects, func, index_func, msg):
msg_index, result = q.get(timeout=1)
except Empty:
continue
+ # See https://github.com/docker/compose/issues/189
+ except thread.error:
+ raise ShutdownException()
if isinstance(result, APIError):
errors[msg_index] = "error", result.explanation
@@ -68,6 +61,23 @@ def parallel_execute(objects, func, index_func, msg):
raise error
+def setup_queue(writer, objects, func, index_func):
+ for obj in objects:
+ writer.initialize(index_func(obj))
+
+ q = Queue()
+
+ # TODO: limit the number of threads #1828
+ for obj in objects:
+ t = Thread(
+ target=perform_operation,
+ args=(func, obj, q.put, index_func(obj)))
+ t.daemon = True
+ t.start()
+
+ return q
+
+
class ParallelStreamWriter(object):
"""Write out messages for operations happening in parallel.
</patch> | [] | [] | |||
googleapis__google-cloud-python-10162 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BigQuery: raise a `TypeError` if a dictionary is passed to `insert_rows_json`
**Is your feature request related to a problem? Please describe.**
If I want to only insert a single row at a time into a table, it's easy to accidentally try something like:
```python
json_row = {"col1": "hello", "col2": "world"}
errors = client.insert_rows_json(
table,
json_row
)
```
This results in a `400 BadRequest` error from the API, because it expects a list of rows, not a single row.
**Describe the solution you'd like**
It's difficult to debug this situation from the API response, so it'd be better if we raised a client-side error for passing in the wrong type for `json_rows`.
**Describe alternatives you've considered**
Leave as-is and request a better server-side message. This may be difficult to do, as the error happens at a level above BigQuery, which translates JSON to Protobuf for internal use.
**Additional context**
This issue was encountered by a customer engineer, and it took me a bit of debugging to figure out the actual issue. I expect other customers will encounter this problem as well.
</issue>
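The fix being asked for boils down to a small client-side type check before the request is built. A minimal sketch of such a guard (hypothetical helper name and error wording; only the `json_rows` parameter name is taken from the issue):

```python
def _check_json_rows(json_rows):
    """Hypothetical guard: reject a bare dict before it reaches the API."""
    if isinstance(json_rows, dict):
        raise TypeError(
            "json_rows argument should be a sequence of dicts, not a single "
            "dict; wrap the row in a list, e.g. [json_row]"
        )


# Called at the top of Client.insert_rows_json, the example from the issue
# would then fail fast with a clear client-side error instead of a 400:
_check_json_rows({"col1": "hello", "col2": "world"})  # raises TypeError
```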
<code>
[start of README.rst]
1 Google Cloud Python Client
2 ==========================
3
4 Python idiomatic clients for `Google Cloud Platform`_ services.
5
6 .. _Google Cloud Platform: https://cloud.google.com/
7
8 **Heads up**! These libraries are supported on App Engine standard's `Python 3 runtime`_ but are *not* supported on App Engine's `Python 2 runtime`_.
9
10 .. _Python 3 runtime: https://cloud.google.com/appengine/docs/standard/python3
11 .. _Python 2 runtime: https://cloud.google.com/appengine/docs/standard/python
12
13 General Availability
14 --------------------
15
16 **GA** (general availability) indicates that the client library for a
17 particular service is stable, and that the code surface will not change in
18 backwards-incompatible ways unless either absolutely necessary (e.g. because
19 of critical security issues) or with an extensive deprecation period.
20 Issues and requests against GA libraries are addressed with the highest
21 priority.
22
23 .. note::
24
25 Sub-components of GA libraries explicitly marked as beta in the
26 import path (e.g. ``google.cloud.language_v1beta2``) should be considered
27 to be beta.
28
29 The following client libraries have **GA** support:
30
31 - `Google BigQuery`_ (`BigQuery README`_, `BigQuery Documentation`_)
32 - `Google Cloud Bigtable`_ (`Bigtable README`_, `Bigtable Documentation`_)
33 - `Google Cloud Datastore`_ (`Datastore README`_, `Datastore Documentation`_)
34 - `Google Cloud KMS`_ (`KMS README`_, `KMS Documentation`_)
35 - `Google Cloud Natural Language`_ (`Natural Language README`_, `Natural Language Documentation`_)
36 - `Google Cloud Pub/Sub`_ (`Pub/Sub README`_, `Pub/Sub Documentation`_)
37 - `Google Cloud Scheduler`_ (`Scheduler README`_, `Scheduler Documentation`_)
38 - `Google Cloud Spanner`_ (`Spanner README`_, `Spanner Documentation`_)
39 - `Google Cloud Speech to Text`_ (`Speech to Text README`_, `Speech to Text Documentation`_)
40 - `Google Cloud Storage`_ (`Storage README`_, `Storage Documentation`_)
41 - `Google Cloud Tasks`_ (`Tasks README`_, `Tasks Documentation`_)
42 - `Google Cloud Translation`_ (`Translation README`_, `Translation Documentation`_)
43 - `Stackdriver Logging`_ (`Logging README`_, `Logging Documentation`_)
44
45 .. _Google BigQuery: https://pypi.org/project/google-cloud-bigquery/
46 .. _BigQuery README: https://github.com/googleapis/google-cloud-python/tree/master/bigquery
47 .. _BigQuery Documentation: https://googleapis.dev/python/bigquery/latest
48
49 .. _Google Cloud Bigtable: https://pypi.org/project/google-cloud-bigtable/
50 .. _Bigtable README: https://github.com/googleapis/google-cloud-python/tree/master/bigtable
51 .. _Bigtable Documentation: https://googleapis.dev/python/bigtable/latest
52
53 .. _Google Cloud Datastore: https://pypi.org/project/google-cloud-datastore/
54 .. _Datastore README: https://github.com/googleapis/google-cloud-python/tree/master/datastore
55 .. _Datastore Documentation: https://googleapis.dev/python/datastore/latest
56
57 .. _Google Cloud KMS: https://pypi.org/project/google-cloud-kms/
58 .. _KMS README: https://github.com/googleapis/google-cloud-python/tree/master/kms
59 .. _KMS Documentation: https://googleapis.dev/python/cloudkms/latest
60
61 .. _Google Cloud Natural Language: https://pypi.org/project/google-cloud-language/
62 .. _Natural Language README: https://github.com/googleapis/google-cloud-python/tree/master/language
63 .. _Natural Language Documentation: https://googleapis.dev/python/language/latest
64
65 .. _Google Cloud Pub/Sub: https://pypi.org/project/google-cloud-pubsub/
66 .. _Pub/Sub README: https://github.com/googleapis/google-cloud-python/tree/master/pubsub
67 .. _Pub/Sub Documentation: https://googleapis.dev/python/pubsub/latest
68
69 .. _Google Cloud Spanner: https://pypi.org/project/google-cloud-spanner
70 .. _Spanner README: https://github.com/googleapis/google-cloud-python/tree/master/spanner
71 .. _Spanner Documentation: https://googleapis.dev/python/spanner/latest
72
73 .. _Google Cloud Speech to Text: https://pypi.org/project/google-cloud-speech/
74 .. _Speech to Text README: https://github.com/googleapis/google-cloud-python/tree/master/speech
75 .. _Speech to Text Documentation: https://googleapis.dev/python/speech/latest
76
77 .. _Google Cloud Storage: https://pypi.org/project/google-cloud-storage/
78 .. _Storage README: https://github.com/googleapis/google-cloud-python/tree/master/storage
79 .. _Storage Documentation: https://googleapis.dev/python/storage/latest
80
81 .. _Google Cloud Tasks: https://pypi.org/project/google-cloud-tasks/
82 .. _Tasks README: https://github.com/googleapis/google-cloud-python/tree/master/tasks
83 .. _Tasks Documentation: https://googleapis.dev/python/cloudtasks/latest
84
85 .. _Google Cloud Translation: https://pypi.org/project/google-cloud-translate/
86 .. _Translation README: https://github.com/googleapis/google-cloud-python/tree/master/translate
87 .. _Translation Documentation: https://googleapis.dev/python/translation/latest
88
89 .. _Google Cloud Scheduler: https://pypi.org/project/google-cloud-scheduler/
90 .. _Scheduler README: https://github.com/googleapis/google-cloud-python/tree/master/scheduler
91 .. _Scheduler Documentation: https://googleapis.dev/python/cloudscheduler/latest
92
93 .. _Stackdriver Logging: https://pypi.org/project/google-cloud-logging/
94 .. _Logging README: https://github.com/googleapis/google-cloud-python/tree/master/logging
95 .. _Logging Documentation: https://googleapis.dev/python/logging/latest
96
97 Beta Support
98 ------------
99
100 **Beta** indicates that the client library for a particular service is
101 mostly stable and is being prepared for release. Issues and requests
102 against beta libraries are addressed with a higher priority.
103
104 The following client libraries have **beta** support:
105
106 - `Google Cloud Billing Budgets`_ (`Billing Budgets README`_, `Billing Budgets Documentation`_)
107 - `Google Cloud Data Catalog`_ (`Data Catalog README`_, `Data Catalog Documentation`_)
108 - `Google Cloud Firestore`_ (`Firestore README`_, `Firestore Documentation`_)
109 - `Google Cloud Video Intelligence`_ (`Video Intelligence README`_, `Video Intelligence Documentation`_)
110 - `Google Cloud Vision`_ (`Vision README`_, `Vision Documentation`_)
111
112 .. _Google Cloud Billing Budgets: https://pypi.org/project/google-cloud-billing-budgets/
113 .. _Billing Budgets README: https://github.com/googleapis/google-cloud-python/tree/master/billingbudgets
114 .. _Billing Budgets Documentation: https://googleapis.dev/python/billingbudgets/latest
115
116 .. _Google Cloud Data Catalog: https://pypi.org/project/google-cloud-datacatalog/
117 .. _Data Catalog README: https://github.com/googleapis/google-cloud-python/tree/master/datacatalog
118 .. _Data Catalog Documentation: https://googleapis.dev/python/datacatalog/latest
119
120 .. _Google Cloud Firestore: https://pypi.org/project/google-cloud-firestore/
121 .. _Firestore README: https://github.com/googleapis/google-cloud-python/tree/master/firestore
122 .. _Firestore Documentation: https://googleapis.dev/python/firestore/latest
123
124 .. _Google Cloud Video Intelligence: https://pypi.org/project/google-cloud-videointelligence
125 .. _Video Intelligence README: https://github.com/googleapis/google-cloud-python/tree/master/videointelligence
126 .. _Video Intelligence Documentation: https://googleapis.dev/python/videointelligence/latest
127
128 .. _Google Cloud Vision: https://pypi.org/project/google-cloud-vision/
129 .. _Vision README: https://github.com/googleapis/google-cloud-python/tree/master/vision
130 .. _Vision Documentation: https://googleapis.dev/python/vision/latest
131
132
133 Alpha Support
134 -------------
135
136 **Alpha** indicates that the client library for a particular service is
137 still a work-in-progress and is more likely to get backwards-incompatible
138 updates. See `versioning`_ for more details.
139
140 The following client libraries have **alpha** support:
141
142 - `Google Cloud Asset`_ (`Asset README`_, `Asset Documentation`_)
143 - `Google Cloud AutoML`_ (`AutoML README`_, `AutoML Documentation`_)
144 - `Google BigQuery Data Transfer`_ (`BigQuery Data Transfer README`_, `BigQuery Documentation`_)
145 - `Google Cloud Bigtable - HappyBase`_ (`HappyBase README`_, `HappyBase Documentation`_)
146 - `Google Cloud Build`_ (`Cloud Build README`_, `Cloud Build Documentation`_)
147 - `Google Cloud Container`_ (`Container README`_, `Container Documentation`_)
148 - `Google Cloud Container Analysis`_ (`Container Analysis README`_, `Container Analysis Documentation`_)
149 - `Google Cloud Dataproc`_ (`Dataproc README`_, `Dataproc Documentation`_)
150 - `Google Cloud DLP`_ (`DLP README`_, `DLP Documentation`_)
151 - `Google Cloud DNS`_ (`DNS README`_, `DNS Documentation`_)
152 - `Google Cloud IoT`_ (`IoT README`_, `IoT Documentation`_)
153 - `Google Cloud Memorystore for Redis`_ (`Redis README`_, `Redis Documentation`_)
154 - `Google Cloud Recommender`_ (`Recommender README`_, `Recommender Documentation`_)
155 - `Google Cloud Resource Manager`_ (`Resource Manager README`_, `Resource Manager Documentation`_)
156 - `Google Cloud Runtime Configuration`_ (`Runtime Config README`_, `Runtime Config Documentation`_)
157 - `Google Cloud Security Scanner`_ (`Security Scanner README`_ , `Security Scanner Documentation`_)
158 - `Google Cloud Trace`_ (`Trace README`_, `Trace Documentation`_)
159 - `Google Cloud Text-to-Speech`_ (`Text-to-Speech README`_, `Text-to-Speech Documentation`_)
160 - `Grafeas`_ (`Grafeas README`_, `Grafeas Documentation`_)
161 - `Stackdriver Error Reporting`_ (`Error Reporting README`_, `Error Reporting Documentation`_)
162 - `Stackdriver Monitoring`_ (`Monitoring README`_, `Monitoring Documentation`_)
163
164 .. _Google Cloud Asset: https://pypi.org/project/google-cloud-asset/
165 .. _Asset README: https://github.com/googleapis/google-cloud-python/blob/master/asset
166 .. _Asset Documentation: https://googleapis.dev/python/cloudasset/latest
167
168 .. _Google Cloud AutoML: https://pypi.org/project/google-cloud-automl/
169 .. _AutoML README: https://github.com/googleapis/google-cloud-python/blob/master/automl
170 .. _AutoML Documentation: https://googleapis.dev/python/automl/latest
171
172 .. _Google BigQuery Data Transfer: https://pypi.org/project/google-cloud-bigquery-datatransfer/
173 .. _BigQuery Data Transfer README: https://github.com/googleapis/google-cloud-python/tree/master/bigquery_datatransfer
174 .. _BigQuery Documentation: https://googleapis.dev/python/bigquery/latest
175
176 .. _Google Cloud Bigtable - HappyBase: https://pypi.org/project/google-cloud-happybase/
177 .. _HappyBase README: https://github.com/googleapis/google-cloud-python-happybase
178 .. _HappyBase Documentation: https://google-cloud-python-happybase.readthedocs.io/en/latest/
179
180 .. _Google Cloud Build: https://pypi.org/project/google-cloud-build/
181 .. _Cloud Build README: https://github.com/googleapis/google-cloud-python/tree/master/cloudbuild
182 .. _Cloud Build Documentation: https://googleapis.dev/python/cloudbuild/latest
183
184 .. _Google Cloud Container: https://pypi.org/project/google-cloud-container/
185 .. _Container README: https://github.com/googleapis/google-cloud-python/tree/master/container
186 .. _Container Documentation: https://googleapis.dev/python/container/latest
187
188 .. _Google Cloud Container Analysis: https://pypi.org/project/google-cloud-containeranalysis/
189 .. _Container Analysis README: https://github.com/googleapis/google-cloud-python/tree/master/containeranalysis
190 .. _Container Analysis Documentation: https://googleapis.dev/python/containeranalysis/latest
191
192 .. _Google Cloud Dataproc: https://pypi.org/project/google-cloud-dataproc/
193 .. _Dataproc README: https://github.com/googleapis/google-cloud-python/tree/master/dataproc
194 .. _Dataproc Documentation: https://googleapis.dev/python/dataproc/latest
195
196 .. _Google Cloud DLP: https://pypi.org/project/google-cloud-dlp/
197 .. _DLP README: https://github.com/googleapis/google-cloud-python/tree/master/dlp
198 .. _DLP Documentation: https://googleapis.dev/python/dlp/latest
199
200 .. _Google Cloud DNS: https://pypi.org/project/google-cloud-dns/
201 .. _DNS README: https://github.com/googleapis/google-cloud-python/tree/master/dns
202 .. _DNS Documentation: https://googleapis.dev/python/dns/latest
203
204 .. _Google Cloud IoT: https://pypi.org/project/google-cloud-iot/
205 .. _IoT README: https://github.com/googleapis/google-cloud-python/tree/master/iot
206 .. _IoT Documentation: https://googleapis.dev/python/cloudiot/latest
207
208 .. _Google Cloud Memorystore for Redis: https://pypi.org/project/google-cloud-redis/
209 .. _Redis README: https://github.com/googleapis/google-cloud-python/tree/master/redis
210 .. _Redis Documentation: https://googleapis.dev/python/redis/latest
211
212 .. _Google Cloud Recommender: https://pypi.org/project/google-cloud-recommender/
213 .. _Recommender README: https://github.com/googleapis/google-cloud-python/tree/master/recommender
214 .. _Recommender Documentation: https://googleapis.dev/python/recommender/latest
215
216 .. _Google Cloud Resource Manager: https://pypi.org/project/google-cloud-resource-manager/
217 .. _Resource Manager README: https://github.com/googleapis/google-cloud-python/tree/master/resource_manager
218 .. _Resource Manager Documentation: https://googleapis.dev/python/cloudresourcemanager/latest
219
220 .. _Google Cloud Runtime Configuration: https://pypi.org/project/google-cloud-runtimeconfig/
221 .. _Runtime Config README: https://github.com/googleapis/google-cloud-python/tree/master/runtimeconfig
222 .. _Runtime Config Documentation: https://googleapis.dev/python/runtimeconfig/latest
223
224 .. _Google Cloud Security Scanner: https://pypi.org/project/google-cloud-websecurityscanner/
225 .. _Security Scanner README: https://github.com/googleapis/google-cloud-python/blob/master/websecurityscanner
226 .. _Security Scanner Documentation: https://googleapis.dev/python/websecurityscanner/latest
227
228 .. _Google Cloud Text-to-Speech: https://pypi.org/project/google-cloud-texttospeech/
229 .. _Text-to-Speech README: https://github.com/googleapis/google-cloud-python/tree/master/texttospeech
230 .. _Text-to-Speech Documentation: https://googleapis.dev/python/texttospeech/latest
231
232 .. _Google Cloud Trace: https://pypi.org/project/google-cloud-trace/
233 .. _Trace README: https://github.com/googleapis/google-cloud-python/tree/master/trace
234 .. _Trace Documentation: https://googleapis.dev/python/cloudtrace/latest
235
236 .. _Grafeas: https://pypi.org/project/grafeas/
237 .. _Grafeas README: https://github.com/googleapis/google-cloud-python/tree/master/grafeas
238 .. _Grafeas Documentation: https://googleapis.dev/python/grafeas/latest
239
240 .. _Stackdriver Error Reporting: https://pypi.org/project/google-cloud-error-reporting/
241 .. _Error Reporting README: https://github.com/googleapis/google-cloud-python/tree/master/error_reporting
242 .. _Error Reporting Documentation: https://googleapis.dev/python/clouderrorreporting/latest
243
244 .. _Stackdriver Monitoring: https://pypi.org/project/google-cloud-monitoring/
245 .. _Monitoring README: https://github.com/googleapis/google-cloud-python/tree/master/monitoring
246 .. _Monitoring Documentation: https://googleapis.dev/python/monitoring/latest
247
248 .. _versioning: https://github.com/googleapis/google-cloud-python/blob/master/CONTRIBUTING.rst#versioning
249
250 If you need support for other Google APIs, check out the
251 `Google APIs Python Client library`_.
252
253 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client
254
255
256 Example Applications
257 --------------------
258
259 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
260 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage
261
262 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python
263 .. _tutorial: https://cloud.google.com/python
264 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo
265
266
267 Authentication
268 --------------
269
270 With ``google-cloud-python`` we try to make authentication as painless as possible.
271 Check out the `Authentication section`_ in our documentation to learn more.
272 You may also find the `authentication document`_ shared by all the
273 ``google-cloud-*`` libraries to be helpful.
274
275 .. _Authentication section: https://googleapis.dev/python/google-api-core/latest/auth.html
276 .. _authentication document: https://github.com/googleapis/google-cloud-common/tree/master/authentication
277
278 Contributing
279 ------------
280
281 Contributions to this library are always welcome and highly encouraged.
282
283 See the `CONTRIBUTING doc`_ for more information on how to get started.
284
285 .. _CONTRIBUTING doc: https://github.com/googleapis/google-cloud-python/blob/master/CONTRIBUTING.rst
286
287
288 Community
289 ---------
290
291 Google Cloud Platform Python developers hang out in `Slack`_ in the ``#python``
292 channel, click here to `get an invitation`_.
293
294 .. _Slack: https://googlecloud-community.slack.com
295 .. _get an invitation: https://gcp-slack.appspot.com/
296
297
298 License
299 -------
300
301 Apache 2.0 - See `the LICENSE`_ for more information.
302
303 .. _the LICENSE: https://github.com/googleapis/google-cloud-python/blob/master/LICENSE
304
[end of README.rst]
[start of bigquery/google/cloud/bigquery/table.py]
1 # Copyright 2015 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Define API Tables."""
16
17 from __future__ import absolute_import
18
19 import copy
20 import datetime
21 import functools
22 import logging
23 import operator
24 import warnings
25
26 import six
27
28 try:
29 from google.cloud import bigquery_storage_v1beta1
30 except ImportError: # pragma: NO COVER
31 bigquery_storage_v1beta1 = None
32
33 try:
34 import pandas
35 except ImportError: # pragma: NO COVER
36 pandas = None
37
38 try:
39 import pyarrow
40 except ImportError: # pragma: NO COVER
41 pyarrow = None
42
43 try:
44 import tqdm
45 except ImportError: # pragma: NO COVER
46 tqdm = None
47
48 import google.api_core.exceptions
49 from google.api_core.page_iterator import HTTPIterator
50
51 import google.cloud._helpers
52 from google.cloud.bigquery import _helpers
53 from google.cloud.bigquery import _pandas_helpers
54 from google.cloud.bigquery.schema import _build_schema_resource
55 from google.cloud.bigquery.schema import _parse_schema_resource
56 from google.cloud.bigquery.schema import _to_schema_fields
57 from google.cloud.bigquery.external_config import ExternalConfig
58 from google.cloud.bigquery.encryption_configuration import EncryptionConfiguration
59
60
61 _LOGGER = logging.getLogger(__name__)
62
63 _NO_BQSTORAGE_ERROR = (
64 "The google-cloud-bigquery-storage library is not installed, "
65 "please install google-cloud-bigquery-storage to use bqstorage features."
66 )
67 _NO_PANDAS_ERROR = (
68 "The pandas library is not installed, please install "
69 "pandas to use the to_dataframe() function."
70 )
71 _NO_PYARROW_ERROR = (
72 "The pyarrow library is not installed, please install "
73 "pyarrow to use the to_arrow() function."
74 )
75 _NO_TQDM_ERROR = (
76 "A progress bar was requested, but there was an error loading the tqdm "
77 "library. Please install tqdm to use the progress bar functionality."
78 )
79 _TABLE_HAS_NO_SCHEMA = 'Table has no schema: call "client.get_table()"'
80
81
82 def _reference_getter(table):
83 """A :class:`~google.cloud.bigquery.table.TableReference` pointing to
84 this table.
85
86 Returns:
87 google.cloud.bigquery.table.TableReference: pointer to this table.
88 """
89 from google.cloud.bigquery import dataset
90
91 dataset_ref = dataset.DatasetReference(table.project, table.dataset_id)
92 return TableReference(dataset_ref, table.table_id)
93
94
95 def _view_use_legacy_sql_getter(table):
96 """bool: Specifies whether to execute the view with Legacy or Standard SQL.
97
98 This boolean specifies whether to execute the view with Legacy SQL
99 (:data:`True`) or Standard SQL (:data:`False`). The client side default is
100 :data:`False`. The server-side default is :data:`True`. If this table is
101 not a view, :data:`None` is returned.
102
103 Raises:
104 ValueError: For invalid value types.
105 """
106 view = table._properties.get("view")
107 if view is not None:
108 # The server-side default for useLegacySql is True.
109 return view.get("useLegacySql", True)
110 # In some cases, such as in a table list, no view object is present, but the
111 # resource still represents a view. Use the type as a fallback.
112 if table.table_type == "VIEW":
113 # The server-side default for useLegacySql is True.
114 return True
115
116
117 class TableReference(object):
118 """TableReferences are pointers to tables.
119
120 See
121 https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablereference
122
123 Args:
124 dataset_ref (google.cloud.bigquery.dataset.DatasetReference):
125 A pointer to the dataset
126 table_id (str): The ID of the table
127 """
128
129 def __init__(self, dataset_ref, table_id):
130 self._project = dataset_ref.project
131 self._dataset_id = dataset_ref.dataset_id
132 self._table_id = table_id
133
134 @property
135 def project(self):
136 """str: Project bound to the table"""
137 return self._project
138
139 @property
140 def dataset_id(self):
141 """str: ID of dataset containing the table."""
142 return self._dataset_id
143
144 @property
145 def table_id(self):
146 """str: The table ID."""
147 return self._table_id
148
149 @property
150 def path(self):
151 """str: URL path for the table's APIs."""
152 return "/projects/%s/datasets/%s/tables/%s" % (
153 self._project,
154 self._dataset_id,
155 self._table_id,
156 )
157
158 @classmethod
159 def from_string(cls, table_id, default_project=None):
160 """Construct a table reference from table ID string.
161
162 Args:
163 table_id (str):
164 A table ID in standard SQL format. If ``default_project``
165 is not specified, this must include a project ID, dataset
166 ID, and table ID, each separated by ``.``.
167 default_project (str):
168 Optional. The project ID to use when ``table_id`` does not
169 include a project ID.
170
171 Returns:
172 TableReference: Table reference parsed from ``table_id``.
173
174 Examples:
175 >>> TableReference.from_string('my-project.mydataset.mytable')
176 TableRef...(DatasetRef...('my-project', 'mydataset'), 'mytable')
177
178 Raises:
179 ValueError:
180 If ``table_id`` is not a fully-qualified table ID in
181 standard SQL format.
182 """
183 from google.cloud.bigquery.dataset import DatasetReference
184
185 (
186 output_project_id,
187 output_dataset_id,
188 output_table_id,
189 ) = _helpers._parse_3_part_id(
190 table_id, default_project=default_project, property_name="table_id"
191 )
192
193 return cls(
194 DatasetReference(output_project_id, output_dataset_id), output_table_id
195 )
196
197 @classmethod
198 def from_api_repr(cls, resource):
199 """Factory: construct a table reference given its API representation
200
201 Args:
202 resource (Dict[str, object]):
203 Table reference representation returned from the API
204
205 Returns:
206 google.cloud.bigquery.table.TableReference:
207 Table reference parsed from ``resource``.
208 """
209 from google.cloud.bigquery.dataset import DatasetReference
210
211 project = resource["projectId"]
212 dataset_id = resource["datasetId"]
213 table_id = resource["tableId"]
214 return cls(DatasetReference(project, dataset_id), table_id)
215
216 def to_api_repr(self):
217 """Construct the API resource representation of this table reference.
218
219 Returns:
220 Dict[str, object]: Table reference represented as an API resource
221 """
222 return {
223 "projectId": self._project,
224 "datasetId": self._dataset_id,
225 "tableId": self._table_id,
226 }
227
228 def to_bqstorage(self):
229 """Construct a BigQuery Storage API representation of this table.
230
231 Install the ``google-cloud-bigquery-storage`` package to use this
232 feature.
233
234 If the ``table_id`` contains a partition identifier (e.g.
235 ``my_table$201812``) or a snapshot identifier (e.g.
236 ``mytable@1234567890``), it is ignored. Use
237 :class:`google.cloud.bigquery_storage_v1beta1.types.TableReadOptions`
238 to filter rows by partition. Use
239 :class:`google.cloud.bigquery_storage_v1beta1.types.TableModifiers`
240 to select a specific snapshot to read from.
241
242 Returns:
243 google.cloud.bigquery_storage_v1beta1.types.TableReference:
244 A reference to this table in the BigQuery Storage API.
245
246 Raises:
247 ValueError:
248 If the :mod:`google.cloud.bigquery_storage_v1beta1` module
249 cannot be imported.
250 """
251 if bigquery_storage_v1beta1 is None:
252 raise ValueError(_NO_BQSTORAGE_ERROR)
253
254 table_ref = bigquery_storage_v1beta1.types.TableReference()
255 table_ref.project_id = self._project
256 table_ref.dataset_id = self._dataset_id
257 table_id = self._table_id
258
259 if "@" in table_id:
260 table_id = table_id.split("@")[0]
261
262 if "$" in table_id:
263 table_id = table_id.split("$")[0]
264
265 table_ref.table_id = table_id
266
267 return table_ref
268
269 def _key(self):
270 """A tuple key that uniquely describes this field.
271
272 Used to compute this instance's hashcode and evaluate equality.
273
274 Returns:
275 Tuple[str]: The contents of this :class:`DatasetReference`.
276 """
277 return (self._project, self._dataset_id, self._table_id)
278
279 def __eq__(self, other):
280 if not isinstance(other, TableReference):
281 return NotImplemented
282 return self._key() == other._key()
283
284 def __ne__(self, other):
285 return not self == other
286
287 def __hash__(self):
288 return hash(self._key())
289
290 def __repr__(self):
291 from google.cloud.bigquery.dataset import DatasetReference
292
293 dataset_ref = DatasetReference(self._project, self._dataset_id)
294 return "TableReference({}, '{}')".format(repr(dataset_ref), self._table_id)
295
296
297 class Table(object):
298 """Tables represent a set of rows whose values correspond to a schema.
299
300 See
301 https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource-table
302
303 Args:
304 table_ref (Union[google.cloud.bigquery.table.TableReference, str]):
305 A pointer to a table. If ``table_ref`` is a string, it must
306 include a project ID, dataset ID, and table ID, each separated
307 by ``.``.
308 schema (Optional[Sequence[Union[ \
309 :class:`~google.cloud.bigquery.schema.SchemaField`, \
310 Mapping[str, Any] \
311 ]]]):
312 The table's schema. If any item is a mapping, its content must be
313 compatible with
314 :meth:`~google.cloud.bigquery.schema.SchemaField.from_api_repr`.
315 """
316
317 _PROPERTY_TO_API_FIELD = {
318 "friendly_name": "friendlyName",
319 "expires": "expirationTime",
320 "time_partitioning": "timePartitioning",
321 "partitioning_type": "timePartitioning",
322 "partition_expiration": "timePartitioning",
323 "view_use_legacy_sql": "view",
324 "view_query": "view",
325 "external_data_configuration": "externalDataConfiguration",
326 "encryption_configuration": "encryptionConfiguration",
327 "require_partition_filter": "requirePartitionFilter",
328 }
329
330 def __init__(self, table_ref, schema=None):
331 table_ref = _table_arg_to_table_ref(table_ref)
332 self._properties = {"tableReference": table_ref.to_api_repr(), "labels": {}}
333 # Let the @property do validation.
334 if schema is not None:
335 self.schema = schema
336
337 @property
338 def project(self):
339 """str: Project bound to the table."""
340 return self._properties["tableReference"]["projectId"]
341
342 @property
343 def dataset_id(self):
344 """str: ID of dataset containing the table."""
345 return self._properties["tableReference"]["datasetId"]
346
347 @property
348 def table_id(self):
349 """str: ID of the table."""
350 return self._properties["tableReference"]["tableId"]
351
352 reference = property(_reference_getter)
353
354 @property
355 def path(self):
356 """str: URL path for the table's APIs."""
357 return "/projects/%s/datasets/%s/tables/%s" % (
358 self.project,
359 self.dataset_id,
360 self.table_id,
361 )
362
363 @property
364 def require_partition_filter(self):
365 """bool: If set to true, queries over the partitioned table require a
366 partition filter (which can be used for partition elimination) to be
367 specified.
368 """
369 return self._properties.get("requirePartitionFilter")
370
371 @require_partition_filter.setter
372 def require_partition_filter(self, value):
373 self._properties["requirePartitionFilter"] = value
374
375 @property
376 def schema(self):
377 """Sequence[Union[ \
378 :class:`~google.cloud.bigquery.schema.SchemaField`, \
379 Mapping[str, Any] \
380 ]]:
381 Table's schema.
382
383 Raises:
384 Exception:
385 If ``schema`` is not a sequence, or if any item in the sequence
386 is not a :class:`~google.cloud.bigquery.schema.SchemaField`
387 instance or a compatible mapping representation of the field.
388 """
389 prop = self._properties.get("schema")
390 if not prop:
391 return []
392 else:
393 return _parse_schema_resource(prop)
394
395 @schema.setter
396 def schema(self, value):
397 if value is None:
398 self._properties["schema"] = None
399 else:
400 value = _to_schema_fields(value)
401 self._properties["schema"] = {"fields": _build_schema_resource(value)}
402
403 @property
404 def labels(self):
405 """Dict[str, str]: Labels for the table.
406
407 This method always returns a dict. To change a table's labels,
408 modify the dict, then call ``Client.update_table``. To delete a
409 label, set its value to :data:`None` before updating.
410
411 Raises:
412 ValueError: If ``value`` type is invalid.
413 """
414 return self._properties.setdefault("labels", {})
415
416 @labels.setter
417 def labels(self, value):
418 if not isinstance(value, dict):
419 raise ValueError("Pass a dict")
420 self._properties["labels"] = value
421
422 @property
423 def encryption_configuration(self):
424 """google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom
425 encryption configuration for the table.
426
427 Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None`
428 if using default encryption.
429
430 See `protecting data with Cloud KMS keys
431 <https://cloud.google.com/bigquery/docs/customer-managed-encryption>`_
432 in the BigQuery documentation.
433 """
434 prop = self._properties.get("encryptionConfiguration")
435 if prop is not None:
436 prop = EncryptionConfiguration.from_api_repr(prop)
437 return prop
438
439 @encryption_configuration.setter
440 def encryption_configuration(self, value):
441 api_repr = value
442 if value is not None:
443 api_repr = value.to_api_repr()
444 self._properties["encryptionConfiguration"] = api_repr
445
446 @property
447 def created(self):
448 """Union[datetime.datetime, None]: Datetime at which the table was
449 created (:data:`None` until set from the server).
450 """
451 creation_time = self._properties.get("creationTime")
452 if creation_time is not None:
453 # creation_time will be in milliseconds.
454 return google.cloud._helpers._datetime_from_microseconds(
455 1000.0 * float(creation_time)
456 )
457
458 @property
459 def etag(self):
460 """Union[str, None]: ETag for the table resource (:data:`None` until
461 set from the server).
462 """
463 return self._properties.get("etag")
464
465 @property
466 def modified(self):
467 """Union[datetime.datetime, None]: Datetime at which the table was last
468 modified (:data:`None` until set from the server).
469 """
470 modified_time = self._properties.get("lastModifiedTime")
471 if modified_time is not None:
472 # modified_time will be in milliseconds.
473 return google.cloud._helpers._datetime_from_microseconds(
474 1000.0 * float(modified_time)
475 )
476
477 @property
478 def num_bytes(self):
479 """Union[int, None]: The size of the table in bytes (:data:`None` until
480 set from the server).
481 """
482 return _helpers._int_or_none(self._properties.get("numBytes"))
483
484 @property
485 def num_rows(self):
486 """Union[int, None]: The number of rows in the table (:data:`None`
487 until set from the server).
488 """
489 return _helpers._int_or_none(self._properties.get("numRows"))
490
491 @property
492 def self_link(self):
493 """Union[str, None]: URL for the table resource (:data:`None` until set
494 from the server).
495 """
496 return self._properties.get("selfLink")
497
498 @property
499 def full_table_id(self):
500 """Union[str, None]: ID for the table (:data:`None` until set from the
501 server).
502
503 In the format ``project_id:dataset_id.table_id``.
504 """
505 return self._properties.get("id")
506
507 @property
508 def table_type(self):
509 """Union[str, None]: The type of the table (:data:`None` until set from
510 the server).
511
512 Possible values are ``'TABLE'``, ``'VIEW'``, or ``'EXTERNAL'``.
513 """
514 return self._properties.get("type")
515
516 @property
517 def range_partitioning(self):
518 """Optional[google.cloud.bigquery.table.RangePartitioning]:
519 Configures range-based partitioning for a table.
520
521 .. note::
522 **Beta**. The integer range partitioning feature is in a
523 pre-release state and might change or have limited support.
524
525 Only specify at most one of
526 :attr:`~google.cloud.bigquery.table.Table.time_partitioning` or
527 :attr:`~google.cloud.bigquery.table.Table.range_partitioning`.
528
529 Raises:
530 ValueError:
531 If the value is not
532 :class:`~google.cloud.bigquery.table.RangePartitioning` or
533 :data:`None`.
534 """
535 resource = self._properties.get("rangePartitioning")
536 if resource is not None:
537 return RangePartitioning(_properties=resource)
538
539 @range_partitioning.setter
540 def range_partitioning(self, value):
541 resource = value
542 if isinstance(value, RangePartitioning):
543 resource = value._properties
544 elif value is not None:
545 raise ValueError(
546 "Expected value to be RangePartitioning or None, got {}.".format(value)
547 )
548 self._properties["rangePartitioning"] = resource
549
550 @property
551 def time_partitioning(self):
552 """Optional[google.cloud.bigquery.table.TimePartitioning]: Configures time-based
553 partitioning for a table.
554
555 Only specify at most one of
556 :attr:`~google.cloud.bigquery.table.Table.time_partitioning` or
557 :attr:`~google.cloud.bigquery.table.Table.range_partitioning`.
558
559 Raises:
560 ValueError:
561 If the value is not
562 :class:`~google.cloud.bigquery.table.TimePartitioning` or
563 :data:`None`.
564 """
565 prop = self._properties.get("timePartitioning")
566 if prop is not None:
567 return TimePartitioning.from_api_repr(prop)
568
569 @time_partitioning.setter
570 def time_partitioning(self, value):
571 api_repr = value
572 if isinstance(value, TimePartitioning):
573 api_repr = value.to_api_repr()
574 elif value is not None:
575 raise ValueError(
576 "value must be google.cloud.bigquery.table.TimePartitioning " "or None"
577 )
578 self._properties["timePartitioning"] = api_repr
579
580 @property
581 def partitioning_type(self):
582 """Union[str, None]: Time partitioning of the table if it is
583 partitioned (Defaults to :data:`None`).
584
585 The only partitioning type that is currently supported is
586 :attr:`~google.cloud.bigquery.table.TimePartitioningType.DAY`.
587 """
588 warnings.warn(
589 "This method will be deprecated in future versions. Please use "
590 "Table.time_partitioning.type_ instead.",
591 PendingDeprecationWarning,
592 stacklevel=2,
593 )
594 if self.time_partitioning is not None:
595 return self.time_partitioning.type_
596
597 @partitioning_type.setter
598 def partitioning_type(self, value):
599 warnings.warn(
600 "This method will be deprecated in future versions. Please use "
601 "Table.time_partitioning.type_ instead.",
602 PendingDeprecationWarning,
603 stacklevel=2,
604 )
605 if self.time_partitioning is None:
606 self._properties["timePartitioning"] = {}
607 self._properties["timePartitioning"]["type"] = value
608
609 @property
610 def partition_expiration(self):
611 """Union[int, None]: Expiration time in milliseconds for a partition.
612
613 If :attr:`partition_expiration` is set and :attr:`type_` is
614 not set, :attr:`type_` will default to
615 :attr:`~google.cloud.bigquery.table.TimePartitioningType.DAY`.
616 """
617 warnings.warn(
618 "This method will be deprecated in future versions. Please use "
619 "Table.time_partitioning.expiration_ms instead.",
620 PendingDeprecationWarning,
621 stacklevel=2,
622 )
623 if self.time_partitioning is not None:
624 return self.time_partitioning.expiration_ms
625
626 @partition_expiration.setter
627 def partition_expiration(self, value):
628 warnings.warn(
629 "This method will be deprecated in future versions. Please use "
630 "Table.time_partitioning.expiration_ms instead.",
631 PendingDeprecationWarning,
632 stacklevel=2,
633 )
634 if self.time_partitioning is None:
635 self._properties["timePartitioning"] = {"type": TimePartitioningType.DAY}
636 self._properties["timePartitioning"]["expirationMs"] = str(value)
637
638 @property
639 def clustering_fields(self):
640 """Union[List[str], None]: Fields defining clustering for the table
641
642 (Defaults to :data:`None`).
643
644 Clustering fields are immutable after table creation.
645
646 .. note::
647
648 As of 2018-06-29, clustering fields cannot be set on a table
649 which does not also have time partitioning defined.
650 """
651 prop = self._properties.get("clustering")
652 if prop is not None:
653 return list(prop.get("fields", ()))
654
655 @clustering_fields.setter
656 def clustering_fields(self, value):
657 """Union[List[str], None]: Fields defining clustering for the table
658
659 (Defaults to :data:`None`).
660 """
661 if value is not None:
662 prop = self._properties.setdefault("clustering", {})
663 prop["fields"] = value
664 else:
665 if "clustering" in self._properties:
666 del self._properties["clustering"]
667
668 @property
669 def description(self):
670 """Union[str, None]: Description of the table (defaults to
671 :data:`None`).
672
673 Raises:
674 ValueError: For invalid value types.
675 """
676 return self._properties.get("description")
677
678 @description.setter
679 def description(self, value):
680 if not isinstance(value, six.string_types) and value is not None:
681 raise ValueError("Pass a string, or None")
682 self._properties["description"] = value
683
684 @property
685 def expires(self):
686 """Union[datetime.datetime, None]: Datetime at which the table will be
687 deleted.
688
689 Raises:
690 ValueError: For invalid value types.
691 """
692 expiration_time = self._properties.get("expirationTime")
693 if expiration_time is not None:
694 # expiration_time will be in milliseconds.
695 return google.cloud._helpers._datetime_from_microseconds(
696 1000.0 * float(expiration_time)
697 )
698
699 @expires.setter
700 def expires(self, value):
701 if not isinstance(value, datetime.datetime) and value is not None:
702 raise ValueError("Pass a datetime, or None")
703 value_ms = google.cloud._helpers._millis_from_datetime(value)
704 self._properties["expirationTime"] = _helpers._str_or_none(value_ms)
705
706 @property
707 def friendly_name(self):
708 """Union[str, None]: Title of the table (defaults to :data:`None`).
709
710 Raises:
711 ValueError: For invalid value types.
712 """
713 return self._properties.get("friendlyName")
714
715 @friendly_name.setter
716 def friendly_name(self, value):
717 if not isinstance(value, six.string_types) and value is not None:
718 raise ValueError("Pass a string, or None")
719 self._properties["friendlyName"] = value
720
721 @property
722 def location(self):
723 """Union[str, None]: Location in which the table is hosted
724
725 Defaults to :data:`None`.
726 """
727 return self._properties.get("location")
728
729 @property
730 def view_query(self):
731 """Union[str, None]: SQL query defining the table as a view (defaults
732 to :data:`None`).
733
734 By default, the query is treated as Standard SQL. To use Legacy
735 SQL, set :attr:`view_use_legacy_sql` to :data:`True`.
736
737 Raises:
738 ValueError: For invalid value types.
739 """
740 view = self._properties.get("view")
741 if view is not None:
742 return view.get("query")
743
744 @view_query.setter
745 def view_query(self, value):
746 if not isinstance(value, six.string_types):
747 raise ValueError("Pass a string")
748 view = self._properties.get("view")
749 if view is None:
750 view = self._properties["view"] = {}
751 view["query"] = value
752 # The service defaults useLegacySql to True, but this
753 # client uses Standard SQL by default.
754 if view.get("useLegacySql") is None:
755 view["useLegacySql"] = False
756
757 @view_query.deleter
758 def view_query(self):
759 """Delete SQL query defining the table as a view."""
760 self._properties.pop("view", None)
761
762 view_use_legacy_sql = property(_view_use_legacy_sql_getter)
763
764 @view_use_legacy_sql.setter
765 def view_use_legacy_sql(self, value):
766 if not isinstance(value, bool):
767 raise ValueError("Pass a boolean")
768 if self._properties.get("view") is None:
769 self._properties["view"] = {}
770 self._properties["view"]["useLegacySql"] = value
771
772 @property
773 def streaming_buffer(self):
774 """google.cloud.bigquery.StreamingBuffer: Information about a table's
775 streaming buffer.
776 """
777 sb = self._properties.get("streamingBuffer")
778 if sb is not None:
779 return StreamingBuffer(sb)
780
781 @property
782 def external_data_configuration(self):
783 """Union[google.cloud.bigquery.ExternalConfig, None]: Configuration for
784 an external data source (defaults to :data:`None`).
785
786 Raises:
787 ValueError: For invalid value types.
788 """
789 prop = self._properties.get("externalDataConfiguration")
790 if prop is not None:
791 prop = ExternalConfig.from_api_repr(prop)
792 return prop
793
794 @external_data_configuration.setter
795 def external_data_configuration(self, value):
796 if not (value is None or isinstance(value, ExternalConfig)):
797 raise ValueError("Pass an ExternalConfig or None")
798 api_repr = value
799 if value is not None:
800 api_repr = value.to_api_repr()
801 self._properties["externalDataConfiguration"] = api_repr
802
803 @classmethod
804 def from_string(cls, full_table_id):
805 """Construct a table from fully-qualified table ID.
806
807 Args:
808 full_table_id (str):
809 A fully-qualified table ID in standard SQL format. Must
810 include a project ID, dataset ID, and table ID, each
811 separated by ``.``.
812
813 Returns:
814 Table: Table parsed from ``full_table_id``.
815
816 Examples:
817 >>> Table.from_string('my-project.mydataset.mytable')
818 Table(TableRef...(D...('my-project', 'mydataset'), 'mytable'))
819
820 Raises:
821 ValueError:
822 If ``full_table_id`` is not a fully-qualified table ID in
823 standard SQL format.
824 """
825 return cls(TableReference.from_string(full_table_id))
826
827 @classmethod
828 def from_api_repr(cls, resource):
829 """Factory: construct a table given its API representation
830
831 Args:
832 resource (Dict[str, object]):
833 Table resource representation from the API
834
835 Returns:
836 google.cloud.bigquery.table.Table: Table parsed from ``resource``.
837
838 Raises:
839 KeyError:
840 If the ``resource`` lacks the key ``'tableReference'``, or if
841 the ``dict`` stored within the key ``'tableReference'`` lacks
842 the keys ``'tableId'``, ``'projectId'``, or ``'datasetId'``.
843 """
844 from google.cloud.bigquery import dataset
845
846 if (
847 "tableReference" not in resource
848 or "tableId" not in resource["tableReference"]
849 ):
850 raise KeyError(
851 "Resource lacks required identity information:"
852 '["tableReference"]["tableId"]'
853 )
854 project_id = resource["tableReference"]["projectId"]
855 table_id = resource["tableReference"]["tableId"]
856 dataset_id = resource["tableReference"]["datasetId"]
857 dataset_ref = dataset.DatasetReference(project_id, dataset_id)
858
859 table = cls(dataset_ref.table(table_id))
860 table._properties = resource
861
862 return table
863
864 def to_api_repr(self):
865 """Constructs the API resource of this table
866
867 Returns:
868 Dict[str, object]: Table represented as an API resource
869 """
870 return copy.deepcopy(self._properties)
871
872 def to_bqstorage(self):
873 """Construct a BigQuery Storage API representation of this table.
874
875 Returns:
876 google.cloud.bigquery_storage_v1beta1.types.TableReference:
877 A reference to this table in the BigQuery Storage API.
878 """
879 return self.reference.to_bqstorage()
880
881 def _build_resource(self, filter_fields):
882 """Generate a resource for ``update``."""
883 return _helpers._build_resource_from_properties(self, filter_fields)
884
885 def __repr__(self):
886 return "Table({})".format(repr(self.reference))
887
888
889 class TableListItem(object):
890 """A read-only table resource from a list operation.
891
892 For performance reasons, the BigQuery API only includes some of the table
893 properties when listing tables. Notably,
894 :attr:`~google.cloud.bigquery.table.Table.schema` and
895 :attr:`~google.cloud.bigquery.table.Table.num_rows` are missing.
896
897 For a full list of the properties that the BigQuery API returns, see the
898 `REST documentation for tables.list
899 <https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/list>`_.
900
901
902 Args:
903 resource (Dict[str, object]):
904 A table-like resource object from a table list response. A
905 ``tableReference`` property is required.
906
907 Raises:
908 ValueError:
909 If ``tableReference`` or one of its required members is missing
910 from ``resource``.
911 """
912
913 def __init__(self, resource):
914 if "tableReference" not in resource:
915 raise ValueError("resource must contain a tableReference value")
916 if "projectId" not in resource["tableReference"]:
917 raise ValueError(
918 "resource['tableReference'] must contain a projectId value"
919 )
920 if "datasetId" not in resource["tableReference"]:
921 raise ValueError(
922 "resource['tableReference'] must contain a datasetId value"
923 )
924 if "tableId" not in resource["tableReference"]:
925 raise ValueError("resource['tableReference'] must contain a tableId value")
926
927 self._properties = resource
928
929 @property
930 def created(self):
931 """Union[datetime.datetime, None]: Datetime at which the table was
932 created (:data:`None` until set from the server).
933 """
934 creation_time = self._properties.get("creationTime")
935 if creation_time is not None:
936 # creation_time will be in milliseconds.
937 return google.cloud._helpers._datetime_from_microseconds(
938 1000.0 * float(creation_time)
939 )
940
941 @property
942 def expires(self):
943 """Union[datetime.datetime, None]: Datetime at which the table will be
944 deleted.
945 """
946 expiration_time = self._properties.get("expirationTime")
947 if expiration_time is not None:
948 # expiration_time will be in milliseconds.
949 return google.cloud._helpers._datetime_from_microseconds(
950 1000.0 * float(expiration_time)
951 )
952
953 @property
954 def project(self):
955 """str: Project bound to the table."""
956 return self._properties["tableReference"]["projectId"]
957
958 @property
959 def dataset_id(self):
960 """str: ID of dataset containing the table."""
961 return self._properties["tableReference"]["datasetId"]
962
963 @property
964 def table_id(self):
965 """str: ID of the table."""
966 return self._properties["tableReference"]["tableId"]
967
968 reference = property(_reference_getter)
969
970 @property
971 def labels(self):
972 """Dict[str, str]: Labels for the table.
973
974 This method always returns a dict. To change a table's labels,
975 modify the dict, then call ``Client.update_table``. To delete a
976 label, set its value to :data:`None` before updating.
977 """
978 return self._properties.setdefault("labels", {})
979
980 @property
981 def full_table_id(self):
982 """Union[str, None]: ID for the table (:data:`None` until set from the
983 server).
984
985 In the format ``project_id:dataset_id.table_id``.
986 """
987 return self._properties.get("id")
988
989 @property
990 def table_type(self):
991 """Union[str, None]: The type of the table (:data:`None` until set from
992 the server).
993
994 Possible values are ``'TABLE'``, ``'VIEW'``, or ``'EXTERNAL'``.
995 """
996 return self._properties.get("type")
997
998 @property
999 def time_partitioning(self):
1000 """google.cloud.bigquery.table.TimePartitioning: Configures time-based
1001 partitioning for a table.
1002 """
1003 prop = self._properties.get("timePartitioning")
1004 if prop is not None:
1005 return TimePartitioning.from_api_repr(prop)
1006
1007 @property
1008 def partitioning_type(self):
1009 """Union[str, None]: Time partitioning of the table if it is
1010 partitioned (Defaults to :data:`None`).
1011 """
1012 warnings.warn(
1013 "This method will be deprecated in future versions. Please use "
1014 "TableListItem.time_partitioning.type_ instead.",
1015 PendingDeprecationWarning,
1016 stacklevel=2,
1017 )
1018 if self.time_partitioning is not None:
1019 return self.time_partitioning.type_
1020
1021 @property
1022 def partition_expiration(self):
1023 """Union[int, None]: Expiration time in milliseconds for a partition.
1024
1025 If this property is set and :attr:`type_` is not set, :attr:`type_`
1026 will default to :attr:`TimePartitioningType.DAY`.
1027 """
1028 warnings.warn(
1029 "This method will be deprecated in future versions. Please use "
1030 "TableListItem.time_partitioning.expiration_ms instead.",
1031 PendingDeprecationWarning,
1032 stacklevel=2,
1033 )
1034 if self.time_partitioning is not None:
1035 return self.time_partitioning.expiration_ms
1036
1037 @property
1038 def friendly_name(self):
1039 """Union[str, None]: Title of the table (defaults to :data:`None`)."""
1040 return self._properties.get("friendlyName")
1041
1042 view_use_legacy_sql = property(_view_use_legacy_sql_getter)
1043
1044 @property
1045 def clustering_fields(self):
1046 """Union[List[str], None]: Fields defining clustering for the table
1047
1048 (Defaults to :data:`None`).
1049
1050 Clustering fields are immutable after table creation.
1051
1052 .. note::
1053
1054 As of 2018-06-29, clustering fields cannot be set on a table
1055 which does not also have time partitioning defined.
1056 """
1057 prop = self._properties.get("clustering")
1058 if prop is not None:
1059 return list(prop.get("fields", ()))
1060
1061 @classmethod
1062 def from_string(cls, full_table_id):
1063 """Construct a table from fully-qualified table ID.
1064
1065 Args:
1066 full_table_id (str):
1067 A fully-qualified table ID in standard SQL format. Must
1068 include a project ID, dataset ID, and table ID, each
1069 separated by ``.``.
1070
1071 Returns:
1072 Table: Table parsed from ``full_table_id``.
1073
1074 Examples:
1075 >>> Table.from_string('my-project.mydataset.mytable')
1076 Table(TableRef...(D...('my-project', 'mydataset'), 'mytable'))
1077
1078 Raises:
1079 ValueError:
1080 If ``full_table_id`` is not a fully-qualified table ID in
1081 standard SQL format.
1082 """
1083 return cls(
1084 {"tableReference": TableReference.from_string(full_table_id).to_api_repr()}
1085 )
1086
1087 def to_bqstorage(self):
1088 """Construct a BigQuery Storage API representation of this table.
1089
1090 Returns:
1091 google.cloud.bigquery_storage_v1beta1.types.TableReference:
1092 A reference to this table in the BigQuery Storage API.
1093 """
1094 return self.reference.to_bqstorage()
1095
1096
1097 def _row_from_mapping(mapping, schema):
1098 """Convert a mapping to a row tuple using the schema.
1099
1100 Args:
1101 mapping (Dict[str, object])
1102 Mapping of row data: must contain keys for all required fields in
1103 the schema. Keys which do not correspond to a field in the schema
1104 are ignored.
1105 schema (List[google.cloud.bigquery.schema.SchemaField]):
1106 The schema of the table destination for the rows
1107
1108 Returns:
1109 Tuple[object]:
1110 Tuple whose elements are ordered according to the schema.
1111
1112 Raises:
1113 ValueError: If schema is empty.
1114 """
1115 if len(schema) == 0:
1116 raise ValueError(_TABLE_HAS_NO_SCHEMA)
1117
1118 row = []
1119 for field in schema:
1120 if field.mode == "REQUIRED":
1121 row.append(mapping[field.name])
1122 elif field.mode == "REPEATED":
1123 row.append(mapping.get(field.name, ()))
1124 elif field.mode == "NULLABLE":
1125 row.append(mapping.get(field.name))
1126 else:
1127 raise ValueError("Unknown field mode: {}".format(field.mode))
1128 return tuple(row)
1129
1130
1131 class StreamingBuffer(object):
1132 """Information about a table's streaming buffer.
1133
1134 See https://cloud.google.com/bigquery/streaming-data-into-bigquery.
1135
1136 Args:
1137 resource (Dict[str, object]):
1138 streaming buffer representation returned from the API
1139 """
1140
1141 def __init__(self, resource):
1142 self.estimated_bytes = int(resource["estimatedBytes"])
1143 self.estimated_rows = int(resource["estimatedRows"])
1144 # time is in milliseconds since the epoch.
1145 self.oldest_entry_time = google.cloud._helpers._datetime_from_microseconds(
1146 1000.0 * int(resource["oldestEntryTime"])
1147 )
1148
1149
1150 class Row(object):
1151 """A BigQuery row.
1152
1153 Values can be accessed by position (index), by key like a dict,
1154 or as properties.
1155
1156 Args:
1157 values (Sequence[object]): The row values
1158 field_to_index (Dict[str, int]):
1159 A mapping from schema field names to indexes
1160 """
1161
1162 # Choose unusual field names to try to avoid conflict with schema fields.
1163 __slots__ = ("_xxx_values", "_xxx_field_to_index")
1164
1165 def __init__(self, values, field_to_index):
1166 self._xxx_values = values
1167 self._xxx_field_to_index = field_to_index
1168
1169 def values(self):
1170 """Return the values included in this row.
1171
1172 Returns:
1173 Sequence[object]: A sequence of length ``len(row)``.
1174 """
1175 return copy.deepcopy(self._xxx_values)
1176
1177 def keys(self):
1178 """Return the keys for using a row as a dict.
1179
1180 Returns:
1181 Iterable[str]: The keys corresponding to the columns of a row
1182
1183 Examples:
1184
1185 >>> list(Row(('a', 'b'), {'x': 0, 'y': 1}).keys())
1186 ['x', 'y']
1187 """
1188 return six.iterkeys(self._xxx_field_to_index)
1189
1190 def items(self):
1191 """Return items as ``(key, value)`` pairs.
1192
1193 Returns:
1194 Iterable[Tuple[str, object]]:
1195 The ``(key, value)`` pairs representing this row.
1196
1197 Examples:
1198
1199 >>> list(Row(('a', 'b'), {'x': 0, 'y': 1}).items())
1200 [('x', 'a'), ('y', 'b')]
1201 """
1202 for key, index in six.iteritems(self._xxx_field_to_index):
1203 yield (key, copy.deepcopy(self._xxx_values[index]))
1204
1205 def get(self, key, default=None):
1206 """Return a value for key, with a default value if it does not exist.
1207
1208 Args:
1209 key (str): The key of the column to access
1210 default (object):
1211 The default value to use if the key does not exist. (Defaults
1212 to :data:`None`.)
1213
1214 Returns:
1215 object:
1216 The value associated with the provided key, or a default value.
1217
1218 Examples:
1219 When the key exists, the value associated with it is returned.
1220
1221 >>> Row(('a', 'b'), {'x': 0, 'y': 1}).get('x')
1222 'a'
1223
1224 The default value is :data:`None` when the key does not exist.
1225
1226 >>> Row(('a', 'b'), {'x': 0, 'y': 1}).get('z')
1227 None
1228
1229 The default value can be overridden with the ``default`` parameter.
1230
1231 >>> Row(('a', 'b'), {'x': 0, 'y': 1}).get('z', '')
1232 ''
1233
1234 >>> Row(('a', 'b'), {'x': 0, 'y': 1}).get('z', default = '')
1235 ''
1236 """
1237 index = self._xxx_field_to_index.get(key)
1238 if index is None:
1239 return default
1240 return self._xxx_values[index]
1241
1242 def __getattr__(self, name):
1243 value = self._xxx_field_to_index.get(name)
1244 if value is None:
1245 raise AttributeError("no row field {!r}".format(name))
1246 return self._xxx_values[value]
1247
1248 def __len__(self):
1249 return len(self._xxx_values)
1250
1251 def __getitem__(self, key):
1252 if isinstance(key, six.string_types):
1253 value = self._xxx_field_to_index.get(key)
1254 if value is None:
1255 raise KeyError("no row field {!r}".format(key))
1256 key = value
1257 return self._xxx_values[key]
1258
1259 def __eq__(self, other):
1260 if not isinstance(other, Row):
1261 return NotImplemented
1262 return (
1263 self._xxx_values == other._xxx_values
1264 and self._xxx_field_to_index == other._xxx_field_to_index
1265 )
1266
1267 def __ne__(self, other):
1268 return not self == other
1269
1270 def __repr__(self):
1271 # sort field dict by value, for determinism
1272 items = sorted(self._xxx_field_to_index.items(), key=operator.itemgetter(1))
1273 f2i = "{" + ", ".join("%r: %d" % item for item in items) + "}"
1274 return "Row({}, {})".format(self._xxx_values, f2i)
1275
1276
1277 class _NoopProgressBarQueue(object):
1278 """A fake Queue class that does nothing.
1279
1280 This is used when there is no progress bar to send updates to.
1281 """
1282
1283 def put_nowait(self, item):
1284 """Don't actually do anything with the item."""
1285
1286
1287 class RowIterator(HTTPIterator):
1288 """A class for iterating through HTTP/JSON API row list responses.
1289
1290 Args:
1291 client (google.cloud.bigquery.Client): The API client.
1292 api_request (Callable[google.cloud._http.JSONConnection.api_request]):
1293 The function to use to make API requests.
1294 path (str): The method path to query for the list of items.
1295 schema (Sequence[Union[ \
1296 :class:`~google.cloud.bigquery.schema.SchemaField`, \
1297 Mapping[str, Any] \
1298 ]]):
1299 The table's schema. If any item is a mapping, its content must be
1300 compatible with
1301 :meth:`~google.cloud.bigquery.schema.SchemaField.from_api_repr`.
1302 page_token (str): A token identifying a page in a result set to start
1303 fetching results from.
1304 max_results (int, optional): The maximum number of results to fetch.
1305 page_size (int, optional): The maximum number of rows in each page
1306 of results from this request. Non-positive values are ignored.
1307 Defaults to a sensible value set by the API.
1308 extra_params (Dict[str, object]):
1309 Extra query string parameters for the API call.
1310 table (Union[ \
1311 google.cloud.bigquery.table.Table, \
1312 google.cloud.bigquery.table.TableReference, \
1313 ]):
1314 Optional. The table which these rows belong to, or a reference to
1315 it. Used to call the BigQuery Storage API to fetch rows.
1316 selected_fields (Sequence[google.cloud.bigquery.schema.SchemaField]):
1317 Optional. A subset of columns to select from this table.
1318
1319 """
1320
1321 def __init__(
1322 self,
1323 client,
1324 api_request,
1325 path,
1326 schema,
1327 page_token=None,
1328 max_results=None,
1329 page_size=None,
1330 extra_params=None,
1331 table=None,
1332 selected_fields=None,
1333 ):
1334 super(RowIterator, self).__init__(
1335 client,
1336 api_request,
1337 path,
1338 item_to_value=_item_to_row,
1339 items_key="rows",
1340 page_token=page_token,
1341 max_results=max_results,
1342 extra_params=extra_params,
1343 page_start=_rows_page_start,
1344 next_token="pageToken",
1345 )
1346 schema = _to_schema_fields(schema)
1347 self._field_to_index = _helpers._field_to_index_mapping(schema)
1348 self._page_size = page_size
1349 self._preserve_order = False
1350 self._project = client.project
1351 self._schema = schema
1352 self._selected_fields = selected_fields
1353 self._table = table
1354 self._total_rows = getattr(table, "num_rows", None)
1355
1356 def _get_next_page_response(self):
1357 """Requests the next page from the path provided.
1358
1359 Returns:
1360 Dict[str, object]:
1361 The parsed JSON response of the next page's contents.
1362 """
1363 params = self._get_query_params()
1364 if self._page_size is not None:
1365 params["maxResults"] = self._page_size
1366 return self.api_request(
1367 method=self._HTTP_METHOD, path=self.path, query_params=params
1368 )
1369
1370 @property
1371 def schema(self):
1372 """List[google.cloud.bigquery.schema.SchemaField]: The subset of
1373 columns to be read from the table."""
1374 return list(self._schema)
1375
1376 @property
1377 def total_rows(self):
1378 """int: The total number of rows in the table."""
1379 return self._total_rows
1380
1381 def _get_progress_bar(self, progress_bar_type):
1382 """Construct a tqdm progress bar object, if tqdm is installed."""
1383 if tqdm is None:
1384 if progress_bar_type is not None:
1385 warnings.warn(_NO_TQDM_ERROR, UserWarning, stacklevel=3)
1386 return None
1387
1388 description = "Downloading"
1389 unit = "rows"
1390
1391 try:
1392 if progress_bar_type == "tqdm":
1393 return tqdm.tqdm(desc=description, total=self.total_rows, unit=unit)
1394 elif progress_bar_type == "tqdm_notebook":
1395 return tqdm.tqdm_notebook(
1396 desc=description, total=self.total_rows, unit=unit
1397 )
1398 elif progress_bar_type == "tqdm_gui":
1399 return tqdm.tqdm_gui(desc=description, total=self.total_rows, unit=unit)
1400 except (KeyError, TypeError):
1401 # Protect ourselves from any tqdm errors. In case of
1402 # unexpected tqdm behavior, just fall back to showing
1403 # no progress bar.
1404 warnings.warn(_NO_TQDM_ERROR, UserWarning, stacklevel=3)
1405 return None
1406
1407 def _to_page_iterable(
1408 self, bqstorage_download, tabledata_list_download, bqstorage_client=None
1409 ):
1410 if bqstorage_client is not None:
1411 try:
1412 # Iterate over the stream so that read errors are raised (and
1413 # the method can then fallback to tabledata.list).
1414 for item in bqstorage_download():
1415 yield item
1416 return
1417 except google.api_core.exceptions.Forbidden:
1418 # Don't hide errors such as insufficient permissions to create
1419 # a read session, or the API is not enabled. Both of those are
1420 # clearly problems if the developer has explicitly asked for
1421 # BigQuery Storage API support.
1422 raise
1423 except google.api_core.exceptions.GoogleAPICallError:
1424 # There is a known issue with reading from small anonymous
1425 # query results tables, so some errors are expected. Rather
1426 # than throw those errors, try reading the DataFrame again, but
1427 # with the tabledata.list API.
1428 pass
1429
1430 _LOGGER.debug(
1431 "Started reading table '{}.{}.{}' with tabledata.list.".format(
1432 self._table.project, self._table.dataset_id, self._table.table_id
1433 )
1434 )
1435 for item in tabledata_list_download():
1436 yield item
1437
1438 def _to_arrow_iterable(self, bqstorage_client=None):
1439 """Create an iterable of arrow RecordBatches, to process the table as a stream."""
1440 bqstorage_download = functools.partial(
1441 _pandas_helpers.download_arrow_bqstorage,
1442 self._project,
1443 self._table,
1444 bqstorage_client,
1445 preserve_order=self._preserve_order,
1446 selected_fields=self._selected_fields,
1447 )
1448 tabledata_list_download = functools.partial(
1449 _pandas_helpers.download_arrow_tabledata_list, iter(self.pages), self.schema
1450 )
1451 return self._to_page_iterable(
1452 bqstorage_download,
1453 tabledata_list_download,
1454 bqstorage_client=bqstorage_client,
1455 )
1456
1457 # If changing the signature of this method, make sure to apply the same
1458 # changes to job.QueryJob.to_arrow()
1459 def to_arrow(
1460 self,
1461 progress_bar_type=None,
1462 bqstorage_client=None,
1463 create_bqstorage_client=False,
1464 ):
1465        """[Beta] Create a :class:`pyarrow.Table` by loading all pages of a
1466 table or query.
1467
1468 Args:
1469 progress_bar_type (Optional[str]):
1470 If set, use the `tqdm <https://tqdm.github.io/>`_ library to
1471 display a progress bar while the data downloads. Install the
1472 ``tqdm`` package to use this feature.
1473
1474 Possible values of ``progress_bar_type`` include:
1475
1476 ``None``
1477 No progress bar.
1478 ``'tqdm'``
1479 Use the :func:`tqdm.tqdm` function to print a progress bar
1480 to :data:`sys.stderr`.
1481 ``'tqdm_notebook'``
1482 Use the :func:`tqdm.tqdm_notebook` function to display a
1483 progress bar as a Jupyter notebook widget.
1484 ``'tqdm_gui'``
1485 Use the :func:`tqdm.tqdm_gui` function to display a
1486 progress bar as a graphical dialog box.
1487 bqstorage_client (google.cloud.bigquery_storage_v1beta1.BigQueryStorageClient):
1488 **Beta Feature** Optional. A BigQuery Storage API client. If
1489 supplied, use the faster BigQuery Storage API to fetch rows
1490 from BigQuery. This API is a billable API.
1491
1492 This method requires the ``pyarrow`` and
1493 ``google-cloud-bigquery-storage`` libraries.
1494
1495 Reading from a specific partition or snapshot is not
1496 currently supported by this method.
1497 create_bqstorage_client (bool):
1498 **Beta Feature** Optional. If ``True``, create a BigQuery
1499 Storage API client using the default API settings. The
1500 BigQuery Storage API is a faster way to fetch rows from
1501 BigQuery. See the ``bqstorage_client`` parameter for more
1502 information.
1503
1504 This argument does nothing if ``bqstorage_client`` is supplied.
1505
1506                .. versionadded:: 1.24.0
1507
1508 Returns:
1509 pyarrow.Table
1510 A :class:`pyarrow.Table` populated with row data and column
1511 headers from the query results. The column headers are derived
1512 from the destination table's schema.
1513
1514 Raises:
1515 ValueError: If the :mod:`pyarrow` library cannot be imported.
1516
1517        .. versionadded:: 1.17.0
1518 """
1519 if pyarrow is None:
1520 raise ValueError(_NO_PYARROW_ERROR)
1521
1522 if (
1523 bqstorage_client or create_bqstorage_client
1524 ) and self.max_results is not None:
1525 warnings.warn(
1526 "Cannot use bqstorage_client if max_results is set, "
1527 "reverting to fetching data with the tabledata.list endpoint.",
1528 stacklevel=2,
1529 )
1530 create_bqstorage_client = False
1531 bqstorage_client = None
1532
1533 owns_bqstorage_client = False
1534 if not bqstorage_client and create_bqstorage_client:
1535 owns_bqstorage_client = True
1536 bqstorage_client = self.client._create_bqstorage_client()
1537
1538 try:
1539 progress_bar = self._get_progress_bar(progress_bar_type)
1540
1541 record_batches = []
1542 for record_batch in self._to_arrow_iterable(
1543 bqstorage_client=bqstorage_client
1544 ):
1545 record_batches.append(record_batch)
1546
1547 if progress_bar is not None:
1548 # In some cases, the number of total rows is not populated
1549 # until the first page of rows is fetched. Update the
1550 # progress bar's total to keep an accurate count.
1551 progress_bar.total = progress_bar.total or self.total_rows
1552 progress_bar.update(record_batch.num_rows)
1553
1554 if progress_bar is not None:
1555 # Indicate that the download has finished.
1556 progress_bar.close()
1557 finally:
1558 if owns_bqstorage_client:
1559 bqstorage_client.transport.channel.close()
1560
1561 if record_batches:
1562 return pyarrow.Table.from_batches(record_batches)
1563 else:
1564 # No records, use schema based on BigQuery schema.
1565 arrow_schema = _pandas_helpers.bq_to_arrow_schema(self._schema)
1566 return pyarrow.Table.from_batches(record_batches, schema=arrow_schema)
1567
1568 def to_dataframe_iterable(self, bqstorage_client=None, dtypes=None):
1569 """Create an iterable of pandas DataFrames, to process the table as a stream.
1570
1571 Args:
1572 bqstorage_client (google.cloud.bigquery_storage_v1beta1.BigQueryStorageClient):
1573 **Beta Feature** Optional. A BigQuery Storage API client. If
1574 supplied, use the faster BigQuery Storage API to fetch rows
1575 from BigQuery.
1576
1577 This method requires the ``pyarrow`` and
1578 ``google-cloud-bigquery-storage`` libraries.
1579
1580 Reading from a specific partition or snapshot is not
1581 currently supported by this method.
1582
1583 **Caution**: There is a known issue reading small anonymous
1584 query result tables with the BQ Storage API. When a problem
1585 is encountered reading a table, the tabledata.list method
1586 from the BigQuery API is used, instead.
1587 dtypes (Map[str, Union[str, pandas.Series.dtype]]):
1588                Optional. A dictionary of column names to pandas ``dtype``s. The
1589 provided ``dtype`` is used when constructing the series for
1590 the column specified. Otherwise, the default pandas behavior
1591 is used.
1592
1593 Returns:
1594            Iterator[pandas.DataFrame]:
1595                A generator of :class:`~pandas.DataFrame`.
1596
1597 Raises:
1598 ValueError:
1599 If the :mod:`pandas` library cannot be imported.
1600 """
1601 if pandas is None:
1602 raise ValueError(_NO_PANDAS_ERROR)
1603 if dtypes is None:
1604 dtypes = {}
1605
1606 column_names = [field.name for field in self._schema]
1607 bqstorage_download = functools.partial(
1608 _pandas_helpers.download_dataframe_bqstorage,
1609 self._project,
1610 self._table,
1611 bqstorage_client,
1612 column_names,
1613 dtypes,
1614 preserve_order=self._preserve_order,
1615 selected_fields=self._selected_fields,
1616 )
1617 tabledata_list_download = functools.partial(
1618 _pandas_helpers.download_dataframe_tabledata_list,
1619 iter(self.pages),
1620 self.schema,
1621 dtypes,
1622 )
1623 return self._to_page_iterable(
1624 bqstorage_download,
1625 tabledata_list_download,
1626 bqstorage_client=bqstorage_client,
1627 )
1628
1629 # If changing the signature of this method, make sure to apply the same
1630 # changes to job.QueryJob.to_dataframe()
1631 def to_dataframe(
1632 self,
1633 bqstorage_client=None,
1634 dtypes=None,
1635 progress_bar_type=None,
1636 create_bqstorage_client=False,
1637 ):
1638 """Create a pandas DataFrame by loading all pages of a query.
1639
1640 Args:
1641 bqstorage_client (google.cloud.bigquery_storage_v1beta1.BigQueryStorageClient):
1642 **Beta Feature** Optional. A BigQuery Storage API client. If
1643 supplied, use the faster BigQuery Storage API to fetch rows
1644 from BigQuery.
1645
1646 This method requires the ``pyarrow`` and
1647 ``google-cloud-bigquery-storage`` libraries.
1648
1649 Reading from a specific partition or snapshot is not
1650 currently supported by this method.
1651
1652 **Caution**: There is a known issue reading small anonymous
1653 query result tables with the BQ Storage API. When a problem
1654 is encountered reading a table, the tabledata.list method
1655 from the BigQuery API is used, instead.
1656 dtypes (Map[str, Union[str, pandas.Series.dtype]]):
1657                Optional. A dictionary of column names to pandas ``dtype``s. The
1658 provided ``dtype`` is used when constructing the series for
1659 the column specified. Otherwise, the default pandas behavior
1660 is used.
1661 progress_bar_type (Optional[str]):
1662 If set, use the `tqdm <https://tqdm.github.io/>`_ library to
1663 display a progress bar while the data downloads. Install the
1664 ``tqdm`` package to use this feature.
1665
1666 Possible values of ``progress_bar_type`` include:
1667
1668 ``None``
1669 No progress bar.
1670 ``'tqdm'``
1671 Use the :func:`tqdm.tqdm` function to print a progress bar
1672 to :data:`sys.stderr`.
1673 ``'tqdm_notebook'``
1674 Use the :func:`tqdm.tqdm_notebook` function to display a
1675 progress bar as a Jupyter notebook widget.
1676 ``'tqdm_gui'``
1677 Use the :func:`tqdm.tqdm_gui` function to display a
1678 progress bar as a graphical dialog box.
1679
1680                .. versionadded:: 1.11.0
1681 create_bqstorage_client (bool):
1682 **Beta Feature** Optional. If ``True``, create a BigQuery
1683 Storage API client using the default API settings. The
1684 BigQuery Storage API is a faster way to fetch rows from
1685 BigQuery. See the ``bqstorage_client`` parameter for more
1686 information.
1687
1688 This argument does nothing if ``bqstorage_client`` is supplied.
1689
1690                .. versionadded:: 1.24.0
1691
1692 Returns:
1693 pandas.DataFrame:
1694 A :class:`~pandas.DataFrame` populated with row data and column
1695 headers from the query results. The column headers are derived
1696 from the destination table's schema.
1697
1698 Raises:
1699 ValueError:
1700 If the :mod:`pandas` library cannot be imported, or the
1701 :mod:`google.cloud.bigquery_storage_v1beta1` module is
1702 required but cannot be imported.
1703
1704 """
1705 if pandas is None:
1706 raise ValueError(_NO_PANDAS_ERROR)
1707 if dtypes is None:
1708 dtypes = {}
1709
1710 if (
1711 bqstorage_client or create_bqstorage_client
1712 ) and self.max_results is not None:
1713 warnings.warn(
1714 "Cannot use bqstorage_client if max_results is set, "
1715 "reverting to fetching data with the tabledata.list endpoint.",
1716 stacklevel=2,
1717 )
1718 create_bqstorage_client = False
1719 bqstorage_client = None
1720
1721 if pyarrow is not None:
1722 # If pyarrow is available, calling to_arrow, then converting to a
1723 # pandas dataframe is about 2x faster. This is because pandas.concat is
1724 # rarely no-copy, whereas pyarrow.Table.from_batches + to_pandas is
1725 # usually no-copy.
1726 record_batch = self.to_arrow(
1727 progress_bar_type=progress_bar_type,
1728 bqstorage_client=bqstorage_client,
1729 create_bqstorage_client=create_bqstorage_client,
1730 )
1731 df = record_batch.to_pandas()
1732 for column in dtypes:
1733 df[column] = pandas.Series(df[column], dtype=dtypes[column])
1734 return df
1735
1736 # The bqstorage_client is only used if pyarrow is available, so the
1737 # rest of this method only needs to account for tabledata.list.
1738 progress_bar = self._get_progress_bar(progress_bar_type)
1739
1740 frames = []
1741 for frame in self.to_dataframe_iterable(dtypes=dtypes):
1742 frames.append(frame)
1743
1744 if progress_bar is not None:
1745 # In some cases, the number of total rows is not populated
1746 # until the first page of rows is fetched. Update the
1747 # progress bar's total to keep an accurate count.
1748 progress_bar.total = progress_bar.total or self.total_rows
1749 progress_bar.update(len(frame))
1750
1751 if progress_bar is not None:
1752 # Indicate that the download has finished.
1753 progress_bar.close()
1754
1755 # Avoid concatting an empty list.
1756 if not frames:
1757 column_names = [field.name for field in self._schema]
1758 return pandas.DataFrame(columns=column_names)
1759 return pandas.concat(frames, ignore_index=True)
1760
1761
1762 class _EmptyRowIterator(object):
1763 """An empty row iterator.
1764
1765 This class prevents API requests when there are no rows to fetch or rows
1766 are impossible to fetch, such as with query results for DDL CREATE VIEW
1767 statements.
1768 """
1769
1770 schema = ()
1771 pages = ()
1772 total_rows = 0
1773
1774 def to_arrow(
1775 self,
1776 progress_bar_type=None,
1777 bqstorage_client=None,
1778 create_bqstorage_client=False,
1779 ):
1780        """[Beta] Create an empty :class:`pyarrow.Table`.
1781
1782 Args:
1783 progress_bar_type (Optional[str]): Ignored. Added for compatibility with RowIterator.
1784 bqstorage_client (Any): Ignored. Added for compatibility with RowIterator.
1785 create_bqstorage_client (bool): Ignored. Added for compatibility with RowIterator.
1786
1787 Returns:
1788 pyarrow.Table: An empty :class:`pyarrow.Table`.
1789 """
1790 if pyarrow is None:
1791 raise ValueError(_NO_PYARROW_ERROR)
1792 return pyarrow.Table.from_arrays(())
1793
1794 def to_dataframe(
1795 self,
1796 bqstorage_client=None,
1797 dtypes=None,
1798 progress_bar_type=None,
1799 create_bqstorage_client=False,
1800 ):
1801 """Create an empty dataframe.
1802
1803 Args:
1804 bqstorage_client (Any): Ignored. Added for compatibility with RowIterator.
1805 dtypes (Any): Ignored. Added for compatibility with RowIterator.
1806 progress_bar_type (Any): Ignored. Added for compatibility with RowIterator.
1807 create_bqstorage_client (bool): Ignored. Added for compatibility with RowIterator.
1808
1809 Returns:
1810 pandas.DataFrame: An empty :class:`~pandas.DataFrame`.
1811 """
1812 if pandas is None:
1813 raise ValueError(_NO_PANDAS_ERROR)
1814 return pandas.DataFrame()
1815
1816 def __iter__(self):
1817 return iter(())
1818
1819
1820 class PartitionRange(object):
1821 """Definition of the ranges for range partitioning.
1822
1823 .. note::
1824 **Beta**. The integer range partitioning feature is in a pre-release
1825 state and might change or have limited support.
1826
1827 Args:
1828 start (Optional[int]):
1829 Sets the
1830 :attr:`~google.cloud.bigquery.table.PartitionRange.start`
1831 property.
1832 end (Optional[int]):
1833 Sets the
1834 :attr:`~google.cloud.bigquery.table.PartitionRange.end`
1835 property.
1836 interval (Optional[int]):
1837 Sets the
1838 :attr:`~google.cloud.bigquery.table.PartitionRange.interval`
1839 property.
1840 _properties (Optional[dict]):
1841 Private. Used to construct object from API resource.
1842 """
1843
1844 def __init__(self, start=None, end=None, interval=None, _properties=None):
1845 if _properties is None:
1846 _properties = {}
1847 self._properties = _properties
1848
1849 if start is not None:
1850 self.start = start
1851 if end is not None:
1852 self.end = end
1853 if interval is not None:
1854 self.interval = interval
1855
1856 @property
1857 def start(self):
1858 """int: The start of range partitioning, inclusive."""
1859 return _helpers._int_or_none(self._properties.get("start"))
1860
1861 @start.setter
1862 def start(self, value):
1863 self._properties["start"] = _helpers._str_or_none(value)
1864
1865 @property
1866 def end(self):
1867 """int: The end of range partitioning, exclusive."""
1868 return _helpers._int_or_none(self._properties.get("end"))
1869
1870 @end.setter
1871 def end(self, value):
1872 self._properties["end"] = _helpers._str_or_none(value)
1873
1874 @property
1875 def interval(self):
1876 """int: The width of each interval."""
1877 return _helpers._int_or_none(self._properties.get("interval"))
1878
1879 @interval.setter
1880 def interval(self, value):
1881 self._properties["interval"] = _helpers._str_or_none(value)
1882
1883 def _key(self):
1884 return tuple(sorted(self._properties.items()))
1885
1886 def __repr__(self):
1887 key_vals = ["{}={}".format(key, val) for key, val in self._key()]
1888 return "PartitionRange({})".format(", ".join(key_vals))
1889
1890
1891 class RangePartitioning(object):
1892 """Range-based partitioning configuration for a table.
1893
1894 .. note::
1895 **Beta**. The integer range partitioning feature is in a pre-release
1896 state and might change or have limited support.
1897
1898 Args:
1899 range_ (Optional[google.cloud.bigquery.table.PartitionRange]):
1900 Sets the
1901 :attr:`google.cloud.bigquery.table.RangePartitioning.range_`
1902 property.
1903 field (Optional[str]):
1904 Sets the
1905 :attr:`google.cloud.bigquery.table.RangePartitioning.field`
1906 property.
1907 _properties (Optional[dict]):
1908 Private. Used to construct object from API resource.
1909 """
1910
1911 def __init__(self, range_=None, field=None, _properties=None):
1912 if _properties is None:
1913 _properties = {}
1914 self._properties = _properties
1915
1916 if range_ is not None:
1917 self.range_ = range_
1918 if field is not None:
1919 self.field = field
1920
1921 # Trailing underscore to prevent conflict with built-in range() function.
1922 @property
1923 def range_(self):
1924 """google.cloud.bigquery.table.PartitionRange: Defines the
1925 ranges for range partitioning.
1926
1927 Raises:
1928 ValueError:
1929 If the value is not a :class:`PartitionRange`.
1930 """
1931 range_properties = self._properties.setdefault("range", {})
1932 return PartitionRange(_properties=range_properties)
1933
1934 @range_.setter
1935 def range_(self, value):
1936 if not isinstance(value, PartitionRange):
1937 raise ValueError("Expected a PartitionRange, but got {}.".format(value))
1938 self._properties["range"] = value._properties
1939
1940 @property
1941 def field(self):
1942 """str: The table is partitioned by this field.
1943
1944 The field must be a top-level ``NULLABLE`` / ``REQUIRED`` field. The
1945 only supported type is ``INTEGER`` / ``INT64``.
1946 """
1947 return self._properties.get("field")
1948
1949 @field.setter
1950 def field(self, value):
1951 self._properties["field"] = value
1952
1953 def _key(self):
1954 return (("field", self.field), ("range_", self.range_))
1955
1956 def __repr__(self):
1957 key_vals = ["{}={}".format(key, repr(val)) for key, val in self._key()]
1958 return "RangePartitioning({})".format(", ".join(key_vals))
1959
1960
1961 class TimePartitioningType(object):
1962 """Specifies the type of time partitioning to perform."""
1963
1964 DAY = "DAY"
1965 """str: Generates one partition per day."""
1966
1967
1968 class TimePartitioning(object):
1969 """Configures time-based partitioning for a table.
1970
1971 Args:
1972 type_ (google.cloud.bigquery.table.TimePartitioningType, optional):
1973 Specifies the type of time partitioning to perform. Defaults to
1974 :attr:`~google.cloud.bigquery.table.TimePartitioningType.DAY`,
1975 which is the only currently supported type.
1976 field (str, optional):
1977 If set, the table is partitioned by this field. If not set, the
1978 table is partitioned by pseudo column ``_PARTITIONTIME``. The field
1979 must be a top-level ``TIMESTAMP`` or ``DATE`` field. Its mode must
1980 be ``NULLABLE`` or ``REQUIRED``.
1981 expiration_ms(int, optional):
1982 Number of milliseconds for which to keep the storage for a
1983 partition.
1984 require_partition_filter (bool, optional):
1985 DEPRECATED: Use
1986 :attr:`~google.cloud.bigquery.table.Table.require_partition_filter`,
1987 instead.
1988 """
1989
1990 def __init__(
1991 self, type_=None, field=None, expiration_ms=None, require_partition_filter=None
1992 ):
1993 self._properties = {}
1994 if type_ is None:
1995 self.type_ = TimePartitioningType.DAY
1996 else:
1997 self.type_ = type_
1998 if field is not None:
1999 self.field = field
2000 if expiration_ms is not None:
2001 self.expiration_ms = expiration_ms
2002 if require_partition_filter is not None:
2003 self.require_partition_filter = require_partition_filter
2004
2005 @property
2006 def type_(self):
2007 """google.cloud.bigquery.table.TimePartitioningType: The type of time
2008 partitioning to use.
2009 """
2010 return self._properties.get("type")
2011
2012 @type_.setter
2013 def type_(self, value):
2014 self._properties["type"] = value
2015
2016 @property
2017 def field(self):
2018 """str: Field in the table to use for partitioning"""
2019 return self._properties.get("field")
2020
2021 @field.setter
2022 def field(self, value):
2023 self._properties["field"] = value
2024
2025 @property
2026 def expiration_ms(self):
2027 """int: Number of milliseconds to keep the storage for a partition."""
2028 return _helpers._int_or_none(self._properties.get("expirationMs"))
2029
2030 @expiration_ms.setter
2031 def expiration_ms(self, value):
2032 if value is not None:
2033 # Allow explicitly setting the expiration to None.
2034 value = str(value)
2035 self._properties["expirationMs"] = value
2036
2037 @property
2038 def require_partition_filter(self):
2039 """bool: Specifies whether partition filters are required for queries
2040
2041 DEPRECATED: Use
2042 :attr:`~google.cloud.bigquery.table.Table.require_partition_filter`,
2043 instead.
2044 """
2045 warnings.warn(
2046 (
2047 "TimePartitioning.require_partition_filter will be removed in "
2048 "future versions. Please use Table.require_partition_filter "
2049 "instead."
2050 ),
2051 PendingDeprecationWarning,
2052 stacklevel=2,
2053 )
2054 return self._properties.get("requirePartitionFilter")
2055
2056 @require_partition_filter.setter
2057 def require_partition_filter(self, value):
2058 warnings.warn(
2059 (
2060 "TimePartitioning.require_partition_filter will be removed in "
2061 "future versions. Please use Table.require_partition_filter "
2062 "instead."
2063 ),
2064 PendingDeprecationWarning,
2065 stacklevel=2,
2066 )
2067 self._properties["requirePartitionFilter"] = value
2068
2069 @classmethod
2070 def from_api_repr(cls, api_repr):
2071 """Return a :class:`TimePartitioning` object deserialized from a dict.
2072
2073 This method creates a new ``TimePartitioning`` instance that points to
2074 the ``api_repr`` parameter as its internal properties dict. This means
2075 that when a ``TimePartitioning`` instance is stored as a property of
2076 another object, any changes made at the higher level will also appear
2077 here::
2078
2079 >>> time_partitioning = TimePartitioning()
2080 >>> table.time_partitioning = time_partitioning
2081 >>> table.time_partitioning.field = 'timecolumn'
2082 >>> time_partitioning.field
2083 'timecolumn'
2084
2085 Args:
2086 api_repr (Mapping[str, str]):
2087 The serialized representation of the TimePartitioning, such as
2088 what is output by :meth:`to_api_repr`.
2089
2090 Returns:
2091 google.cloud.bigquery.table.TimePartitioning:
2092 The ``TimePartitioning`` object.
2093 """
2094 instance = cls()
2095 instance._properties = api_repr
2096 return instance
2097
2098 def to_api_repr(self):
2099 """Return a dictionary representing this object.
2100
2101 This method returns the properties dict of the ``TimePartitioning``
2102 instance rather than making a copy. This means that when a
2103 ``TimePartitioning`` instance is stored as a property of another
2104 object, any changes made at the higher level will also appear here.
2105
2106 Returns:
2107 dict:
2108 A dictionary representing the TimePartitioning object in
2109 serialized form.
2110 """
2111 return self._properties
2112
2113 def _key(self):
2114 return tuple(sorted(self._properties.items()))
2115
2116 def __eq__(self, other):
2117 if not isinstance(other, TimePartitioning):
2118 return NotImplemented
2119 return self._key() == other._key()
2120
2121 def __ne__(self, other):
2122 return not self == other
2123
2124 def __hash__(self):
2125 return hash(self._key())
2126
2127 def __repr__(self):
2128 key_vals = ["{}={}".format(key, val) for key, val in self._key()]
2129 return "TimePartitioning({})".format(",".join(key_vals))
2130
2131
2132 def _item_to_row(iterator, resource):
2133 """Convert a JSON row to the native object.
2134
2135 .. note::
2136
2137 This assumes that the ``schema`` attribute has been
2138 added to the iterator after being created, which
2139 should be done by the caller.
2140
2141 Args:
2142 iterator (google.api_core.page_iterator.Iterator): The iterator that is currently in use.
2143 resource (Dict): An item to be converted to a row.
2144
2145 Returns:
2146 google.cloud.bigquery.table.Row: The next row in the page.
2147 """
2148 return Row(
2149 _helpers._row_tuple_from_json(resource, iterator.schema),
2150 iterator._field_to_index,
2151 )
2152
2153
2154 def _tabledata_list_page_columns(schema, response):
2155 """Make a generator of all the columns in a page from tabledata.list.
2156
2157 This enables creating a :class:`pandas.DataFrame` and other
2158 column-oriented data structures such as :class:`pyarrow.RecordBatch`
2159 """
2160 columns = []
2161 rows = response.get("rows", [])
2162
2163 def get_column_data(field_index, field):
2164 for row in rows:
2165 yield _helpers._field_from_json(row["f"][field_index]["v"], field)
2166
2167 for field_index, field in enumerate(schema):
2168 columns.append(get_column_data(field_index, field))
2169
2170 return columns
2171
2172
2173 # pylint: disable=unused-argument
2174 def _rows_page_start(iterator, page, response):
2175 """Grab total rows when :class:`~google.cloud.iterator.Page` starts.
2176
2177 Args:
2178 iterator (google.api_core.page_iterator.Iterator): The iterator that is currently in use.
2179 page (google.api_core.page_iterator.Page): The page that was just created.
2180 response (Dict): The JSON API response for a page of rows in a table.
2181 """
2182 # Make a (lazy) copy of the page in column-oriented format for use in data
2183 # science packages.
2184 page._columns = _tabledata_list_page_columns(iterator._schema, response)
2185
2186 total_rows = response.get("totalRows")
2187 if total_rows is not None:
2188 total_rows = int(total_rows)
2189 iterator._total_rows = total_rows
2190
2191
2192 # pylint: enable=unused-argument
2193
2194
2195 def _table_arg_to_table_ref(value, default_project=None):
2196 """Helper to convert a string or Table to TableReference.
2197
2198 This function keeps TableReference and other kinds of objects unchanged.
2199 """
2200 if isinstance(value, six.string_types):
2201 value = TableReference.from_string(value, default_project=default_project)
2202 if isinstance(value, (Table, TableListItem)):
2203 value = value.reference
2204 return value
2205
2206
2207 def _table_arg_to_table(value, default_project=None):
2208 """Helper to convert a string or TableReference to a Table.
2209
2210 This function keeps Table and other kinds of objects unchanged.
2211 """
2212 if isinstance(value, six.string_types):
2213 value = TableReference.from_string(value, default_project=default_project)
2214 if isinstance(value, TableReference):
2215 value = Table(value)
2216 if isinstance(value, TableListItem):
2217 newvalue = Table(value.reference)
2218 newvalue._properties = value._properties
2219 value = newvalue
2220
2221 return value
2222
[end of bigquery/google/cloud/bigquery/table.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| googleapis/google-cloud-python | b492bdcc2d288022b5c81e90aea993432eec078a | BigQuery: raise a `TypeError` if a dictionary is passed to `insert_rows_json`
**Is your feature request related to a problem? Please describe.**
If I want to only insert a single row at a time into a table, it's easy to accidentally try something like:
```python
json_row = {"col1": "hello", "col2": "world"}
errors = client.insert_rows_json(
table,
json_row
)
```
This results in a `400 BadRequest` error from the API, because it expects a list of rows, not a single row.
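For comparison, a call the API accepts wraps the single row in a list (a minimal illustration reusing the same `client`, `table`, and `json_row` as above):
```python
# insert_rows_json expects a sequence of row dicts, so a single row
# must be passed as a one-element list rather than a bare dict.
errors = client.insert_rows_json(
    table,
    [json_row],
)
```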
**Describe the solution you'd like**
It's difficult to debug this situation from the API response, so it'd be better if we raised a client-side error for passing in the wrong type for `json_rows`.
**Describe alternatives you've considered**
Leave as-is and request a better server-side message. This may be difficult to do, as the error happens at a level above BigQuery, which translates JSON to Protobuf for internal use.
**Additional context**
This issue was encountered by a customer engineer, and it took me a bit of debugging to figure out the actual problem. I expect other customers will run into it as well.
| 2020-01-16T13:04:56Z | <patch>
diff --git a/bigquery/google/cloud/bigquery/client.py b/bigquery/google/cloud/bigquery/client.py
--- a/bigquery/google/cloud/bigquery/client.py
+++ b/bigquery/google/cloud/bigquery/client.py
@@ -2506,6 +2506,8 @@ def insert_rows_json(
identifies the row, and the "errors" key contains a list of
the mappings describing one or more problems with the row.
"""
+ if not isinstance(json_rows, collections_abc.Sequence):
+ raise TypeError("json_rows argument should be a sequence of dicts")
# Convert table to just a reference because unlike insert_rows,
# insert_rows_json doesn't need the table schema. It's not doing any
# type conversions.
</patch> | [] | [] | ||||
numpy__numpy-14074 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NumPy 1.17 RC fails to compile with Intel C Compiler 2016
Compiling NumPy 1.17.0rc2 sources with Intel C Compiler 2016, which does not yet implement `__builtin_cpu_supports("avx512f")`, fails with a compilation error:
```
icc: numpy/core/src/umath/cpuid.c
numpy/core/src/umath/cpuid.c(63): catastrophic error: invalid use of '__builtin_cpu_supports'
compilation aborted for numpy/core/src/umath/cpuid.c (code 1)
```
A recent Intel C compiler (2019) compiles the same sources just fine.
There is a config test that probes the compiler for support of `__builtin_cpu_supports`, but the test does not discriminate between the arguments the compiler actually supports.
</issue>
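One possible direction, shown here only as a sketch (the helper name is made up, and this is not necessarily the fix that landed): make the configuration probe call `__builtin_cpu_supports` with the exact argument used at runtime, mirroring how `OPTIONAL_INTRINSICS` entries are checked in `check_math_capabilities` in `numpy/core/setup.py` below.
```python
def compiler_supports_builtin_cpu_supports_avx512f(config_cmd):
    # check_func compiles and links a tiny program that actually calls
    # __builtin_cpu_supports("avx512f"); a compiler such as ICC 2016 that
    # rejects that argument fails the probe at configure time, so the build
    # can skip the AVX512 path instead of erroring out later.
    return config_cmd.check_func(
        "__builtin_cpu_supports",
        decl=False,
        call=True,
        call_args='"avx512f"',
    )
```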
<code>
[start of README.md]
1 # <img alt="NumPy" src="https://cdn.rawgit.com/numpy/numpy/master/branding/icons/numpylogo.svg" height="60">
2
3 [![Travis](https://img.shields.io/travis/numpy/numpy/master.svg?label=Travis%20CI)](
4 https://travis-ci.org/numpy/numpy)
5 [![AppVeyor](https://img.shields.io/appveyor/ci/charris/numpy/master.svg?label=AppVeyor)](
6 https://ci.appveyor.com/project/charris/numpy)
7 [![Azure](https://dev.azure.com/numpy/numpy/_apis/build/status/azure-pipeline%20numpy.numpy)](
8 https://dev.azure.com/numpy/numpy/_build/latest?definitionId=5)
9 [![codecov](https://codecov.io/gh/numpy/numpy/branch/master/graph/badge.svg)](
10 https://codecov.io/gh/numpy/numpy)
11
12 NumPy is the fundamental package needed for scientific computing with Python.
13
14 - **Website:** https://www.numpy.org
15 - **Documentation:** http://docs.scipy.org/
16 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion
17 - **Source code:** https://github.com/numpy/numpy
18 - **Contributing:** https://www.numpy.org/devdocs/dev/index.html
19 - **Bug reports:** https://github.com/numpy/numpy/issues
20 - **Report a security vulnerability:** https://tidelift.com/docs/security
21
22 It provides:
23
24 - a powerful N-dimensional array object
25 - sophisticated (broadcasting) functions
26 - tools for integrating C/C++ and Fortran code
27 - useful linear algebra, Fourier transform, and random number capabilities
28
29 Testing:
30
31 - NumPy versions ≥ 1.15 require `pytest`
32 - NumPy versions < 1.15 require `nose`
33
34 Tests can then be run after installation with:
35
36 python -c 'import numpy; numpy.test()'
37
38
39 Call for Contributions
40 ----------------------
41
42 NumPy appreciates help from a wide range of different backgrounds.
43 Work such as high level documentation or website improvements are valuable
44 and we would like to grow our team with people filling these roles.
45 Small improvements or fixes are always appreciated and issues labeled as easy
46 may be a good starting point.
47 If you are considering larger contributions outside the traditional coding work,
48 please contact us through the mailing list.
49
50
51 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org)
52
[end of README.md]
[start of numpy/core/__init__.py]
1 from __future__ import division, absolute_import, print_function
2
3 from .info import __doc__
4 from numpy.version import version as __version__
5
6 import os
7
8 # disables OpenBLAS affinity setting of the main thread that limits
9 # python threads or processes to one core
10 env_added = []
11 for envkey in ['OPENBLAS_MAIN_FREE', 'GOTOBLAS_MAIN_FREE']:
12 if envkey not in os.environ:
13 os.environ[envkey] = '1'
14 env_added.append(envkey)
15
16 try:
17 from . import multiarray
18 except ImportError as exc:
19 import sys
20 msg = """
21
22 IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
23
24 Importing the numpy c-extensions failed.
25 - Try uninstalling and reinstalling numpy.
26 - If you have already done that, then:
27 1. Check that you expected to use Python%d.%d from "%s",
28 and that you have no directories in your PATH or PYTHONPATH that can
29 interfere with the Python and numpy version "%s" you're trying to use.
30 2. If (1) looks fine, you can open a new issue at
31 https://github.com/numpy/numpy/issues. Please include details on:
32 - how you installed Python
33 - how you installed numpy
34 - your operating system
35 - whether or not you have multiple versions of Python installed
36 - if you built from source, your compiler versions and ideally a build log
37
38 - If you're working with a numpy git repository, try `git clean -xdf`
39 (removes all files not under version control) and rebuild numpy.
40
41 Note: this error has many possible causes, so please don't comment on
42 an existing issue about this - open a new one instead.
43
44 Original error was: %s
45 """ % (sys.version_info[0], sys.version_info[1], sys.executable,
46 __version__, exc)
47 raise ImportError(msg)
48 finally:
49 for envkey in env_added:
50 del os.environ[envkey]
51 del envkey
52 del env_added
53 del os
54
55 from . import umath
56
57 # Check that multiarray,umath are pure python modules wrapping
58 # _multiarray_umath and not either of the old c-extension modules
59 if not (hasattr(multiarray, '_multiarray_umath') and
60 hasattr(umath, '_multiarray_umath')):
61 import sys
62 path = sys.modules['numpy'].__path__
63 msg = ("Something is wrong with the numpy installation. "
64 "While importing we detected an older version of "
65 "numpy in {}. One method of fixing this is to repeatedly uninstall "
66 "numpy until none is found, then reinstall this version.")
67 raise ImportError(msg.format(path))
68
69 from . import numerictypes as nt
70 multiarray.set_typeDict(nt.sctypeDict)
71 from . import numeric
72 from .numeric import *
73 from . import fromnumeric
74 from .fromnumeric import *
75 from . import defchararray as char
76 from . import records as rec
77 from .records import *
78 from .memmap import *
79 from .defchararray import chararray
80 from . import function_base
81 from .function_base import *
82 from . import machar
83 from .machar import *
84 from . import getlimits
85 from .getlimits import *
86 from . import shape_base
87 from .shape_base import *
88 from . import einsumfunc
89 from .einsumfunc import *
90 del nt
91
92 from .fromnumeric import amax as max, amin as min, round_ as round
93 from .numeric import absolute as abs
94
95 # do this after everything else, to minimize the chance of this misleadingly
96 # appearing in an import-time traceback
97 from . import _add_newdocs
98 # add these for module-freeze analysis (like PyInstaller)
99 from . import _dtype_ctypes
100 from . import _internal
101 from . import _dtype
102 from . import _methods
103
104 __all__ = ['char', 'rec', 'memmap']
105 __all__ += numeric.__all__
106 __all__ += fromnumeric.__all__
107 __all__ += rec.__all__
108 __all__ += ['chararray']
109 __all__ += function_base.__all__
110 __all__ += machar.__all__
111 __all__ += getlimits.__all__
112 __all__ += shape_base.__all__
113 __all__ += einsumfunc.__all__
114
115 # Make it possible so that ufuncs can be pickled
116 # Here are the loading and unloading functions
117 # The name numpy.core._ufunc_reconstruct must be
118 # available for unpickling to work.
119 def _ufunc_reconstruct(module, name):
120 # The `fromlist` kwarg is required to ensure that `mod` points to the
121 # inner-most module rather than the parent package when module name is
122 # nested. This makes it possible to pickle non-toplevel ufuncs such as
123 # scipy.special.expit for instance.
124 mod = __import__(module, fromlist=[name])
125 return getattr(mod, name)
126
127 def _ufunc_reduce(func):
128 from pickle import whichmodule
129 name = func.__name__
130 return _ufunc_reconstruct, (whichmodule(func, name), name)
131
132
133 import sys
134 if sys.version_info[0] >= 3:
135 import copyreg
136 else:
137 import copy_reg as copyreg
138
139 copyreg.pickle(ufunc, _ufunc_reduce, _ufunc_reconstruct)
140 # Unclutter namespace (must keep _ufunc_reconstruct for unpickling)
141 del copyreg
142 del sys
143 del _ufunc_reduce
144
145 from numpy._pytesttester import PytestTester
146 test = PytestTester(__name__)
147 del PytestTester
148
[end of numpy/core/__init__.py]
[start of numpy/core/setup.py]
1 from __future__ import division, print_function
2
3 import os
4 import sys
5 import pickle
6 import copy
7 import warnings
8 import platform
9 import textwrap
10 from os.path import join
11
12 from numpy.distutils import log
13 from distutils.dep_util import newer
14 from distutils.sysconfig import get_config_var
15 from numpy._build_utils.apple_accelerate import (
16 uses_accelerate_framework, get_sgemv_fix
17 )
18 from numpy.compat import npy_load_module
19 from setup_common import *
20
21 # Set to True to enable relaxed strides checking. This (mostly) means
22 # that `strides[dim]` is ignored if `shape[dim] == 1` when setting flags.
23 NPY_RELAXED_STRIDES_CHECKING = (os.environ.get('NPY_RELAXED_STRIDES_CHECKING', "1") != "0")
24
25 # Put NPY_RELAXED_STRIDES_DEBUG=1 in the environment if you want numpy to use a
26 # bogus value for affected strides in order to help smoke out bad stride usage
27 # when relaxed stride checking is enabled.
28 NPY_RELAXED_STRIDES_DEBUG = (os.environ.get('NPY_RELAXED_STRIDES_DEBUG', "0") != "0")
29 NPY_RELAXED_STRIDES_DEBUG = NPY_RELAXED_STRIDES_DEBUG and NPY_RELAXED_STRIDES_CHECKING
30
31 # XXX: ugly, we use a class to avoid calling twice some expensive functions in
32 # config.h/numpyconfig.h. I don't see a better way because distutils force
33 # config.h generation inside an Extension class, and as such sharing
34 # configuration information between extensions is not easy.
35 # Using a pickled-based memoize does not work because config_cmd is an instance
36 # method, which cPickle does not like.
37 #
38 # Use pickle in all cases, as cPickle is gone in python3 and the difference
39 # in time is only in build. -- Charles Harris, 2013-03-30
40
41 class CallOnceOnly(object):
42 def __init__(self):
43 self._check_types = None
44 self._check_ieee_macros = None
45 self._check_complex = None
46
47 def check_types(self, *a, **kw):
48 if self._check_types is None:
49 out = check_types(*a, **kw)
50 self._check_types = pickle.dumps(out)
51 else:
52 out = copy.deepcopy(pickle.loads(self._check_types))
53 return out
54
55 def check_ieee_macros(self, *a, **kw):
56 if self._check_ieee_macros is None:
57 out = check_ieee_macros(*a, **kw)
58 self._check_ieee_macros = pickle.dumps(out)
59 else:
60 out = copy.deepcopy(pickle.loads(self._check_ieee_macros))
61 return out
62
63 def check_complex(self, *a, **kw):
64 if self._check_complex is None:
65 out = check_complex(*a, **kw)
66 self._check_complex = pickle.dumps(out)
67 else:
68 out = copy.deepcopy(pickle.loads(self._check_complex))
69 return out
70
71 def pythonlib_dir():
72 """return path where libpython* is."""
73 if sys.platform == 'win32':
74 return os.path.join(sys.prefix, "libs")
75 else:
76 return get_config_var('LIBDIR')
77
78 def is_npy_no_signal():
79 """Return True if the NPY_NO_SIGNAL symbol must be defined in configuration
80 header."""
81 return sys.platform == 'win32'
82
83 def is_npy_no_smp():
84 """Return True if the NPY_NO_SMP symbol must be defined in public
85 header (when SMP support cannot be reliably enabled)."""
86 # Perhaps a fancier check is in order here.
87 # so that threads are only enabled if there
88 # are actually multiple CPUS? -- but
89 # threaded code can be nice even on a single
90 # CPU so that long-calculating code doesn't
91 # block.
92 return 'NPY_NOSMP' in os.environ
93
94 def win32_checks(deflist):
95 from numpy.distutils.misc_util import get_build_architecture
96 a = get_build_architecture()
97
98 # Distutils hack on AMD64 on windows
99 print('BUILD_ARCHITECTURE: %r, os.name=%r, sys.platform=%r' %
100 (a, os.name, sys.platform))
101 if a == 'AMD64':
102 deflist.append('DISTUTILS_USE_SDK')
103
104 # On win32, force long double format string to be 'g', not
105 # 'Lg', since the MS runtime does not support long double whose
106 # size is > sizeof(double)
107 if a == "Intel" or a == "AMD64":
108 deflist.append('FORCE_NO_LONG_DOUBLE_FORMATTING')
109
110 def check_math_capabilities(config, moredefs, mathlibs):
111 def check_func(func_name):
112 return config.check_func(func_name, libraries=mathlibs,
113 decl=True, call=True)
114
115 def check_funcs_once(funcs_name):
116 decl = dict([(f, True) for f in funcs_name])
117 st = config.check_funcs_once(funcs_name, libraries=mathlibs,
118 decl=decl, call=decl)
119 if st:
120 moredefs.extend([(fname2def(f), 1) for f in funcs_name])
121 return st
122
123 def check_funcs(funcs_name):
124 # Use check_funcs_once first, and if it does not work, test func per
125 # func. Return success only if all the functions are available
126 if not check_funcs_once(funcs_name):
127 # Global check failed, check func per func
128 for f in funcs_name:
129 if check_func(f):
130 moredefs.append((fname2def(f), 1))
131 return 0
132 else:
133 return 1
134
135 #use_msvc = config.check_decl("_MSC_VER")
136
137 if not check_funcs_once(MANDATORY_FUNCS):
138 raise SystemError("One of the required function to build numpy is not"
139 " available (the list is %s)." % str(MANDATORY_FUNCS))
140
141 # Standard functions which may not be available and for which we have a
142 # replacement implementation. Note that some of these are C99 functions.
143
144 # XXX: hack to circumvent cpp pollution from python: python put its
145 # config.h in the public namespace, so we have a clash for the common
146 # functions we test. We remove every function tested by python's
147     # autoconf, hoping their own tests are correct
148 for f in OPTIONAL_STDFUNCS_MAYBE:
149 if config.check_decl(fname2def(f),
150 headers=["Python.h", "math.h"]):
151 OPTIONAL_STDFUNCS.remove(f)
152
153 check_funcs(OPTIONAL_STDFUNCS)
154
155 for h in OPTIONAL_HEADERS:
156 if config.check_func("", decl=False, call=False, headers=[h]):
157 h = h.replace(".", "_").replace(os.path.sep, "_")
158 moredefs.append((fname2def(h), 1))
159
160 for tup in OPTIONAL_INTRINSICS:
161 headers = None
162 if len(tup) == 2:
163 f, args, m = tup[0], tup[1], fname2def(tup[0])
164 elif len(tup) == 3:
165 f, args, headers, m = tup[0], tup[1], [tup[2]], fname2def(tup[0])
166 else:
167 f, args, headers, m = tup[0], tup[1], [tup[2]], fname2def(tup[3])
168 if config.check_func(f, decl=False, call=True, call_args=args,
169 headers=headers):
170 moredefs.append((m, 1))
171
172 for dec, fn in OPTIONAL_FUNCTION_ATTRIBUTES:
173 if config.check_gcc_function_attribute(dec, fn):
174 moredefs.append((fname2def(fn), 1))
175
176 for dec, fn, code, header in OPTIONAL_FUNCTION_ATTRIBUTES_WITH_INTRINSICS:
177 if config.check_gcc_function_attribute_with_intrinsics(dec, fn, code,
178 header):
179 moredefs.append((fname2def(fn), 1))
180
181 for fn in OPTIONAL_VARIABLE_ATTRIBUTES:
182 if config.check_gcc_variable_attribute(fn):
183 m = fn.replace("(", "_").replace(")", "_")
184 moredefs.append((fname2def(m), 1))
185
186 # C99 functions: float and long double versions
187 check_funcs(C99_FUNCS_SINGLE)
188 check_funcs(C99_FUNCS_EXTENDED)
189
190 def check_complex(config, mathlibs):
191 priv = []
192 pub = []
193
194 try:
195 if os.uname()[0] == "Interix":
196 warnings.warn("Disabling broken complex support. See #1365", stacklevel=2)
197 return priv, pub
198 except Exception:
199 # os.uname not available on all platforms. blanket except ugly but safe
200 pass
201
202 # Check for complex support
203 st = config.check_header('complex.h')
204 if st:
205 priv.append(('HAVE_COMPLEX_H', 1))
206 pub.append(('NPY_USE_C99_COMPLEX', 1))
207
208 for t in C99_COMPLEX_TYPES:
209 st = config.check_type(t, headers=["complex.h"])
210 if st:
211 pub.append(('NPY_HAVE_%s' % type2def(t), 1))
212
213 def check_prec(prec):
214 flist = [f + prec for f in C99_COMPLEX_FUNCS]
215 decl = dict([(f, True) for f in flist])
216 if not config.check_funcs_once(flist, call=decl, decl=decl,
217 libraries=mathlibs):
218 for f in flist:
219 if config.check_func(f, call=True, decl=True,
220 libraries=mathlibs):
221 priv.append((fname2def(f), 1))
222 else:
223 priv.extend([(fname2def(f), 1) for f in flist])
224
225 check_prec('')
226 check_prec('f')
227 check_prec('l')
228
229 return priv, pub
230
231 def check_ieee_macros(config):
232 priv = []
233 pub = []
234
235 macros = []
236
237 def _add_decl(f):
238 priv.append(fname2def("decl_%s" % f))
239 pub.append('NPY_%s' % fname2def("decl_%s" % f))
240
241 # XXX: hack to circumvent cpp pollution from python: python put its
242 # config.h in the public namespace, so we have a clash for the common
243 # functions we test. We remove every function tested by python's
244     # autoconf, hoping their own tests are correct
245 _macros = ["isnan", "isinf", "signbit", "isfinite"]
246 for f in _macros:
247 py_symbol = fname2def("decl_%s" % f)
248 already_declared = config.check_decl(py_symbol,
249 headers=["Python.h", "math.h"])
250 if already_declared:
251 if config.check_macro_true(py_symbol,
252 headers=["Python.h", "math.h"]):
253 pub.append('NPY_%s' % fname2def("decl_%s" % f))
254 else:
255 macros.append(f)
256 # Normally, isnan and isinf are macro (C99), but some platforms only have
257 # func, or both func and macro version. Check for macro only, and define
258 # replacement ones if not found.
259 # Note: including Python.h is necessary because it modifies some math.h
260 # definitions
261 for f in macros:
262 st = config.check_decl(f, headers=["Python.h", "math.h"])
263 if st:
264 _add_decl(f)
265
266 return priv, pub
267
268 def check_types(config_cmd, ext, build_dir):
269 private_defines = []
270 public_defines = []
271
272 # Expected size (in number of bytes) for each type. This is an
273 # optimization: those are only hints, and an exhaustive search for the size
274 # is done if the hints are wrong.
275 expected = {'short': [2], 'int': [4], 'long': [8, 4],
276 'float': [4], 'double': [8], 'long double': [16, 12, 8],
277 'Py_intptr_t': [8, 4], 'PY_LONG_LONG': [8], 'long long': [8],
278 'off_t': [8, 4]}
279
280 # Check we have the python header (-dev* packages on Linux)
281 result = config_cmd.check_header('Python.h')
282 if not result:
283 python = 'python'
284 if '__pypy__' in sys.builtin_module_names:
285 python = 'pypy'
286 raise SystemError(
287 "Cannot compile 'Python.h'. Perhaps you need to "
288 "install {0}-dev|{0}-devel.".format(python))
289 res = config_cmd.check_header("endian.h")
290 if res:
291 private_defines.append(('HAVE_ENDIAN_H', 1))
292 public_defines.append(('NPY_HAVE_ENDIAN_H', 1))
293 res = config_cmd.check_header("sys/endian.h")
294 if res:
295 private_defines.append(('HAVE_SYS_ENDIAN_H', 1))
296 public_defines.append(('NPY_HAVE_SYS_ENDIAN_H', 1))
297
298 # Check basic types sizes
299 for type in ('short', 'int', 'long'):
300 res = config_cmd.check_decl("SIZEOF_%s" % sym2def(type), headers=["Python.h"])
301 if res:
302 public_defines.append(('NPY_SIZEOF_%s' % sym2def(type), "SIZEOF_%s" % sym2def(type)))
303 else:
304 res = config_cmd.check_type_size(type, expected=expected[type])
305 if res >= 0:
306 public_defines.append(('NPY_SIZEOF_%s' % sym2def(type), '%d' % res))
307 else:
308 raise SystemError("Checking sizeof (%s) failed !" % type)
309
310 for type in ('float', 'double', 'long double'):
311 already_declared = config_cmd.check_decl("SIZEOF_%s" % sym2def(type),
312 headers=["Python.h"])
313 res = config_cmd.check_type_size(type, expected=expected[type])
314 if res >= 0:
315 public_defines.append(('NPY_SIZEOF_%s' % sym2def(type), '%d' % res))
316 if not already_declared and not type == 'long double':
317 private_defines.append(('SIZEOF_%s' % sym2def(type), '%d' % res))
318 else:
319 raise SystemError("Checking sizeof (%s) failed !" % type)
320
321 # Compute size of corresponding complex type: used to check that our
322 # definition is binary compatible with C99 complex type (check done at
323 # build time in npy_common.h)
324 complex_def = "struct {%s __x; %s __y;}" % (type, type)
325 res = config_cmd.check_type_size(complex_def,
326 expected=[2 * x for x in expected[type]])
327 if res >= 0:
328 public_defines.append(('NPY_SIZEOF_COMPLEX_%s' % sym2def(type), '%d' % res))
329 else:
330 raise SystemError("Checking sizeof (%s) failed !" % complex_def)
331
332 for type in ('Py_intptr_t', 'off_t'):
333 res = config_cmd.check_type_size(type, headers=["Python.h"],
334 library_dirs=[pythonlib_dir()],
335 expected=expected[type])
336
337 if res >= 0:
338 private_defines.append(('SIZEOF_%s' % sym2def(type), '%d' % res))
339 public_defines.append(('NPY_SIZEOF_%s' % sym2def(type), '%d' % res))
340 else:
341 raise SystemError("Checking sizeof (%s) failed !" % type)
342
343 # We check declaration AND type because that's how distutils does it.
344 if config_cmd.check_decl('PY_LONG_LONG', headers=['Python.h']):
345 res = config_cmd.check_type_size('PY_LONG_LONG', headers=['Python.h'],
346 library_dirs=[pythonlib_dir()],
347 expected=expected['PY_LONG_LONG'])
348 if res >= 0:
349 private_defines.append(('SIZEOF_%s' % sym2def('PY_LONG_LONG'), '%d' % res))
350 public_defines.append(('NPY_SIZEOF_%s' % sym2def('PY_LONG_LONG'), '%d' % res))
351 else:
352 raise SystemError("Checking sizeof (%s) failed !" % 'PY_LONG_LONG')
353
354 res = config_cmd.check_type_size('long long',
355 expected=expected['long long'])
356 if res >= 0:
357 #private_defines.append(('SIZEOF_%s' % sym2def('long long'), '%d' % res))
358 public_defines.append(('NPY_SIZEOF_%s' % sym2def('long long'), '%d' % res))
359 else:
360 raise SystemError("Checking sizeof (%s) failed !" % 'long long')
361
362 if not config_cmd.check_decl('CHAR_BIT', headers=['Python.h']):
363 raise RuntimeError(
364 "Config wo CHAR_BIT is not supported"
365 ", please contact the maintainers")
366
367 return private_defines, public_defines
368
369 def check_mathlib(config_cmd):
370 # Testing the C math library
371 mathlibs = []
372 mathlibs_choices = [[], ['m'], ['cpml']]
373 mathlib = os.environ.get('MATHLIB')
374 if mathlib:
375 mathlibs_choices.insert(0, mathlib.split(','))
376 for libs in mathlibs_choices:
377 if config_cmd.check_func("exp", libraries=libs, decl=True, call=True):
378 mathlibs = libs
379 break
380 else:
381 raise EnvironmentError("math library missing; rerun "
382 "setup.py after setting the "
383 "MATHLIB env variable")
384 return mathlibs
385
386 def visibility_define(config):
387 """Return the define value to use for NPY_VISIBILITY_HIDDEN (may be empty
388 string)."""
389 hide = '__attribute__((visibility("hidden")))'
390 if config.check_gcc_function_attribute(hide, 'hideme'):
391 return hide
392 else:
393 return ''
394
395 def configuration(parent_package='',top_path=None):
396 from numpy.distutils.misc_util import Configuration, dot_join
397 from numpy.distutils.system_info import get_info
398
399 config = Configuration('core', parent_package, top_path)
400 local_dir = config.local_path
401 codegen_dir = join(local_dir, 'code_generators')
402
403 if is_released(config):
404 warnings.simplefilter('error', MismatchCAPIWarning)
405
406 # Check whether we have a mismatch between the set C API VERSION and the
407 # actual C API VERSION
408 check_api_version(C_API_VERSION, codegen_dir)
409
410 generate_umath_py = join(codegen_dir, 'generate_umath.py')
411 n = dot_join(config.name, 'generate_umath')
412 generate_umath = npy_load_module('_'.join(n.split('.')),
413 generate_umath_py, ('.py', 'U', 1))
414
415 header_dir = 'include/numpy' # this is relative to config.path_in_package
416
417 cocache = CallOnceOnly()
418
419 def generate_config_h(ext, build_dir):
420 target = join(build_dir, header_dir, 'config.h')
421 d = os.path.dirname(target)
422 if not os.path.exists(d):
423 os.makedirs(d)
424
425 if newer(__file__, target):
426 config_cmd = config.get_config_cmd()
427 log.info('Generating %s', target)
428
429 # Check sizeof
430 moredefs, ignored = cocache.check_types(config_cmd, ext, build_dir)
431
432 # Check math library and C99 math funcs availability
433 mathlibs = check_mathlib(config_cmd)
434 moredefs.append(('MATHLIB', ','.join(mathlibs)))
435
436 check_math_capabilities(config_cmd, moredefs, mathlibs)
437 moredefs.extend(cocache.check_ieee_macros(config_cmd)[0])
438 moredefs.extend(cocache.check_complex(config_cmd, mathlibs)[0])
439
440 # Signal check
441 if is_npy_no_signal():
442 moredefs.append('__NPY_PRIVATE_NO_SIGNAL')
443
444 # Windows checks
445 if sys.platform == 'win32' or os.name == 'nt':
446 win32_checks(moredefs)
447
448 # C99 restrict keyword
449 moredefs.append(('NPY_RESTRICT', config_cmd.check_restrict()))
450
451 # Inline check
452 inline = config_cmd.check_inline()
453
454 # Use relaxed stride checking
455 if NPY_RELAXED_STRIDES_CHECKING:
456 moredefs.append(('NPY_RELAXED_STRIDES_CHECKING', 1))
457
458 # Use bogus stride debug aid when relaxed strides are enabled
459 if NPY_RELAXED_STRIDES_DEBUG:
460 moredefs.append(('NPY_RELAXED_STRIDES_DEBUG', 1))
461
462 # Get long double representation
463 rep = check_long_double_representation(config_cmd)
464 moredefs.append(('HAVE_LDOUBLE_%s' % rep, 1))
465
466 # Py3K check
467 if sys.version_info[0] == 3:
468 moredefs.append(('NPY_PY3K', 1))
469
470 # Generate the config.h file from moredefs
471 with open(target, 'w') as target_f:
472 for d in moredefs:
473 if isinstance(d, str):
474 target_f.write('#define %s\n' % (d))
475 else:
476 target_f.write('#define %s %s\n' % (d[0], d[1]))
477
478 # define inline to our keyword, or nothing
479 target_f.write('#ifndef __cplusplus\n')
480 if inline == 'inline':
481 target_f.write('/* #undef inline */\n')
482 else:
483 target_f.write('#define inline %s\n' % inline)
484 target_f.write('#endif\n')
485
486 # add the guard to make sure config.h is never included directly,
487 # but always through npy_config.h
488 target_f.write(textwrap.dedent("""
489 #ifndef _NPY_NPY_CONFIG_H_
490 #error config.h should never be included directly, include npy_config.h instead
491 #endif
492 """))
493
494 print('File:', target)
495 with open(target) as target_f:
496 print(target_f.read())
497 print('EOF')
498 else:
499 mathlibs = []
500 with open(target) as target_f:
501 for line in target_f:
502 s = '#define MATHLIB'
503 if line.startswith(s):
504 value = line[len(s):].strip()
505 if value:
506 mathlibs.extend(value.split(','))
507
508 # Ugly: this can be called within a library and not an extension,
509 # in which case there is no libraries attribute (and none is
510 # needed).
511 if hasattr(ext, 'libraries'):
512 ext.libraries.extend(mathlibs)
513
514 incl_dir = os.path.dirname(target)
515 if incl_dir not in config.numpy_include_dirs:
516 config.numpy_include_dirs.append(incl_dir)
517
518 return target
519
520 def generate_numpyconfig_h(ext, build_dir):
521 """Depends on config.h: generate_config_h has to be called before !"""
522 # put common include directory in build_dir on search path
523 # allows using code generation in headers
524 config.add_include_dirs(join(build_dir, "src", "common"))
525 config.add_include_dirs(join(build_dir, "src", "npymath"))
526
527 target = join(build_dir, header_dir, '_numpyconfig.h')
528 d = os.path.dirname(target)
529 if not os.path.exists(d):
530 os.makedirs(d)
531 if newer(__file__, target):
532 config_cmd = config.get_config_cmd()
533 log.info('Generating %s', target)
534
535 # Check sizeof
536 ignored, moredefs = cocache.check_types(config_cmd, ext, build_dir)
537
538 if is_npy_no_signal():
539 moredefs.append(('NPY_NO_SIGNAL', 1))
540
541 if is_npy_no_smp():
542 moredefs.append(('NPY_NO_SMP', 1))
543 else:
544 moredefs.append(('NPY_NO_SMP', 0))
545
546 mathlibs = check_mathlib(config_cmd)
547 moredefs.extend(cocache.check_ieee_macros(config_cmd)[1])
548 moredefs.extend(cocache.check_complex(config_cmd, mathlibs)[1])
549
550 if NPY_RELAXED_STRIDES_CHECKING:
551 moredefs.append(('NPY_RELAXED_STRIDES_CHECKING', 1))
552
553 if NPY_RELAXED_STRIDES_DEBUG:
554 moredefs.append(('NPY_RELAXED_STRIDES_DEBUG', 1))
555
556 # Check whether we can use inttypes (C99) formats
557 if config_cmd.check_decl('PRIdPTR', headers=['inttypes.h']):
558 moredefs.append(('NPY_USE_C99_FORMATS', 1))
559
560 # visibility check
561 hidden_visibility = visibility_define(config_cmd)
562 moredefs.append(('NPY_VISIBILITY_HIDDEN', hidden_visibility))
563
564 # Add the C API/ABI versions
565 moredefs.append(('NPY_ABI_VERSION', '0x%.8X' % C_ABI_VERSION))
566 moredefs.append(('NPY_API_VERSION', '0x%.8X' % C_API_VERSION))
567
568 # Add moredefs to header
569 with open(target, 'w') as target_f:
570 for d in moredefs:
571 if isinstance(d, str):
572 target_f.write('#define %s\n' % (d))
573 else:
574 target_f.write('#define %s %s\n' % (d[0], d[1]))
575
576 # Define __STDC_FORMAT_MACROS
577 target_f.write(textwrap.dedent("""
578 #ifndef __STDC_FORMAT_MACROS
579 #define __STDC_FORMAT_MACROS 1
580 #endif
581 """))
582
583 # Dump the numpyconfig.h header to stdout
584 print('File: %s' % target)
585 with open(target) as target_f:
586 print(target_f.read())
587 print('EOF')
588 config.add_data_files((header_dir, target))
589 return target
590
591 def generate_api_func(module_name):
592 def generate_api(ext, build_dir):
593 script = join(codegen_dir, module_name + '.py')
594 sys.path.insert(0, codegen_dir)
595 try:
596 m = __import__(module_name)
597 log.info('executing %s', script)
598 h_file, c_file, doc_file = m.generate_api(os.path.join(build_dir, header_dir))
599 finally:
600 del sys.path[0]
601 config.add_data_files((header_dir, h_file),
602 (header_dir, doc_file))
603 return (h_file,)
604 return generate_api
605
606 generate_numpy_api = generate_api_func('generate_numpy_api')
607 generate_ufunc_api = generate_api_func('generate_ufunc_api')
608
609 config.add_include_dirs(join(local_dir, "src", "common"))
610 config.add_include_dirs(join(local_dir, "src"))
611 config.add_include_dirs(join(local_dir))
612
613 config.add_data_dir('include/numpy')
614 config.add_include_dirs(join('src', 'npymath'))
615 config.add_include_dirs(join('src', 'multiarray'))
616 config.add_include_dirs(join('src', 'umath'))
617 config.add_include_dirs(join('src', 'npysort'))
618
619 config.add_define_macros([("NPY_INTERNAL_BUILD", "1")]) # this macro indicates that Numpy build is in process
620 config.add_define_macros([("HAVE_NPY_CONFIG_H", "1")])
621 if sys.platform[:3] == "aix":
622 config.add_define_macros([("_LARGE_FILES", None)])
623 else:
624 config.add_define_macros([("_FILE_OFFSET_BITS", "64")])
625 config.add_define_macros([('_LARGEFILE_SOURCE', '1')])
626 config.add_define_macros([('_LARGEFILE64_SOURCE', '1')])
627
628 config.numpy_include_dirs.extend(config.paths('include'))
629
630 deps = [join('src', 'npymath', '_signbit.c'),
631 join('include', 'numpy', '*object.h'),
632 join(codegen_dir, 'genapi.py'),
633 ]
634
635 #######################################################################
636 # dummy module #
637 #######################################################################
638
639 # npymath needs the config.h and numpyconfig.h files to be generated, but
640 # build_clib cannot handle generate_config_h and generate_numpyconfig_h
641 # (don't ask). Because clibs are generated before extensions, we have to
642 # explicitly add an extension which has generate_config_h and
643 # generate_numpyconfig_h as sources *before* adding npymath.
644
645 config.add_extension('_dummy',
646 sources=[join('src', 'dummymodule.c'),
647 generate_config_h,
648 generate_numpyconfig_h,
649 generate_numpy_api]
650 )
651
652 #######################################################################
653 # npymath library #
654 #######################################################################
655
656 subst_dict = dict([("sep", os.path.sep), ("pkgname", "numpy.core")])
657
658 def get_mathlib_info(*args):
659 # Another ugly hack: the mathlib info is known once build_src is run,
660 # but we cannot use add_installed_pkg_config here either, so we only
661 # update the substitution dictionary during npymath build
662 config_cmd = config.get_config_cmd()
663
664 # Check that the toolchain works, to fail early if it doesn't
665 # (avoid late errors with MATHLIB which are confusing if the
666 # compiler does not work).
667 st = config_cmd.try_link('int main(void) { return 0;}')
668 if not st:
669 raise RuntimeError("Broken toolchain: cannot link a simple C program")
670 mlibs = check_mathlib(config_cmd)
671
672 posix_mlib = ' '.join(['-l%s' % l for l in mlibs])
673 msvc_mlib = ' '.join(['%s.lib' % l for l in mlibs])
674 subst_dict["posix_mathlib"] = posix_mlib
675 subst_dict["msvc_mathlib"] = msvc_mlib
676
677 npymath_sources = [join('src', 'npymath', 'npy_math_internal.h.src'),
678 join('src', 'npymath', 'npy_math.c'),
679 join('src', 'npymath', 'ieee754.c.src'),
680 join('src', 'npymath', 'npy_math_complex.c.src'),
681 join('src', 'npymath', 'halffloat.c')
682 ]
683
684 # Must be true for CRT compilers but not MinGW/cygwin. See gh-9977.
685 # Intel and Clang also don't seem happy with /GL
686 is_msvc = (platform.platform().startswith('Windows') and
687 platform.python_compiler().startswith('MS'))
688 config.add_installed_library('npymath',
689 sources=npymath_sources + [get_mathlib_info],
690 install_dir='lib',
691 build_info={
692 'include_dirs' : [], # empty list required for creating npy_math_internal.h
693 'extra_compiler_args' : (['/GL-'] if is_msvc else []),
694 })
695 config.add_npy_pkg_config("npymath.ini.in", "lib/npy-pkg-config",
696 subst_dict)
697 config.add_npy_pkg_config("mlib.ini.in", "lib/npy-pkg-config",
698 subst_dict)
699
700 #######################################################################
701 # npysort library #
702 #######################################################################
703
704 # This library is created for the build but it is not installed
705 npysort_sources = [join('src', 'common', 'npy_sort.h.src'),
706 join('src', 'npysort', 'quicksort.c.src'),
707 join('src', 'npysort', 'mergesort.c.src'),
708 join('src', 'npysort', 'timsort.c.src'),
709 join('src', 'npysort', 'heapsort.c.src'),
710 join('src', 'npysort', 'radixsort.c.src'),
711 join('src', 'common', 'npy_partition.h.src'),
712 join('src', 'npysort', 'selection.c.src'),
713 join('src', 'common', 'npy_binsearch.h.src'),
714 join('src', 'npysort', 'binsearch.c.src'),
715 ]
716 config.add_library('npysort',
717 sources=npysort_sources,
718 include_dirs=[])
719
720 #######################################################################
721 # multiarray_tests module #
722 #######################################################################
723
724 config.add_extension('_multiarray_tests',
725 sources=[join('src', 'multiarray', '_multiarray_tests.c.src'),
726 join('src', 'common', 'mem_overlap.c')],
727 depends=[join('src', 'common', 'mem_overlap.h'),
728 join('src', 'common', 'npy_extint128.h')],
729 libraries=['npymath'])
730
731 #######################################################################
732 # _multiarray_umath module - common part #
733 #######################################################################
734
735 common_deps = [
736 join('src', 'common', 'array_assign.h'),
737 join('src', 'common', 'binop_override.h'),
738 join('src', 'common', 'cblasfuncs.h'),
739 join('src', 'common', 'lowlevel_strided_loops.h'),
740 join('src', 'common', 'mem_overlap.h'),
741 join('src', 'common', 'npy_cblas.h'),
742 join('src', 'common', 'npy_config.h'),
743 join('src', 'common', 'npy_ctypes.h'),
744 join('src', 'common', 'npy_extint128.h'),
745 join('src', 'common', 'npy_import.h'),
746 join('src', 'common', 'npy_longdouble.h'),
747 join('src', 'common', 'templ_common.h.src'),
748 join('src', 'common', 'ucsnarrow.h'),
749 join('src', 'common', 'ufunc_override.h'),
750 join('src', 'common', 'umathmodule.h'),
751 join('src', 'common', 'numpyos.h'),
752 ]
753
754 common_src = [
755 join('src', 'common', 'array_assign.c'),
756 join('src', 'common', 'mem_overlap.c'),
757 join('src', 'common', 'npy_longdouble.c'),
758 join('src', 'common', 'templ_common.h.src'),
759 join('src', 'common', 'ucsnarrow.c'),
760 join('src', 'common', 'ufunc_override.c'),
761 join('src', 'common', 'numpyos.c'),
762 ]
763
764 blas_info = get_info('blas_opt', 0)
765 if blas_info and ('HAVE_CBLAS', None) in blas_info.get('define_macros', []):
766 extra_info = blas_info
767 # These files are also in MANIFEST.in so that they are always in
768 # the source distribution independently of HAVE_CBLAS.
769 common_src.extend([join('src', 'common', 'cblasfuncs.c'),
770 join('src', 'common', 'python_xerbla.c'),
771 ])
772 if uses_accelerate_framework(blas_info):
773 common_src.extend(get_sgemv_fix())
774 else:
775 extra_info = {}
776
777 #######################################################################
778 # _multiarray_umath module - multiarray part #
779 #######################################################################
780
781 multiarray_deps = [
782 join('src', 'multiarray', 'arrayobject.h'),
783 join('src', 'multiarray', 'arraytypes.h'),
784 join('src', 'multiarray', 'arrayfunction_override.h'),
785 join('src', 'multiarray', 'buffer.h'),
786 join('src', 'multiarray', 'calculation.h'),
787 join('src', 'multiarray', 'common.h'),
788 join('src', 'multiarray', 'convert_datatype.h'),
789 join('src', 'multiarray', 'convert.h'),
790 join('src', 'multiarray', 'conversion_utils.h'),
791 join('src', 'multiarray', 'ctors.h'),
792 join('src', 'multiarray', 'descriptor.h'),
793 join('src', 'multiarray', 'dragon4.h'),
794 join('src', 'multiarray', 'getset.h'),
795 join('src', 'multiarray', 'hashdescr.h'),
796 join('src', 'multiarray', 'iterators.h'),
797 join('src', 'multiarray', 'mapping.h'),
798 join('src', 'multiarray', 'methods.h'),
799 join('src', 'multiarray', 'multiarraymodule.h'),
800 join('src', 'multiarray', 'nditer_impl.h'),
801 join('src', 'multiarray', 'number.h'),
802 join('src', 'multiarray', 'refcount.h'),
803 join('src', 'multiarray', 'scalartypes.h'),
804 join('src', 'multiarray', 'sequence.h'),
805 join('src', 'multiarray', 'shape.h'),
806 join('src', 'multiarray', 'strfuncs.h'),
807 join('src', 'multiarray', 'typeinfo.h'),
808 join('src', 'multiarray', 'usertypes.h'),
809 join('src', 'multiarray', 'vdot.h'),
810 join('include', 'numpy', 'arrayobject.h'),
811 join('include', 'numpy', '_neighborhood_iterator_imp.h'),
812 join('include', 'numpy', 'npy_endian.h'),
813 join('include', 'numpy', 'arrayscalars.h'),
814 join('include', 'numpy', 'noprefix.h'),
815 join('include', 'numpy', 'npy_interrupt.h'),
816 join('include', 'numpy', 'npy_3kcompat.h'),
817 join('include', 'numpy', 'npy_math.h'),
818 join('include', 'numpy', 'halffloat.h'),
819 join('include', 'numpy', 'npy_common.h'),
820 join('include', 'numpy', 'npy_os.h'),
821 join('include', 'numpy', 'utils.h'),
822 join('include', 'numpy', 'ndarrayobject.h'),
823 join('include', 'numpy', 'npy_cpu.h'),
824 join('include', 'numpy', 'numpyconfig.h'),
825 join('include', 'numpy', 'ndarraytypes.h'),
826 join('include', 'numpy', 'npy_1_7_deprecated_api.h'),
827 # add library sources as distutils does not consider libraries
828 # dependencies
829 ] + npysort_sources + npymath_sources
830
831 multiarray_src = [
832 join('src', 'multiarray', 'alloc.c'),
833 join('src', 'multiarray', 'arrayobject.c'),
834 join('src', 'multiarray', 'arraytypes.c.src'),
835 join('src', 'multiarray', 'array_assign_scalar.c'),
836 join('src', 'multiarray', 'array_assign_array.c'),
837 join('src', 'multiarray', 'arrayfunction_override.c'),
838 join('src', 'multiarray', 'buffer.c'),
839 join('src', 'multiarray', 'calculation.c'),
840 join('src', 'multiarray', 'compiled_base.c'),
841 join('src', 'multiarray', 'common.c'),
842 join('src', 'multiarray', 'convert.c'),
843 join('src', 'multiarray', 'convert_datatype.c'),
844 join('src', 'multiarray', 'conversion_utils.c'),
845 join('src', 'multiarray', 'ctors.c'),
846 join('src', 'multiarray', 'datetime.c'),
847 join('src', 'multiarray', 'datetime_strings.c'),
848 join('src', 'multiarray', 'datetime_busday.c'),
849 join('src', 'multiarray', 'datetime_busdaycal.c'),
850 join('src', 'multiarray', 'descriptor.c'),
851 join('src', 'multiarray', 'dragon4.c'),
852 join('src', 'multiarray', 'dtype_transfer.c'),
853 join('src', 'multiarray', 'einsum.c.src'),
854 join('src', 'multiarray', 'flagsobject.c'),
855 join('src', 'multiarray', 'getset.c'),
856 join('src', 'multiarray', 'hashdescr.c'),
857 join('src', 'multiarray', 'item_selection.c'),
858 join('src', 'multiarray', 'iterators.c'),
859 join('src', 'multiarray', 'lowlevel_strided_loops.c.src'),
860 join('src', 'multiarray', 'mapping.c'),
861 join('src', 'multiarray', 'methods.c'),
862 join('src', 'multiarray', 'multiarraymodule.c'),
863 join('src', 'multiarray', 'nditer_templ.c.src'),
864 join('src', 'multiarray', 'nditer_api.c'),
865 join('src', 'multiarray', 'nditer_constr.c'),
866 join('src', 'multiarray', 'nditer_pywrap.c'),
867 join('src', 'multiarray', 'number.c'),
868 join('src', 'multiarray', 'refcount.c'),
869 join('src', 'multiarray', 'sequence.c'),
870 join('src', 'multiarray', 'shape.c'),
871 join('src', 'multiarray', 'scalarapi.c'),
872 join('src', 'multiarray', 'scalartypes.c.src'),
873 join('src', 'multiarray', 'strfuncs.c'),
874 join('src', 'multiarray', 'temp_elide.c'),
875 join('src', 'multiarray', 'typeinfo.c'),
876 join('src', 'multiarray', 'usertypes.c'),
877 join('src', 'multiarray', 'vdot.c'),
878 ]
879
880 #######################################################################
881 # _multiarray_umath module - umath part #
882 #######################################################################
883
884 def generate_umath_c(ext, build_dir):
885 target = join(build_dir, header_dir, '__umath_generated.c')
886 dir = os.path.dirname(target)
887 if not os.path.exists(dir):
888 os.makedirs(dir)
889 script = generate_umath_py
890 if newer(script, target):
891 with open(target, 'w') as f:
892 f.write(generate_umath.make_code(generate_umath.defdict,
893 generate_umath.__file__))
894 return []
895
896 umath_src = [
897 join('src', 'umath', 'umathmodule.c'),
898 join('src', 'umath', 'reduction.c'),
899 join('src', 'umath', 'funcs.inc.src'),
900 join('src', 'umath', 'simd.inc.src'),
901 join('src', 'umath', 'loops.h.src'),
902 join('src', 'umath', 'loops.c.src'),
903 join('src', 'umath', 'matmul.h.src'),
904 join('src', 'umath', 'matmul.c.src'),
905 join('src', 'umath', 'clip.h.src'),
906 join('src', 'umath', 'clip.c.src'),
907 join('src', 'umath', 'ufunc_object.c'),
908 join('src', 'umath', 'extobj.c'),
909 join('src', 'umath', 'cpuid.c'),
910 join('src', 'umath', 'scalarmath.c.src'),
911 join('src', 'umath', 'ufunc_type_resolution.c'),
912 join('src', 'umath', 'override.c'),
913 ]
914
915 umath_deps = [
916 generate_umath_py,
917 join('include', 'numpy', 'npy_math.h'),
918 join('include', 'numpy', 'halffloat.h'),
919 join('src', 'multiarray', 'common.h'),
920 join('src', 'multiarray', 'number.h'),
921 join('src', 'common', 'templ_common.h.src'),
922 join('src', 'umath', 'simd.inc.src'),
923 join('src', 'umath', 'override.h'),
924 join(codegen_dir, 'generate_ufunc_api.py'),
925 ]
926
927 config.add_extension('_multiarray_umath',
928 sources=multiarray_src + umath_src +
929 npymath_sources + common_src +
930 [generate_config_h,
931 generate_numpyconfig_h,
932 generate_numpy_api,
933 join(codegen_dir, 'generate_numpy_api.py'),
934 join('*.py'),
935 generate_umath_c,
936 generate_ufunc_api,
937 ],
938 depends=deps + multiarray_deps + umath_deps +
939 common_deps,
940 libraries=['npymath', 'npysort'],
941 extra_info=extra_info)
942
943 #######################################################################
944 # umath_tests module #
945 #######################################################################
946
947 config.add_extension('_umath_tests',
948 sources=[join('src', 'umath', '_umath_tests.c.src')])
949
950 #######################################################################
951 # custom rational dtype module #
952 #######################################################################
953
954 config.add_extension('_rational_tests',
955 sources=[join('src', 'umath', '_rational_tests.c.src')])
956
957 #######################################################################
958 # struct_ufunc_test module #
959 #######################################################################
960
961 config.add_extension('_struct_ufunc_tests',
962 sources=[join('src', 'umath', '_struct_ufunc_tests.c.src')])
963
964
965 #######################################################################
966 # operand_flag_tests module #
967 #######################################################################
968
969 config.add_extension('_operand_flag_tests',
970 sources=[join('src', 'umath', '_operand_flag_tests.c.src')])
971
972 config.add_data_dir('tests')
973 config.add_data_dir('tests/data')
974
975 config.make_svn_version_py()
976
977 return config
978
979 if __name__ == '__main__':
980 from numpy.distutils.core import setup
981 setup(configuration=configuration)
982
[end of numpy/core/setup.py]
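Both config-header generators in the file above turn a ``moredefs`` list that mixes bare strings and ``(name, value)`` pairs into ``#define`` lines. A minimal standalone sketch of just that rendering step (the names and values here are made up for illustration)::

    moredefs = [('MATHLIB', 'm'), ('NPY_SIZEOF_INT', '4'), '__NPY_PRIVATE_NO_SIGNAL']
    with open('config_demo.h', 'w') as target_f:
        for d in moredefs:
            if isinstance(d, str):
                target_f.write('#define %s\n' % (d))
            else:
                target_f.write('#define %s %s\n' % (d[0], d[1]))
    # config_demo.h now contains:
    #   #define MATHLIB m
    #   #define NPY_SIZEOF_INT 4
    #   #define __NPY_PRIVATE_NO_SIGNAL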
[start of numpy/core/setup_common.py]
1 from __future__ import division, absolute_import, print_function
2
3 # Code common to build tools
4 import sys
5 import warnings
6 import copy
7 import binascii
8
9 from numpy.distutils.misc_util import mingw32
10
11
12 #-------------------
13 # Versioning support
14 #-------------------
15 # How to change C_API_VERSION?
16 # - increase C_API_VERSION value
17 # - record the hash for the new C API with the script cversions.py
18 # and add the hash to cversions.txt
19 # The hash values are used to remind developers when the C API number was not
20 # updated: a mismatch generates a MismatchCAPIWarning, which is turned into an
21 # exception for released versions.
22
23 # Binary compatibility version number. This number is increased whenever the
24 # C-API is changed such that binary compatibility is broken, i.e. whenever a
25 # recompile of extension modules is needed.
26 C_ABI_VERSION = 0x01000009
27
28 # Minor API version. This number is increased whenever a change is made to the
29 # C-API -- whether it breaks binary compatibility or not. Some changes, such
30 # as adding a function pointer to the end of the function table, can be made
31 # without breaking binary compatibility. In this case, only the C_API_VERSION
32 # (*not* C_ABI_VERSION) would be increased. Whenever binary compatibility is
33 # broken, both C_API_VERSION and C_ABI_VERSION should be increased.
34 #
35 # 0x00000008 - 1.7.x
36 # 0x00000009 - 1.8.x
37 # 0x00000009 - 1.9.x
38 # 0x0000000a - 1.10.x
39 # 0x0000000a - 1.11.x
40 # 0x0000000a - 1.12.x
41 # 0x0000000b - 1.13.x
42 # 0x0000000c - 1.14.x
43 # 0x0000000c - 1.15.x
44 # 0x0000000d - 1.16.x
45 C_API_VERSION = 0x0000000d
46
47 class MismatchCAPIWarning(Warning):
48 pass
49
50 def is_released(config):
51 """Return True if a released version of numpy is detected."""
52 from distutils.version import LooseVersion
53
54 v = config.get_version('../version.py')
55 if v is None:
56 raise ValueError("Could not get version")
57 pv = LooseVersion(vstring=v).version
58 if len(pv) > 3:
59 return False
60 return True
61
62 def get_api_versions(apiversion, codegen_dir):
63 """
64 Return current C API checksum and the recorded checksum.
65
66 Return current C API checksum and the recorded checksum for the given
67 version of the C API version.
68
69 """
70 # Compute the hash of the current API as defined in the .txt files in
71 # code_generators
72 sys.path.insert(0, codegen_dir)
73 try:
74 m = __import__('genapi')
75 numpy_api = __import__('numpy_api')
76 curapi_hash = m.fullapi_hash(numpy_api.full_api)
77 apis_hash = m.get_versions_hash()
78 finally:
79 del sys.path[0]
80
81 return curapi_hash, apis_hash[apiversion]
82
83 def check_api_version(apiversion, codegen_dir):
84 """Emits a MismatchCAPIWarning if the C API version needs updating."""
85 curapi_hash, api_hash = get_api_versions(apiversion, codegen_dir)
86
87 # If different hash, it means that the api .txt files in
88 # codegen_dir have been updated without the API version being
89 # updated. Any modification in those .txt files should be reflected
90 # in the api and eventually abi versions.
91 # To compute the checksum of the current API, use
92 # code_generators/cversions.py script
93 if not curapi_hash == api_hash:
94 msg = ("API mismatch detected, the C API version "
95 "numbers have to be updated. Current C api version is %d, "
96 "with checksum %s, but recorded checksum for C API version %d in "
97 "codegen_dir/cversions.txt is %s. If functions were added in the "
98 "C API, you have to update C_API_VERSION in %s."
99 )
100 warnings.warn(msg % (apiversion, curapi_hash, apiversion, api_hash,
101 __file__),
102 MismatchCAPIWarning, stacklevel=2)
103 # Mandatory functions: if not found, fail the build
104 MANDATORY_FUNCS = ["sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs",
105 "floor", "ceil", "sqrt", "log10", "log", "exp", "asin",
106 "acos", "atan", "fmod", 'modf', 'frexp', 'ldexp']
107
108 # Standard functions which may not be available and for which we have a
109 # replacement implementation. Note that some of these are C99 functions.
110 OPTIONAL_STDFUNCS = ["expm1", "log1p", "acosh", "asinh", "atanh",
111 "rint", "trunc", "exp2", "log2", "hypot", "atan2", "pow",
112 "copysign", "nextafter", "ftello", "fseeko",
113 "strtoll", "strtoull", "cbrt", "strtold_l", "fallocate",
114 "backtrace", "madvise"]
115
116
117 OPTIONAL_HEADERS = [
118 # sse headers only enabled automatically on amd64/x32 builds
119 "xmmintrin.h", # SSE
120 "emmintrin.h", # SSE2
121 "immintrin.h", # AVX
122 "features.h", # for glibc version linux
123 "xlocale.h", # see GH#8367
124 "dlfcn.h", # dladdr
125 "sys/mman.h", #madvise
126 ]
127
128 # optional gcc compiler builtins and their call arguments and optionally a
129 # required header and definition name (HAVE_ prepended)
130 # call arguments are required as the compiler will do strict signature checking
131 OPTIONAL_INTRINSICS = [("__builtin_isnan", '5.'),
132 ("__builtin_isinf", '5.'),
133 ("__builtin_isfinite", '5.'),
134 ("__builtin_bswap32", '5u'),
135 ("__builtin_bswap64", '5u'),
136 ("__builtin_expect", '5, 0'),
137 ("__builtin_mul_overflow", '5, 5, (int*)5'),
138 # broken on OSX 10.11, make sure it's not optimized away
139 ("volatile int r = __builtin_cpu_supports", '"sse"',
140 "stdio.h", "__BUILTIN_CPU_SUPPORTS"),
141 # MMX only needed for icc, but some clangs don't have it
142 ("_m_from_int64", '0', "emmintrin.h"),
143 ("_mm_load_ps", '(float*)0', "xmmintrin.h"), # SSE
144 ("_mm_prefetch", '(float*)0, _MM_HINT_NTA',
145 "xmmintrin.h"), # SSE
146 ("_mm_load_pd", '(double*)0', "emmintrin.h"), # SSE2
147 ("__builtin_prefetch", "(float*)0, 0, 3"),
148 # check that the linker can handle avx
149 ("__asm__ volatile", '"vpand %xmm1, %xmm2, %xmm3"',
150 "stdio.h", "LINK_AVX"),
151 ("__asm__ volatile", '"vpand %ymm1, %ymm2, %ymm3"',
152 "stdio.h", "LINK_AVX2"),
153 ("__asm__ volatile", '"vpaddd %zmm1, %zmm2, %zmm3"',
154 "stdio.h", "LINK_AVX512F"),
155 ("__asm__ volatile", '"xgetbv"', "stdio.h", "XGETBV"),
156 ]
157
158 # function attributes
159 # tested via "int %s %s(void *);" % (attribute, name)
160 # function name will be converted to HAVE_<upper-case-name> preprocessor macro
161 OPTIONAL_FUNCTION_ATTRIBUTES = [('__attribute__((optimize("unroll-loops")))',
162 'attribute_optimize_unroll_loops'),
163 ('__attribute__((optimize("O3")))',
164 'attribute_optimize_opt_3'),
165 ('__attribute__((nonnull (1)))',
166 'attribute_nonnull'),
167 ('__attribute__((target ("avx")))',
168 'attribute_target_avx'),
169 ('__attribute__((target ("avx2")))',
170 'attribute_target_avx2'),
171 ('__attribute__((target ("avx512f")))',
172 'attribute_target_avx512f'),
173 ]
174
175 # function attributes with intrinsics
176 # To ensure your compiler can compile avx intrinsics with just the attributes
177 # gcc 4.8.4 supports the attributes but not the intrinsics
178 # tested via "#include<%s> int %s %s(void *){%s; return 0;};" % (header, attribute, name, code)
179 # function name will be converted to HAVE_<upper-case-name> preprocessor macro
180 OPTIONAL_FUNCTION_ATTRIBUTES_WITH_INTRINSICS = [('__attribute__((target("avx2")))',
181 'attribute_target_avx2_with_intrinsics',
182 '__m256 temp = _mm256_set1_ps(1.0)',
183 'immintrin.h'),
184 ('__attribute__((target("avx512f")))',
185 'attribute_target_avx512f_with_intrinsics',
186 '__m512 temp = _mm512_set1_ps(1.0)',
187 'immintrin.h'),
188 ]
189
190 # variable attributes tested via "int %s a" % attribute
191 OPTIONAL_VARIABLE_ATTRIBUTES = ["__thread", "__declspec(thread)"]
192
193 # Subset of OPTIONAL_STDFUNCS which may already have HAVE_* defined by Python.h
194 OPTIONAL_STDFUNCS_MAYBE = [
195 "expm1", "log1p", "acosh", "atanh", "asinh", "hypot", "copysign",
196 "ftello", "fseeko"
197 ]
198
199 # C99 functions: float and long double versions
200 C99_FUNCS = [
201 "sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs", "floor", "ceil",
202 "rint", "trunc", "sqrt", "log10", "log", "log1p", "exp", "expm1",
203 "asin", "acos", "atan", "asinh", "acosh", "atanh", "hypot", "atan2",
204 "pow", "fmod", "modf", 'frexp', 'ldexp', "exp2", "log2", "copysign",
205 "nextafter", "cbrt"
206 ]
207 C99_FUNCS_SINGLE = [f + 'f' for f in C99_FUNCS]
208 C99_FUNCS_EXTENDED = [f + 'l' for f in C99_FUNCS]
209 C99_COMPLEX_TYPES = [
210 'complex double', 'complex float', 'complex long double'
211 ]
212 C99_COMPLEX_FUNCS = [
213 "cabs", "cacos", "cacosh", "carg", "casin", "casinh", "catan",
214 "catanh", "ccos", "ccosh", "cexp", "cimag", "clog", "conj", "cpow",
215 "cproj", "creal", "csin", "csinh", "csqrt", "ctan", "ctanh"
216 ]
217
218 def fname2def(name):
219 return "HAVE_%s" % name.upper()
220
221 def sym2def(symbol):
222 define = symbol.replace(' ', '')
223 return define.upper()
224
225 def type2def(symbol):
226 define = symbol.replace(' ', '_')
227 return define.upper()
228
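# For example (illustrative values):
#   fname2def('expm1')       -> 'HAVE_EXPM1'
#   sym2def('long double')   -> 'LONGDOUBLE'   (spaces removed)
#   type2def('long double')  -> 'LONG_DOUBLE'  (spaces become underscores)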
229 # Code to detect long double representation taken from MPFR m4 macro
230 def check_long_double_representation(cmd):
231 cmd._check_compiler()
232 body = LONG_DOUBLE_REPRESENTATION_SRC % {'type': 'long double'}
233
234 # Disable whole program optimization (the default on vs2015, with python 3.5+)
235 # which generates intermediary object files and prevents checking the
236 # float representation.
237 if sys.platform == "win32" and not mingw32():
238 try:
239 cmd.compiler.compile_options.remove("/GL")
240 except (AttributeError, ValueError):
241 pass
242
243 # Disable multi-file interprocedural optimization in the Intel compiler on Linux
244 # which generates intermediary object files and prevents checking the
245 # float representation.
246 elif (sys.platform != "win32"
247 and cmd.compiler.compiler_type.startswith('intel')
248 and '-ipo' in cmd.compiler.cc_exe):
249 newcompiler = cmd.compiler.cc_exe.replace(' -ipo', '')
250 cmd.compiler.set_executables(
251 compiler=newcompiler,
252 compiler_so=newcompiler,
253 compiler_cxx=newcompiler,
254 linker_exe=newcompiler,
255 linker_so=newcompiler + ' -shared'
256 )
257
258 # We need to use _compile because we need the object filename
259 src, obj = cmd._compile(body, None, None, 'c')
260 try:
261 ltype = long_double_representation(pyod(obj))
262 return ltype
263 except ValueError:
264 # try linking to support CC="gcc -flto" or icc -ipo
265 # struct needs to be volatile so it isn't optimized away
266 body = body.replace('struct', 'volatile struct')
267 body += "int main(void) { return 0; }\n"
268 src, obj = cmd._compile(body, None, None, 'c')
269 cmd.temp_files.append("_configtest")
270 cmd.compiler.link_executable([obj], "_configtest")
271 ltype = long_double_representation(pyod("_configtest"))
272 return ltype
273 finally:
274 cmd._clean()
275
276 LONG_DOUBLE_REPRESENTATION_SRC = r"""
277 /* "before" is 16 bytes to ensure there's no padding between it and "x".
278 * We're not expecting any "long double" bigger than 16 bytes or with
279 * alignment requirements stricter than 16 bytes. */
280 typedef %(type)s test_type;
281
282 struct {
283 char before[16];
284 test_type x;
285 char after[8];
286 } foo = {
287 { '\0', '\0', '\0', '\0', '\0', '\0', '\0', '\0',
288 '\001', '\043', '\105', '\147', '\211', '\253', '\315', '\357' },
289 -123456789.0,
290 { '\376', '\334', '\272', '\230', '\166', '\124', '\062', '\020' }
291 };
292 """
293
294 def pyod(filename):
295 """Python implementation of the od UNIX utility (od -b, more exactly).
296
297 Parameters
298 ----------
299 filename : str
300 name of the file to get the dump from.
301
302 Returns
303 -------
304 out : seq
305 list of lines of od output
306
307 Note
308 ----
309 We only implement enough to get the necessary information for long double
310 representation; this is not intended as a compatible replacement for od.
311 """
312 def _pyod2():
313 out = []
314
315 with open(filename, 'rb') as fid:
316 yo = [int(oct(int(binascii.b2a_hex(o), 16))) for o in fid.read()]
317 for i in range(0, len(yo), 16):
318 line = ['%07d' % int(oct(i))]
319 line.extend(['%03d' % c for c in yo[i:i+16]])
320 out.append(" ".join(line))
321 return out
322
323 def _pyod3():
324 out = []
325
326 with open(filename, 'rb') as fid:
327 yo2 = [oct(o)[2:] for o in fid.read()]
328 for i in range(0, len(yo2), 16):
329 line = ['%07d' % int(oct(i)[2:])]
330 line.extend(['%03d' % int(c) for c in yo2[i:i+16]])
331 out.append(" ".join(line))
332 return out
333
334 if sys.version_info[0] < 3:
335 return _pyod2()
336 else:
337 return _pyod3()
338
339 _BEFORE_SEQ = ['000', '000', '000', '000', '000', '000', '000', '000',
340 '001', '043', '105', '147', '211', '253', '315', '357']
341 _AFTER_SEQ = ['376', '334', '272', '230', '166', '124', '062', '020']
342
343 _IEEE_DOUBLE_BE = ['301', '235', '157', '064', '124', '000', '000', '000']
344 _IEEE_DOUBLE_LE = _IEEE_DOUBLE_BE[::-1]
345 _INTEL_EXTENDED_12B = ['000', '000', '000', '000', '240', '242', '171', '353',
346 '031', '300', '000', '000']
347 _INTEL_EXTENDED_16B = ['000', '000', '000', '000', '240', '242', '171', '353',
348 '031', '300', '000', '000', '000', '000', '000', '000']
349 _MOTOROLA_EXTENDED_12B = ['300', '031', '000', '000', '353', '171',
350 '242', '240', '000', '000', '000', '000']
351 _IEEE_QUAD_PREC_BE = ['300', '031', '326', '363', '105', '100', '000', '000',
352 '000', '000', '000', '000', '000', '000', '000', '000']
353 _IEEE_QUAD_PREC_LE = _IEEE_QUAD_PREC_BE[::-1]
354 _IBM_DOUBLE_DOUBLE_BE = (['301', '235', '157', '064', '124', '000', '000', '000'] +
355 ['000'] * 8)
356 _IBM_DOUBLE_DOUBLE_LE = (['000', '000', '000', '124', '064', '157', '235', '301'] +
357 ['000'] * 8)
358
359 def long_double_representation(lines):
360 """Given a binary dump as given by GNU od -b, look for long double
361 representation."""
362
363 # Read contains a list of 32 items, each item is a byte (in octal
364 # representation, as a string). We 'slide' over the output until read is of
365 # the form before_seq + content + after_sequence, where content is the long double
366 # representation:
367 # - content is 12 bytes: 80 bits Intel representation
368 # - content is 16 bytes: 80 bits Intel representation (64 bits) or quad precision
369 # - content is 8 bytes: same as double (not implemented yet)
370 read = [''] * 32
371 saw = None
372 for line in lines:
373 # we skip the first word, as od -b outputs an index at the beginning of
374 # each line
375 for w in line.split()[1:]:
376 read.pop(0)
377 read.append(w)
378
379 # If the end of read is equal to the after_sequence, read contains
380 # the long double
381 if read[-8:] == _AFTER_SEQ:
382 saw = copy.copy(read)
383 # if the content was 12 bytes, we only have 32 - 8 - 12 = 12
384 # "before" bytes. In other words the first 4 "before" bytes went
385 # past the sliding window.
386 if read[:12] == _BEFORE_SEQ[4:]:
387 if read[12:-8] == _INTEL_EXTENDED_12B:
388 return 'INTEL_EXTENDED_12_BYTES_LE'
389 if read[12:-8] == _MOTOROLA_EXTENDED_12B:
390 return 'MOTOROLA_EXTENDED_12_BYTES_BE'
391 # if the content was 16 bytes, we are left with 32-8-16 = 16
392 # "before" bytes, so 8 went past the sliding window.
393 elif read[:8] == _BEFORE_SEQ[8:]:
394 if read[8:-8] == _INTEL_EXTENDED_16B:
395 return 'INTEL_EXTENDED_16_BYTES_LE'
396 elif read[8:-8] == _IEEE_QUAD_PREC_BE:
397 return 'IEEE_QUAD_BE'
398 elif read[8:-8] == _IEEE_QUAD_PREC_LE:
399 return 'IEEE_QUAD_LE'
400 elif read[8:-8] == _IBM_DOUBLE_DOUBLE_LE:
401 return 'IBM_DOUBLE_DOUBLE_LE'
402 elif read[8:-8] == _IBM_DOUBLE_DOUBLE_BE:
403 return 'IBM_DOUBLE_DOUBLE_BE'
404 # if the content was 8 bytes, left with 32-8-8 = 16 bytes
405 elif read[:16] == _BEFORE_SEQ:
406 if read[16:-8] == _IEEE_DOUBLE_LE:
407 return 'IEEE_DOUBLE_LE'
408 elif read[16:-8] == _IEEE_DOUBLE_BE:
409 return 'IEEE_DOUBLE_BE'
410
411 if saw is not None:
412 raise ValueError("Unrecognized format (%s)" % saw)
413 else:
414 # We never detected the after_sequence
415 raise ValueError("Could not lock sequences (%s)" % saw)
416
[end of numpy/core/setup_common.py]
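``long_double_representation`` above slides a 32-byte window over the octal dump until the last eight bytes equal ``_AFTER_SEQ``, then classifies whatever sits between the ``before`` and ``after`` markers. A self-contained sketch of the same windowing idea, using a made-up 8-byte payload rather than a real long-double pattern::

    AFTER = ['376', '334', '272', '230', '166', '124', '062', '020']
    words = ['000'] * 16 + ['111'] * 8 + AFTER + ['000'] * 4   # fake dump

    window = [''] * 32
    payload = None
    for w in words:
        window.pop(0)
        window.append(w)
        if window[-8:] == AFTER:      # trailing marker found
            payload = window[-16:-8]  # the 8 bytes right before it
    print(payload)  # ['111', '111', '111', '111', '111', '111', '111', '111']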
[start of numpy/distutils/ccompiler.py]
1 from __future__ import division, absolute_import, print_function
2
3 import os
4 import re
5 import sys
6 import types
7 import shlex
8 import time
9 import subprocess
10 from copy import copy
11 from distutils import ccompiler
12 from distutils.ccompiler import *
13 from distutils.errors import DistutilsExecError, DistutilsModuleError, \
14 DistutilsPlatformError, CompileError
15 from distutils.sysconfig import customize_compiler
16 from distutils.version import LooseVersion
17
18 from numpy.distutils import log
19 from numpy.distutils.compat import get_exception
20 from numpy.distutils.exec_command import (
21 filepath_from_subprocess_output, forward_bytes_to_stdout
22 )
23 from numpy.distutils.misc_util import cyg2win32, is_sequence, mingw32, \
24 get_num_build_jobs, \
25 _commandline_dep_string
26
27 # globals for parallel build management
28 try:
29 import threading
30 except ImportError:
31 import dummy_threading as threading
32 _job_semaphore = None
33 _global_lock = threading.Lock()
34 _processing_files = set()
35
36
37 def _needs_build(obj, cc_args, extra_postargs, pp_opts):
38 """
39 Check if an object needs to be rebuilt based on its dependencies
40
41 Parameters
42 ----------
43 obj : str
44 object file
45
46 Returns
47 -------
48 bool
49 """
50 # defined in unixcompiler.py
51 dep_file = obj + '.d'
52 if not os.path.exists(dep_file):
53 return True
54
55 # dep_file is a makefile containing 'object: dependencies'
56 # formatted like posix shell (spaces escaped, \ line continuations)
57 # the last line contains the compiler commandline arguments as some
58 # projects may compile an extension multiple times with different
59 # arguments
60 with open(dep_file, "r") as f:
61 lines = f.readlines()
62
63 cmdline =_commandline_dep_string(cc_args, extra_postargs, pp_opts)
64 last_cmdline = lines[-1]
65 if last_cmdline != cmdline:
66 return True
67
68 contents = ''.join(lines[:-1])
69 deps = [x for x in shlex.split(contents, posix=True)
70 if x != "\n" and not x.endswith(":")]
71
72 try:
73 t_obj = os.stat(obj).st_mtime
74
75 # check if any of the dependencies is newer than the object
76 # the dependencies includes the source used to create the object
77 for f in deps:
78 if os.stat(f).st_mtime > t_obj:
79 return True
80 except OSError:
81 # no object counts as newer (shouldn't happen if dep_file exists)
82 return True
83
84 return False
85
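# A dependency file written next to an object (obj + '.d') is a small
# gcc-style makefile fragment, roughly of the form (paths illustrative):
#
#   build/temp/foo.o: src/foo.c \
#    src/foo.h
#   <compiler command line recorded by _commandline_dep_string>
#
# shlex.split() of everything except that last line yields
# ['build/temp/foo.o:', 'src/foo.c', 'src/foo.h']; entries not ending in ':'
# are the dependencies whose mtimes _needs_build compares against the object.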
86
87 def replace_method(klass, method_name, func):
88 if sys.version_info[0] < 3:
89 m = types.MethodType(func, None, klass)
90 else:
91 # Py3k does not have unbound method anymore, MethodType does not work
92 m = lambda self, *args, **kw: func(self, *args, **kw)
93 setattr(klass, method_name, m)
94
95
96 ######################################################################
97 ## Method that subclasses may redefine. But don't call this method,
98 ## it is private to the CCompiler class and may return unexpected
99 ## results if used elsewhere. So, you have been warned.
100
101 def CCompiler_find_executables(self):
102 """
103 Does nothing here, but is called by the get_version method and can be
104 overridden by subclasses. In particular it is redefined in the `FCompiler`
105 class where more documentation can be found.
106
107 """
108 pass
109
110
111 replace_method(CCompiler, 'find_executables', CCompiler_find_executables)
112
113
114 # Using customized CCompiler.spawn.
115 def CCompiler_spawn(self, cmd, display=None):
116 """
117 Execute a command in a sub-process.
118
119 Parameters
120 ----------
121 cmd : str
122 The command to execute.
123 display : str or sequence of str, optional
124 The text to add to the log file kept by `numpy.distutils`.
125 If not given, `display` is equal to `cmd`.
126
127 Returns
128 -------
129 None
130
131 Raises
132 ------
133 DistutilsExecError
134 If the command failed, i.e. the exit status was not 0.
135
136 """
137 if display is None:
138 display = cmd
139 if is_sequence(display):
140 display = ' '.join(list(display))
141 log.info(display)
142 try:
143 subprocess.check_output(cmd)
144 except subprocess.CalledProcessError as exc:
145 o = exc.output
146 s = exc.returncode
147 except OSError:
148 # OSError doesn't have the same hooks for the exception
149 # output, but exec_command() historically would use an
150 # empty string for EnvironmentError (base class for
151 # OSError)
152 o = b''
153 # status previously used by exec_command() for parent
154 # of OSError
155 s = 127
156 else:
157 # use a convenience return here so that any kind of
158 # caught exception will execute the default code after the
159 # try / except block, which handles various exceptions
160 return None
161
162 if is_sequence(cmd):
163 cmd = ' '.join(list(cmd))
164
165 forward_bytes_to_stdout(o)
166
167 if re.search(b'Too many open files', o):
168 msg = '\nTry rerunning setup command until build succeeds.'
169 else:
170 msg = ''
171 raise DistutilsExecError('Command "%s" failed with exit status %d%s' %
172 (cmd, s, msg))
173
174 replace_method(CCompiler, 'spawn', CCompiler_spawn)
175
176 def CCompiler_object_filenames(self, source_filenames, strip_dir=0, output_dir=''):
177 """
178 Return the name of the object files for the given source files.
179
180 Parameters
181 ----------
182 source_filenames : list of str
183 The list of paths to source files. Paths can be either relative or
184 absolute, this is handled transparently.
185 strip_dir : bool, optional
186 Whether to strip the directory from the returned paths. If True,
187 the file name prepended by `output_dir` is returned. Default is False.
188 output_dir : str, optional
189 If given, this path is prepended to the returned paths to the
190 object files.
191
192 Returns
193 -------
194 obj_names : list of str
195 The list of paths to the object files corresponding to the source
196 files in `source_filenames`.
197
198 """
199 if output_dir is None:
200 output_dir = ''
201 obj_names = []
202 for src_name in source_filenames:
203 base, ext = os.path.splitext(os.path.normpath(src_name))
204 base = os.path.splitdrive(base)[1] # Chop off the drive
205 base = base[os.path.isabs(base):] # If abs, chop off leading /
206 if base.startswith('..'):
207 # Resolve starting relative path components, middle ones
208 # (if any) have been handled by os.path.normpath above.
209 i = base.rfind('..')+2
210 d = base[:i]
211 d = os.path.basename(os.path.abspath(d))
212 base = d + base[i:]
213 if ext not in self.src_extensions:
214 raise UnknownFileError("unknown file type '%s' (from '%s')" % (ext, src_name))
215 if strip_dir:
216 base = os.path.basename(base)
217 obj_name = os.path.join(output_dir, base + self.obj_extension)
218 obj_names.append(obj_name)
219 return obj_names
220
221 replace_method(CCompiler, 'object_filenames', CCompiler_object_filenames)
222
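# For instance (illustrative, assuming a POSIX compiler whose obj_extension
# is '.o'):
#
#   compiler.object_filenames(['src/foo.c'], output_dir='build')
#   -> ['build/src/foo.o']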
223 def CCompiler_compile(self, sources, output_dir=None, macros=None,
224 include_dirs=None, debug=0, extra_preargs=None,
225 extra_postargs=None, depends=None):
226 """
227 Compile one or more source files.
228
229 Please refer to the Python distutils API reference for more details.
230
231 Parameters
232 ----------
233 sources : list of str
234 A list of filenames
235 output_dir : str, optional
236 Path to the output directory.
237 macros : list of tuples
238 A list of macro definitions.
239 include_dirs : list of str, optional
240 The directories to add to the default include file search path for
241 this compilation only.
242 debug : bool, optional
243 Whether or not to output debug symbols in or alongside the object
244 file(s).
245 extra_preargs, extra_postargs : ?
246 Extra pre- and post-arguments.
247 depends : list of str, optional
248 A list of file names that all targets depend on.
249
250 Returns
251 -------
252 objects : list of str
253 A list of object file names, one per source file `sources`.
254
255 Raises
256 ------
257 CompileError
258 If compilation fails.
259
260 """
261 # This method is effective only with Python >=2.3 distutils.
262 # Any changes here should be applied also to fcompiler.compile
263 # method to support pre Python 2.3 distutils.
264 global _job_semaphore
265
266 jobs = get_num_build_jobs()
267
268 # setup semaphore to not exceed number of compile jobs when parallelized at
269 # extension level (python >= 3.5)
270 with _global_lock:
271 if _job_semaphore is None:
272 _job_semaphore = threading.Semaphore(jobs)
273
274 if not sources:
275 return []
276 # FIXME:RELATIVE_IMPORT
277 if sys.version_info[0] < 3:
278 from .fcompiler import FCompiler, is_f_file, has_f90_header
279 else:
280 from numpy.distutils.fcompiler import (FCompiler, is_f_file,
281 has_f90_header)
282 if isinstance(self, FCompiler):
283 display = []
284 for fc in ['f77', 'f90', 'fix']:
285 fcomp = getattr(self, 'compiler_'+fc)
286 if fcomp is None:
287 continue
288 display.append("Fortran %s compiler: %s" % (fc, ' '.join(fcomp)))
289 display = '\n'.join(display)
290 else:
291 ccomp = self.compiler_so
292 display = "C compiler: %s\n" % (' '.join(ccomp),)
293 log.info(display)
294 macros, objects, extra_postargs, pp_opts, build = \
295 self._setup_compile(output_dir, macros, include_dirs, sources,
296 depends, extra_postargs)
297 cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
298 display = "compile options: '%s'" % (' '.join(cc_args))
299 if extra_postargs:
300 display += "\nextra options: '%s'" % (' '.join(extra_postargs))
301 log.info(display)
302
303 def single_compile(args):
304 obj, (src, ext) = args
305 if not _needs_build(obj, cc_args, extra_postargs, pp_opts):
306 return
307
308 # check if we are currently already processing the same object
309 # happens when using the same source in multiple extensions
310 while True:
311 # need explicit lock as there is no atomic check and add with GIL
312 with _global_lock:
313 # file not being worked on, start working
314 if obj not in _processing_files:
315 _processing_files.add(obj)
316 break
317 # wait for the processing to end
318 time.sleep(0.1)
319
320 try:
321 # retrieve slot from our job semaphore and build
322 with _job_semaphore:
323 self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
324 finally:
325 # register being done processing
326 with _global_lock:
327 _processing_files.remove(obj)
328
329
330 if isinstance(self, FCompiler):
331 objects_to_build = list(build.keys())
332 f77_objects, other_objects = [], []
333 for obj in objects:
334 if obj in objects_to_build:
335 src, ext = build[obj]
336 if self.compiler_type=='absoft':
337 obj = cyg2win32(obj)
338 src = cyg2win32(src)
339 if is_f_file(src) and not has_f90_header(src):
340 f77_objects.append((obj, (src, ext)))
341 else:
342 other_objects.append((obj, (src, ext)))
343
344 # f77 objects can be built in parallel
345 build_items = f77_objects
346 # build f90 modules serial, module files are generated during
347 # compilation and may be used by files later in the list so the
348 # ordering is important
349 for o in other_objects:
350 single_compile(o)
351 else:
352 build_items = build.items()
353
354 if len(build) > 1 and jobs > 1:
355 # build parallel
356 import multiprocessing.pool
357 pool = multiprocessing.pool.ThreadPool(jobs)
358 pool.map(single_compile, build_items)
359 pool.close()
360 else:
361 # build serial
362 for o in build_items:
363 single_compile(o)
364
365 # Return *all* object filenames, not just the ones we just built.
366 return objects
367
368 replace_method(CCompiler, 'compile', CCompiler_compile)
369
370 def CCompiler_customize_cmd(self, cmd, ignore=()):
371 """
372 Customize compiler using distutils command.
373
374 Parameters
375 ----------
376 cmd : class instance
377 An instance inheriting from `distutils.cmd.Command`.
378 ignore : sequence of str, optional
379 List of `CCompiler` commands (without ``'set_'``) that should not be
380 altered. Strings that are checked for are:
381 ``('include_dirs', 'define', 'undef', 'libraries', 'library_dirs',
382 'rpath', 'link_objects')``.
383
384 Returns
385 -------
386 None
387
388 """
389 log.info('customize %s using %s' % (self.__class__.__name__,
390 cmd.__class__.__name__))
391 def allow(attr):
392 return getattr(cmd, attr, None) is not None and attr not in ignore
393
394 if allow('include_dirs'):
395 self.set_include_dirs(cmd.include_dirs)
396 if allow('define'):
397 for (name, value) in cmd.define:
398 self.define_macro(name, value)
399 if allow('undef'):
400 for macro in cmd.undef:
401 self.undefine_macro(macro)
402 if allow('libraries'):
403 self.set_libraries(self.libraries + cmd.libraries)
404 if allow('library_dirs'):
405 self.set_library_dirs(self.library_dirs + cmd.library_dirs)
406 if allow('rpath'):
407 self.set_runtime_library_dirs(cmd.rpath)
408 if allow('link_objects'):
409 self.set_link_objects(cmd.link_objects)
410
411 replace_method(CCompiler, 'customize_cmd', CCompiler_customize_cmd)
412
413 def _compiler_to_string(compiler):
414 props = []
415 mx = 0
416 keys = list(compiler.executables.keys())
417 for key in ['version', 'libraries', 'library_dirs',
418 'object_switch', 'compile_switch',
419 'include_dirs', 'define', 'undef', 'rpath', 'link_objects']:
420 if key not in keys:
421 keys.append(key)
422 for key in keys:
423 if hasattr(compiler, key):
424 v = getattr(compiler, key)
425 mx = max(mx, len(key))
426 props.append((key, repr(v)))
427 fmt = '%-' + repr(mx+1) + 's = %s'
428 lines = [fmt % prop for prop in props]
429 return '\n'.join(lines)
430
431 def CCompiler_show_customization(self):
432 """
433 Print the compiler customizations to stdout.
434
435 Parameters
436 ----------
437 None
438
439 Returns
440 -------
441 None
442
443 Notes
444 -----
445 Printing is only done if the distutils log threshold is < 2.
446
447 """
448 if 0:
449 for attrname in ['include_dirs', 'define', 'undef',
450 'libraries', 'library_dirs',
451 'rpath', 'link_objects']:
452 attr = getattr(self, attrname, None)
453 if not attr:
454 continue
455 log.info("compiler '%s' is set to %s" % (attrname, attr))
456 try:
457 self.get_version()
458 except Exception:
459 pass
460 if log._global_log.threshold<2:
461 print('*'*80)
462 print(self.__class__)
463 print(_compiler_to_string(self))
464 print('*'*80)
465
466 replace_method(CCompiler, 'show_customization', CCompiler_show_customization)
467
468 def CCompiler_customize(self, dist, need_cxx=0):
469 """
470 Do any platform-specific customization of a compiler instance.
471
472 This method calls `distutils.sysconfig.customize_compiler` for
473 platform-specific customization, as well as optionally remove a flag
474 to suppress spurious warnings in case C++ code is being compiled.
475
476 Parameters
477 ----------
478 dist : object
479 This parameter is not used for anything.
480 need_cxx : bool, optional
481 Whether or not C++ has to be compiled. If so (True), the
482 ``"-Wstrict-prototypes"`` option is removed to prevent spurious
483 warnings. Default is False.
484
485 Returns
486 -------
487 None
488
489 Notes
490 -----
491 All the default options used by distutils can be extracted with::
492
493 from distutils import sysconfig
494 sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS',
495 'CCSHARED', 'LDSHARED', 'SO')
496
497 """
498 # See FCompiler.customize for suggested usage.
499 log.info('customize %s' % (self.__class__.__name__))
500 customize_compiler(self)
501 if need_cxx:
502 # In general, distutils uses -Wstrict-prototypes, but this option is
503 # not valid for C++ code, only for C. Remove it if it's there to
504 # avoid a spurious warning on every compilation.
505 try:
506 self.compiler_so.remove('-Wstrict-prototypes')
507 except (AttributeError, ValueError):
508 pass
509
510 if hasattr(self, 'compiler') and 'cc' in self.compiler[0]:
511 if not self.compiler_cxx:
512 if self.compiler[0].startswith('gcc'):
513 a, b = 'gcc', 'g++'
514 else:
515 a, b = 'cc', 'c++'
516 self.compiler_cxx = [self.compiler[0].replace(a, b)]\
517 + self.compiler[1:]
518 else:
519 if hasattr(self, 'compiler'):
520 log.warn("#### %s #######" % (self.compiler,))
521 if not hasattr(self, 'compiler_cxx'):
522 log.warn('Missing compiler_cxx fix for ' + self.__class__.__name__)
523
524
525 # check if compiler supports gcc style automatic dependencies
526 # run on every extension so skip for known good compilers
527 if hasattr(self, 'compiler') and ('gcc' in self.compiler[0] or
528 'g++' in self.compiler[0] or
529 'clang' in self.compiler[0]):
530 self._auto_depends = True
531 elif os.name == 'posix':
532 import tempfile
533 import shutil
534 tmpdir = tempfile.mkdtemp()
535 try:
536 fn = os.path.join(tmpdir, "file.c")
537 with open(fn, "w") as f:
538 f.write("int a;\n")
539 self.compile([fn], output_dir=tmpdir,
540 extra_preargs=['-MMD', '-MF', fn + '.d'])
541 self._auto_depends = True
542 except CompileError:
543 self._auto_depends = False
544 finally:
545 shutil.rmtree(tmpdir)
546
547 return
548
549 replace_method(CCompiler, 'customize', CCompiler_customize)
550
551 def simple_version_match(pat=r'[-.\d]+', ignore='', start=''):
552 """
553 Simple matching of version numbers, for use in CCompiler and FCompiler.
554
555 Parameters
556 ----------
557 pat : str, optional
558 A regular expression matching version numbers.
559 Default is ``r'[-.\\d]+'``.
560 ignore : str, optional
561 A regular expression matching patterns to skip.
562 Default is ``''``, in which case nothing is skipped.
563 start : str, optional
564 A regular expression matching the start of where to start looking
565 for version numbers.
566 Default is ``''``, in which case searching is started at the
567 beginning of the version string given to `matcher`.
568
569 Returns
570 -------
571 matcher : callable
572 A function that is appropriate to use as the ``.version_match``
573 attribute of a `CCompiler` class. `matcher` takes a single parameter,
574 a version string.
575
576 """
577 def matcher(self, version_string):
578 # version string may appear in the second line, so getting rid
579 # of new lines:
580 version_string = version_string.replace('\n', ' ')
581 pos = 0
582 if start:
583 m = re.match(start, version_string)
584 if not m:
585 return None
586 pos = m.end()
587 while True:
588 m = re.search(pat, version_string[pos:])
589 if not m:
590 return None
591 if ignore and re.match(ignore, m.group(0)):
592 pos = m.end()
593 continue
594 break
595 return m.group(0)
596 return matcher
597
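# Typical use (illustrative) is as a class attribute on a compiler subclass:
#
#   class SomeVendorCCompiler(CCompiler):
#       version_cmd = ['svcc', '--version']
#       version_match = simple_version_match(start=r'Some Vendor C')
#
# CCompiler_get_version below then calls self.version_match(version_string)
# on the output of version_cmd and wraps the result in a LooseVersion.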
598 def CCompiler_get_version(self, force=False, ok_status=[0]):
599 """
600 Return compiler version, or None if compiler is not available.
601
602 Parameters
603 ----------
604 force : bool, optional
605 If True, force a new determination of the version, even if the
606 compiler already has a version attribute. Default is False.
607 ok_status : list of int, optional
608 The list of status values returned by the version look-up process
609 for which a version string is returned. If the status value is not
610 in `ok_status`, None is returned. Default is ``[0]``.
611
612 Returns
613 -------
614 version : str or None
615 Version string, in the format of `distutils.version.LooseVersion`.
616
617 """
618 if not force and hasattr(self, 'version'):
619 return self.version
620 self.find_executables()
621 try:
622 version_cmd = self.version_cmd
623 except AttributeError:
624 return None
625 if not version_cmd or not version_cmd[0]:
626 return None
627 try:
628 matcher = self.version_match
629 except AttributeError:
630 try:
631 pat = self.version_pattern
632 except AttributeError:
633 return None
634 def matcher(version_string):
635 m = re.match(pat, version_string)
636 if not m:
637 return None
638 version = m.group('version')
639 return version
640
641 try:
642 output = subprocess.check_output(version_cmd, stderr=subprocess.STDOUT)
643 except subprocess.CalledProcessError as exc:
644 output = exc.output
645 status = exc.returncode
646 except OSError:
647 # match the historical returns for a parent
648 # exception class caught by exec_command()
649 status = 127
650 output = b''
651 else:
652 # output isn't actually a filepath but we do this
653 # for now to match previous distutils behavior
654 output = filepath_from_subprocess_output(output)
655 status = 0
656
657 version = None
658 if status in ok_status:
659 version = matcher(output)
660 if version:
661 version = LooseVersion(version)
662 self.version = version
663 return version
664
665 replace_method(CCompiler, 'get_version', CCompiler_get_version)
666
667 def CCompiler_cxx_compiler(self):
668 """
669 Return the C++ compiler.
670
671 Parameters
672 ----------
673 None
674
675 Returns
676 -------
677 cxx : class instance
678 The C++ compiler, as a `CCompiler` instance.
679
680 """
681 if self.compiler_type in ('msvc', 'intelw', 'intelemw'):
682 return self
683
684 cxx = copy(self)
685 cxx.compiler_so = [cxx.compiler_cxx[0]] + cxx.compiler_so[1:]
686 if sys.platform.startswith('aix') and 'ld_so_aix' in cxx.linker_so[0]:
687 # AIX needs the ld_so_aix script included with Python
688 cxx.linker_so = [cxx.linker_so[0], cxx.compiler_cxx[0]] \
689 + cxx.linker_so[2:]
690 else:
691 cxx.linker_so = [cxx.compiler_cxx[0]] + cxx.linker_so[1:]
692 return cxx
693
694 replace_method(CCompiler, 'cxx_compiler', CCompiler_cxx_compiler)
695
696 compiler_class['intel'] = ('intelccompiler', 'IntelCCompiler',
697 "Intel C Compiler for 32-bit applications")
698 compiler_class['intele'] = ('intelccompiler', 'IntelItaniumCCompiler',
699 "Intel C Itanium Compiler for Itanium-based applications")
700 compiler_class['intelem'] = ('intelccompiler', 'IntelEM64TCCompiler',
701 "Intel C Compiler for 64-bit applications")
702 compiler_class['intelw'] = ('intelccompiler', 'IntelCCompilerW',
703 "Intel C Compiler for 32-bit applications on Windows")
704 compiler_class['intelemw'] = ('intelccompiler', 'IntelEM64TCCompilerW',
705 "Intel C Compiler for 64-bit applications on Windows")
706 compiler_class['pathcc'] = ('pathccompiler', 'PathScaleCCompiler',
707 "PathScale Compiler for SiCortex-based applications")
708 ccompiler._default_compilers += (('linux.*', 'intel'),
709 ('linux.*', 'intele'),
710 ('linux.*', 'intelem'),
711 ('linux.*', 'pathcc'),
712 ('nt', 'intelw'),
713 ('nt', 'intelemw'))
714
715 if sys.platform == 'win32':
716 compiler_class['mingw32'] = ('mingw32ccompiler', 'Mingw32CCompiler',
717 "Mingw32 port of GNU C Compiler for Win32"\
718 "(for MSC built Python)")
719 if mingw32():
720 # On windows platforms, we want to default to mingw32 (gcc)
721 # because msvc can't build blitz stuff.
722 log.info('Setting mingw32 as default compiler for nt.')
723 ccompiler._default_compilers = (('nt', 'mingw32'),) \
724 + ccompiler._default_compilers
725
726
727 _distutils_new_compiler = new_compiler
728 def new_compiler (plat=None,
729 compiler=None,
730 verbose=0,
731 dry_run=0,
732 force=0):
733 # Try first C compilers from numpy.distutils.
734 if plat is None:
735 plat = os.name
736 try:
737 if compiler is None:
738 compiler = get_default_compiler(plat)
739 (module_name, class_name, long_description) = compiler_class[compiler]
740 except KeyError:
741 msg = "don't know how to compile C/C++ code on platform '%s'" % plat
742 if compiler is not None:
743 msg = msg + " with '%s' compiler" % compiler
744 raise DistutilsPlatformError(msg)
745 module_name = "numpy.distutils." + module_name
746 try:
747 __import__ (module_name)
748 except ImportError:
749 msg = str(get_exception())
750 log.info('%s in numpy.distutils; trying from distutils',
751 str(msg))
752 module_name = module_name[6:]
753 try:
754 __import__(module_name)
755 except ImportError:
756 msg = str(get_exception())
757 raise DistutilsModuleError("can't compile C/C++ code: unable to load module '%s'" % \
758 module_name)
759 try:
760 module = sys.modules[module_name]
761 klass = vars(module)[class_name]
762 except KeyError:
763 raise DistutilsModuleError(("can't compile C/C++ code: unable to find class '%s' " +
764 "in module '%s'") % (class_name, module_name))
765 compiler = klass(None, dry_run, force)
766 log.debug('new_compiler returns %s' % (klass))
767 return compiler
768
769 ccompiler.new_compiler = new_compiler
770
771 _distutils_gen_lib_options = gen_lib_options
772 def gen_lib_options(compiler, library_dirs, runtime_library_dirs, libraries):
773 # the version of this function provided by CPython allows the following
774 # to return lists, which are unpacked automatically:
775 # - compiler.runtime_library_dir_option
776 # our version extends the behavior to:
777 # - compiler.library_dir_option
778 # - compiler.library_option
779 # - compiler.find_library_file
780 r = _distutils_gen_lib_options(compiler, library_dirs,
781 runtime_library_dirs, libraries)
782 lib_opts = []
783 for i in r:
784 if is_sequence(i):
785 lib_opts.extend(list(i))
786 else:
787 lib_opts.append(i)
788 return lib_opts
789 ccompiler.gen_lib_options = gen_lib_options
790
791 # Also fix up the various compiler modules, which do
792 # from distutils.ccompiler import gen_lib_options
793 # Don't bother with mwerks, as we don't support Classic Mac.
794 for _cc in ['msvc9', 'msvc', '_msvc', 'bcpp', 'cygwinc', 'emxc', 'unixc']:
795 _m = sys.modules.get('distutils.' + _cc + 'compiler')
796 if _m is not None:
797 setattr(_m, 'gen_lib_options', gen_lib_options)
798
799
[end of numpy/distutils/ccompiler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| numpy/numpy | ab87388a76c0afca4eb1159ab0ed232d502a8378 | NumPy 1.17 RC fails to compile with Intel C Compiler 2016
Compiling the NumPy 1.17.0rc2 sources with Intel C Compiler 2016, which does not yet implement `__builtin_cpu_supports("avx512f")`, fails with a compilation error:
```
icc: numpy/core/src/umath/cpuid.c
numpy/core/src/umath/cpuid.c(63): catastrophic error: invalid use of '__builtin_cpu_supports'
compilation aborted for numpy/core/src/umath/cpuid.c (code 1)
```
A recent Intel C Compiler (2019) builds the same sources just fine.
There is a config test that probes the compiler for support of `__builtin_cpu_supports`, but the test does not discriminate between the supported arguments.
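To make the request concrete, here is a minimal sketch of the kind of per-argument probe that would discriminate. It mirrors the `volatile int r = __builtin_cpu_supports(...)` pattern used by the existing config check, but the file itself is illustrative and not taken from the repository:
```
/* Illustrative probe only: compiled once per argument by the config step.
   ICC 2016 accepts the "sse" form but rejects the "avx512f" form at compile
   time, which is exactly the difference a single generic check cannot see. */
#include <stdio.h>

int main(void)
{
    volatile int r = __builtin_cpu_supports("avx512f");  /* or "sse" */
    printf("%d\n", (int)r);
    return 0;
}
```
Whether this translation unit compiles is what tells the build system that the specific argument, not just the builtin itself, is usable.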
| @mattip This is the issue with the 1.17 sources and the older compiler that I mentioned at the sprint.
To reproduce, I did the following:
1. `conda create -n b_np117 -c defaults --override-channels python setuptools cython pip pytest mkl-devel`
2. `git clone http://github.com/numpy/numpy.git --branch maintenance/1.17.x numpy_src`
3. `conda activate b_np117`
4. Edit `site.cfg` so that it contains:
```
(b_np117) [16:15:03 vmlin numpy_src_tmp]$ cat site.cfg
[mkl]
library_dirs = /tmp/miniconda/envs/b_np117/lib
include_dirs = /tmp/miniconda/envs/b_np117/include
lapack_libs = mkl_rt
mkl_libs = mkl_rt
```
5. Check compiler version:
```
(b_np117) [17:02:25 vmlin numpy_src_tmp]$ icc --version
icc (ICC) 16.0.3 20160415
Copyright (C) 1985-2016 Intel Corporation. All rights reserved.
```
6. Execute `CFLAGS="-DNDEBUG -I$PREFIX/include $CFLAGS" python setup.py config_cc --compiler=intelem config_fc --fcompiler=intelem build --force build_ext --inplace`
It seems we need someone with that compiler to test and fix this.
I definitely volunteer to test and fix it, but I would appreciate some guidance as to what to try tweaking and where.
Pinging @r-devulap, maybe you can have a look or know something? It seems he wrote (or modified) it, and he is also at Intel, albeit in a very different part.
@oleksandr-pavlyk could you try this fix from my branch https://github.com/r-devulap/numpy/tree/avx512-cpuid and let me know if it fixes your problem? If it does, I can submit a PR.
Never mind, I created a PR with a simpler fix. | 2019-07-21T14:28:45Z | <patch>
diff --git a/numpy/core/setup_common.py b/numpy/core/setup_common.py
--- a/numpy/core/setup_common.py
+++ b/numpy/core/setup_common.py
@@ -138,6 +138,8 @@ def check_api_version(apiversion, codegen_dir):
# broken on OSX 10.11, make sure its not optimized away
("volatile int r = __builtin_cpu_supports", '"sse"',
"stdio.h", "__BUILTIN_CPU_SUPPORTS"),
+ ("volatile int r = __builtin_cpu_supports", '"avx512f"',
+ "stdio.h", "__BUILTIN_CPU_SUPPORTS_AVX512F"),
# MMX only needed for icc, but some clangs don't have it
("_m_from_int64", '0', "emmintrin.h"),
("_mm_load_ps", '(float*)0', "xmmintrin.h"), # SSE
</patch> | [] | [] |
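For completeness, here is a hedged sketch of how a per-argument config symbol such as the one added by the patch is typically consumed on the C side. The `HAVE___BUILTIN_CPU_SUPPORTS_AVX512F` macro name assumes NumPy's usual `HAVE_<check-name>` convention, and the surrounding function is hypothetical; neither is taken from the repository.
```
#include <stdio.h>

/* Sketch only: guard the AVX512F query behind the assumed config macro so
   that compilers such as ICC 2016 never see the unsupported builtin. */
static int cpu_supports_avx512f(void)
{
#if defined(HAVE___BUILTIN_CPU_SUPPORTS_AVX512F)
    return __builtin_cpu_supports("avx512f");
#else
    return 0;  /* conservatively report "not supported" */
#endif
}

int main(void)
{
    printf("avx512f runtime support: %d\n", cpu_supports_avx512f());
    return 0;
}
```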