"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingServer and ThreadingServer mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to reqd all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <lkcl@samba.org> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
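To make the recipe above concrete, here is a minimal sketch of a threading TCP echo service (assuming the Python 3 module name ``socketserver``; the host and port values are arbitrary)::

    import socketserver

    class EchoHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # self.request is the TCP socket connected to the client
            data = self.request.recv(1024)
            self.request.sendall(data)

    class ThreadingTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        pass  # mix-in first, so its request-processing override wins

    with ThreadingTCPServer(("localhost", 9999), EchoHandler) as server:
        server.serve_forever()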
# (c) 2014 NAME <info@webtrein.nl> # https://github.com/timraasveld/ansible-string-split-filter/ # (c) 2014 NAME <drybjed@gmail.com> # http://debops.org/ # License: CC0 1.0 Universal # # Statement of Purpose # # The laws of most jurisdictions throughout the world automatically confer # exclusive Copyright and Related Rights (defined below) upon the creator and # subsequent owner(s) (each and all, an "owner") of an original work of # authorship and/or a database (each, a "Work"). # # Certain owners wish to permanently relinquish those rights to a Work for the # purpose of contributing to a commons of creative, cultural and scientific # works ("Commons") that the public can reliably and without fear of later # claims of infringement build upon, modify, incorporate in other works, reuse # and redistribute as freely as possible in any form whatsoever and for any # purposes, including without limitation commercial purposes. These owners may # contribute to the Commons to promote the ideal of a free culture and the # further production of creative, cultural and scientific works, or to gain # reputation or greater distribution for their Work in part through the use and # efforts of others. # # For these and/or other purposes and motivations, and without any expectation # of additional consideration or compensation, the person associating CC0 with a # Work (the "Affirmer"), to the extent that he or she is an owner of Copyright # and Related Rights in the Work, voluntarily elects to apply CC0 to the Work # and publicly distribute the Work under its terms, with knowledge of his or her # Copyright and Related Rights in the Work and the meaning and intended legal # effect of CC0 on those rights. # # 1. Copyright and Related Rights. A Work made available under CC0 may be # protected by copyright and related or neighboring rights ("Copyright and # Related Rights"). Copyright and Related Rights include, but are not limited # to, the following: # # i. the right to reproduce, adapt, distribute, perform, display, communicate, # and translate a Work; # # ii. moral rights retained by the original author(s) and/or performer(s); # # iii. publicity and privacy rights pertaining to a person's image or likeness # depicted in a Work; # # iv. rights protecting against unfair competition in regards to a Work, # subject to the limitations in paragraph 4(a), below; # # v. rights protecting the extraction, dissemination, use and reuse of data in # a Work; # # vi. database rights (such as those arising under Directive 96/9/EC of the # European Parliament and of the Council of 11 March 1996 on the legal # protection of databases, and under any national implementation thereof, # including any amended or successor version of such directive); and # # vii. other similar, equivalent or corresponding rights throughout the world # based on applicable law or treaty, and any national implementations thereof. # # 2. Waiver. 
To the greatest extent permitted by, but not in contravention of, # applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and # unconditionally waives, abandons, and surrenders all of Affirmer's Copyright # and Related Rights and associated claims and causes of action, whether now # known or unknown (including existing as well as future claims and causes of # action), in the Work (i) in all territories worldwide, (ii) for the maximum # duration provided by applicable law or treaty (including future time # extensions), (iii) in any current or future medium and for any number of # copies, and (iv) for any purpose whatsoever, including without limitation # commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes # the Waiver for the benefit of each member of the public at large and to the # detriment of Affirmer's heirs and successors, fully intending that such Waiver # shall not be subject to revocation, rescission, cancellation, termination, or # any other legal or equitable action to disrupt the quiet enjoyment of the Work # by the public as contemplated by Affirmer's express Statement of Purpose. # # 3. Public License Fallback. Should any part of the Waiver for any reason be # judged legally invalid or ineffective under applicable law, then the Waiver # shall be preserved to the maximum extent permitted taking into account # Affirmer's express Statement of Purpose. In addition, to the extent the Waiver # is so judged Affirmer hereby grants to each affected person a royalty-free, # non transferable, non sublicensable, non exclusive, irrevocable and # unconditional license to exercise Affirmer's Copyright and Related Rights in # the Work (i) in all territories worldwide, (ii) for the maximum duration # provided by applicable law or treaty (including future time extensions), (iii) # in any current or future medium and for any number of copies, and (iv) for any # purpose whatsoever, including without limitation commercial, advertising or # promotional purposes (the "License"). The License shall be deemed effective as # of the date CC0 was applied by Affirmer to the Work. Should any part of the # License for any reason be judged legally invalid or ineffective under # applicable law, such partial invalidity or ineffectiveness shall not # invalidate the remainder of the License, and in such case Affirmer hereby # affirms that he or she will not (i) exercise any of his or her remaining # Copyright and Related Rights in the Work or (ii) assert any associated claims # and causes of action with respect to the Work, in either case contrary to # Affirmer's express Statement of Purpose. # # 4. Limitations and Disclaimers. # # a. No trademark or patent rights held by Affirmer are waived, abandoned, # surrendered, licensed or otherwise affected by this document. # # b. Affirmer offers the Work as-is and makes no representations or warranties # of any kind concerning the Work, express, implied, statutory or otherwise, # including without limitation warranties of title, merchantability, fitness # for a particular purpose, non infringement, or the absence of latent or # other defects, accuracy, or the present or absence of errors, whether or not # discoverable, all to the greatest extent permissible under applicable law. # # c. Affirmer disclaims responsibility for clearing rights of other persons # that may apply to the Work or any use thereof, including without limitation # any person's Copyright and Related Rights in the Work. 
Further, Affirmer # disclaims responsibility for obtaining any necessary consents, permissions # or other rights required for any use of the Work. # # d. Affirmer understands and acknowledges that Creative Commons is not a # party to this document and has no duty or obligation with respect to this # CC0 or use of the Work. # # For more information, please see # <http://creativecommons.org/publicdomain/zero/1.0/>
""" ============ Array basics ============ Array types and conversions between types ========================================= NumPy supports a much greater variety of numerical types than Python does. This section shows which are available, and how to modify an array's data-type. ========== ========================================================== Data type Description ========== ========================================================== bool_ Boolean (True or False) stored as a byte int_ Default integer type (same as C ``long``; normally either ``int64`` or ``int32``) intc Identical to C ``int`` (normally ``int32`` or ``int64``) intp Integer used for indexing (same as C ``ssize_t``; normally either ``int32`` or ``int64``) int8 Byte (-128 to 127) int16 Integer (-32768 to 32767) int32 Integer (-2147483648 to 2147483647) int64 Integer (-9223372036854775808 to 9223372036854775807) uint8 Unsigned integer (0 to 255) uint16 Unsigned integer (0 to 65535) uint32 Unsigned integer (0 to 4294967295) uint64 Unsigned integer (0 to 18446744073709551615) float_ Shorthand for ``float64``. float16 Half precision float: sign bit, 5 bits exponent, 10 bits mantissa float32 Single precision float: sign bit, 8 bits exponent, 23 bits mantissa float64 Double precision float: sign bit, 11 bits exponent, 52 bits mantissa complex_ Shorthand for ``complex128``. complex64 Complex number, represented by two 32-bit floats (real and imaginary components) complex128 Complex number, represented by two 64-bit floats (real and imaginary components) ========== ========================================================== Additionally to ``intc`` the platform dependent C integer types ``short``, ``long``, ``longlong`` and their unsigned versions are defined. NumPy numerical types are instances of ``dtype`` (data-type) objects, each having unique characteristics. Once you have imported NumPy using :: >>> import numpy as np the dtypes are available as ``np.bool_``, ``np.float32``, etc. Advanced types, not listed in the table above, are explored in section :ref:`structured_arrays`. There are 5 basic numerical types representing booleans (bool), integers (int), unsigned integers (uint) floating point (float) and complex. Those with numbers in their name indicate the bitsize of the type (i.e. how many bits are needed to represent a single value in memory). Some types, such as ``int`` and ``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit vs. 64-bit machines). This should be taken into account when interfacing with low-level code (such as C or Fortran) where the raw memory is addressed. Data-types can be used as functions to convert python numbers to array scalars (see the array scalar section for an explanation), python sequences of numbers to arrays of that type, or as arguments to the dtype keyword that many numpy functions or methods accept. Some examples:: >>> import numpy as np >>> x = np.float32(1.0) >>> x 1.0 >>> y = np.int_([1,2,4]) >>> y array([1, 2, 4]) >>> z = np.arange(3, dtype=np.uint8) >>> z array([0, 1, 2], dtype=uint8) Array types can also be referred to by character codes, mostly to retain backward compatibility with older packages such as Numeric. Some documentation may still refer to these, for example:: >>> np.array([1, 2, 3], dtype='f') array([ 1., 2., 3.], dtype=float32) We recommend using dtype objects instead. To convert the type of an array, use the .astype() method (preferred) or the type itself as a function. 
For example::

    >>> z.astype(float)                 #doctest: +NORMALIZE_WHITESPACE
    array([ 0.,  1.,  2.])
    >>> np.int8(z)
    array([0, 1, 2], dtype=int8)

Note that, above, we use the *Python* float object as a dtype.  NumPy knows
that ``int`` refers to ``np.int_``, ``bool`` means ``np.bool_``, that
``float`` is ``np.float_`` and ``complex`` is ``np.complex_``.  The other
data-types do not have Python equivalents.

To determine the type of an array, look at the dtype attribute::

    >>> z.dtype
    dtype('uint8')

dtype objects also contain information about the type, such as its bit-width
and its byte-order.  The data type can also be used indirectly to query
properties of the type, such as whether it is an integer::

    >>> d = np.dtype(int)
    >>> d
    dtype('int32')
    >>> np.issubdtype(d, int)
    True
    >>> np.issubdtype(d, float)
    False

Array Scalars
=============

NumPy generally returns elements of arrays as array scalars (a scalar with an
associated dtype).  Array scalars differ from Python scalars, but for the
most part they can be used interchangeably (the primary exception is for
versions of Python older than v2.x, where integer array scalars cannot act
as indices for lists and tuples).  There are some exceptions, such as when
code requires very specific attributes of a scalar or when it checks
specifically whether a value is a Python scalar.  Generally, problems are
easily fixed by explicitly converting array scalars to Python scalars, using
the corresponding Python type function (e.g., ``int``, ``float``,
``complex``, ``str``, ``unicode``).

The primary advantage of using array scalars is that they preserve the array
type (Python may not have a matching scalar type available, e.g. ``int16``).
Therefore, the use of array scalars ensures identical behaviour between
arrays and scalars, irrespective of whether the value is inside an array or
not.  NumPy scalars also have many of the same methods arrays do.

Extended Precision
==================

Python's floating-point numbers are usually 64-bit floating-point numbers,
nearly equivalent to ``np.float64``.  In some unusual situations it may be
useful to use floating-point numbers with more precision.  Whether this is
possible in numpy depends on the hardware and on the development
environment: specifically, x86 machines provide hardware floating-point with
80-bit precision, and while most C compilers provide this as their
``long double`` type, MSVC (standard for Windows builds) makes
``long double`` identical to ``double`` (64 bits).  NumPy makes the
compiler's ``long double`` available as ``np.longdouble`` (and
``np.clongdouble`` for the complex numbers).  You can find out what your
numpy provides with ``np.finfo(np.longdouble)``.

NumPy does not provide a dtype with more precision than C ``long double``s;
in particular, the 128-bit IEEE quad precision data type (FORTRAN's
``REAL*16``) is not available.

For efficient memory alignment, ``np.longdouble`` is usually stored padded
with zero bits, either to 96 or 128 bits.  Which is more efficient depends
on hardware and development environment; typically on 32-bit systems they
are padded to 96 bits, while on 64-bit systems they are typically padded to
128 bits.  ``np.longdouble`` is padded to the system default; ``np.float96``
and ``np.float128`` are provided for users who want specific padding.  In
spite of the names, ``np.float96`` and ``np.float128`` provide only as much
precision as ``np.longdouble``, that is, 80 bits on most x86 machines and
64 bits in standard Windows builds.
Be warned that even if ``np.longdouble`` offers more precision than python ``float``, it is easy to lose that extra precision, since python often forces values to pass through ``float``. For example, the ``%`` formatting operator requires its arguments to be converted to standard python types, and it is therefore impossible to preserve extended precision even if many decimal places are requested. It can be useful to test your code with the value ``1 + np.finfo(np.longdouble).eps``. """
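A small sketch of that pitfall follows; the outputs shown assume an x86 build where ``np.longdouble`` has 80-bit precision (on standard Windows builds, where ``longdouble`` is 64-bit, the comparisons behave differently)::

    >>> import numpy as np
    >>> x = np.longdouble(1) + np.finfo(np.longdouble).eps
    >>> x > 1                   # the extra precision is really there
    True
    >>> float(x) > 1.0          # converting to Python float silently drops it
    False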
""" Signal Processing Tools ======================= Convolution: convolve: N-dimensional convolution. correlate: N-dimensional correlation. fftconvolve: N-dimensional convolution using the FFT. convolve2d: 2-dimensional convolution (more options). correlate2d: 2-dimensional correlation (more options). sepfir2d: Convolve with a 2-D separable FIR filter. B-splines: bspline: B-spline basis function of order n. gauss_spline: Gaussian approximation to the B-spline basis function. cspline1d: Coefficients for 1-D cubic (3rd order) B-spline. qspline1d: Coefficients for 1-D quadratic (2nd order) B-spline. cspline2d: Coefficients for 2-D cubic (3rd order) B-spline. qspline2d: Coefficients for 2-D quadratic (2nd order) B-spline. spline_filter: Smoothing spline (cubic) filtering of a rank-2 array. Filtering: order_filter: N-dimensional order filter. medfilt: N-dimensional median filter. medfilt2: 2-dimensional median filter (faster). wiener: N-dimensional wiener filter. symiirorder1: 2nd-order IIR filter (cascade of first-order systems). symiirorder2: 4th-order IIR filter (cascade of second-order systems). lfilter: 1-dimensional FIR and IIR digital linear filtering. lfiltic: Construct initial conditions for `lfilter`. deconvolve: 1-d deconvolution using lfilter. hilbert: Compute the analytic signal of a 1-d signal. get_window: Create FIR window. decimate: Downsample a signal. detrend: Remove linear and/or constant trends from data. resample: Resample using Fourier method. Filter design: bilinear: Return a digital filter from an analog filter using the bilinear transform. firwin: Windowed FIR filter design, with frequency response defined as pass and stop bands. firwin2: Windowed FIR filter design, with arbitrary frequency response. freqs: Analog filter frequency response. freqz: Digital filter frequency response. iirdesign: IIR filter design given bands and gains. iirfilter: IIR filter design given order and critical frequencies. invres: Inverse partial fraction expansion. kaiser_beta: Compute the Kaiser parameter beta, given the desired FIR filter attenuation. kaiser_atten: Compute the attenuation of a Kaiser FIR filter, given the number of taps and the transition width at discontinuities in the frequency response. kaiserord: Design a Kaiser window to limit ripple and width of transition region. remez: Optimal FIR filter design. residue: Partial fraction expansion of b(s) / a(s). residuez: Partial fraction expansion of b(z) / a(z). unique_roots: Unique roots and their multiplicities. Matlab-style IIR filter design: butter (buttord): Butterworth cheby1 (cheb1ord): Chebyshev Type I cheby2 (cheb2ord): Chebyshev Type II ellip (ellipord): Elliptic (Cauer) bessel: Bessel (no order selection available -- try butterod) Linear Systems: lti: linear time invariant system object. lsim: continuous-time simulation of output to linear system. lsim2: like lsim, but `scipy.integrate.odeint` is used. impulse: impulse response of linear, time-invariant (LTI) system. impulse2: like impulse, but `scipy.integrate.odeint` is used. step: step response of continous-time LTI system. step2: like step, but `scipy.integrate.odeint` is used. LTI Representations: tf2zpk: transfer function to zero-pole-gain. zpk2tf: zero-pole-gain to transfer function. tf2ss: transfer function to state-space. ss2tf: state-pace to transfer function. zpk2ss: zero-pole-gain to state-space. ss2zpk: state-space to pole-zero-gain. 
Waveforms:

    sawtooth:   Periodic sawtooth
    square:     Square wave
    gausspulse: Gaussian modulated sinusoid
    chirp:      Frequency swept cosine signal, with several frequency
                functions.
    sweep_poly: Frequency swept cosine signal; frequency is arbitrary
                polynomial.

Window functions:

    get_window:       Return a window of a given length and type.
    barthann:         Bartlett-Hann window
    bartlett:         Bartlett window
    blackman:         Blackman window
    blackmanharris:   Minimum 4-term Blackman-Harris window
    bohman:           Bohman window
    boxcar:           Boxcar window
    chebwin:          Dolph-Chebyshev window
    flattop:          Flat top window
    gaussian:         Gaussian window
    general_gaussian: Generalized Gaussian window
    hamming:          Hamming window
    hann:             Hann window
    kaiser:           Kaiser window
    nuttall:          Nuttall's minimum 4-term Blackman-Harris window
    parzen:           Parzen window
    slepian:          Slepian window
    triang:           Triangular window

Wavelets:

    daub:    return low-pass
    qmf:     return quadrature mirror filter from low-pass
    cascade: compute scaling function and wavelet from coefficients
    morlet:  Complex Morlet wavelet.
"""
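As a brief sketch of the design-then-filter workflow listed above (assuming ``scipy`` and ``numpy`` are installed; the order and cutoff are arbitrary)::

    import numpy as np
    from scipy import signal

    t = np.linspace(0, 1, 500)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(500)

    # 4th-order Butterworth low-pass, cutoff at 0.1 of the Nyquist rate
    b, a = signal.butter(4, 0.1)
    y = signal.lfilter(b, a, x)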
""" Objects for dealing with Chebyshev series. This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a `Chebyshev` class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its "parent" sub-package, `numpy.polynomial`). Constants --------- - `chebdomain` -- Chebyshev series default domain, [-1,1]. - `chebzero` -- (Coefficients of the) Chebyshev series that evaluates identically to 0. - `chebone` -- (Coefficients of the) Chebyshev series that evaluates identically to 1. - `chebx` -- (Coefficients of the) Chebyshev series for the identity map, ``f(x) = x``. Arithmetic ---------- - `chebadd` -- add two Chebyshev series. - `chebsub` -- subtract one Chebyshev series from another. - `chebmul` -- multiply two Chebyshev series. - `chebdiv` -- divide one Chebyshev series by another. - `chebpow` -- raise a Chebyshev series to an positive integer power - `chebval` -- evaluate a Chebyshev series at given points. - `chebval2d` -- evaluate a 2D Chebyshev series at given points. - `chebval3d` -- evaluate a 3D Chebyshev series at given points. - `chebgrid2d` -- evaluate a 2D Chebyshev series on a Cartesian product. - `chebgrid3d` -- evaluate a 3D Chebyshev series on a Cartesian product. Calculus -------- - `chebder` -- differentiate a Chebyshev series. - `chebint` -- integrate a Chebyshev series. Misc Functions -------------- - `chebfromroots` -- create a Chebyshev series with specified roots. - `chebroots` -- find the roots of a Chebyshev series. - `chebvander` -- Vandermonde-like matrix for Chebyshev polynomials. - `chebvander2d` -- Vandermonde-like matrix for 2D power series. - `chebvander3d` -- Vandermonde-like matrix for 3D power series. - `chebgauss` -- Gauss-Chebyshev quadrature, points and weights. - `chebweight` -- Chebyshev weight function. - `chebcompanion` -- symmetrized companion matrix in Chebyshev form. - `chebfit` -- least-squares fit returning a Chebyshev series. - `chebpts1` -- Chebyshev points of the first kind. - `chebpts2` -- Chebyshev points of the second kind. - `chebtrim` -- trim leading coefficients from a Chebyshev series. - `chebline` -- Chebyshev series representing given straight line. - `cheb2poly` -- convert a Chebyshev series to a polynomial. - `poly2cheb` -- convert a polynomial to a Chebyshev series. Classes ------- - `Chebyshev` -- A Chebyshev series class. See also -------- `numpy.polynomial` Notes ----- The implementations of multiplication, division, integration, and differentiation use the algebraic identities [1]_: .. math :: T_n(x) = \\frac{z^n + z^{-n}}{2} \\\\ z\\frac{dx}{dz} = \\frac{z - z^{-1}}{2}. where .. math :: x = \\frac{z + z^{-1}}{2}. These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a "z-series." References ---------- .. [1] NAME et al., "Combinatorial Trigonometry with Chebyshev Polynomials," *Journal of Statistical Planning and Inference 14*, 2008 (preprint: http://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf, pg. 4) """
# Test 64-bit COMPARE AND BRANCH in cases where the sheer number of # instructions causes some branches to be out of range. # RUN: python %s | llc -mtriple=s390x-linux-gnu | FileCheck %s # Construct: # # before0: # conditional branch to after0 # ... # beforeN: # conditional branch to after0 # main: # 0xffcc bytes, from MVIY instructions # conditional branch to main # after0: # ... # conditional branch to main # afterN: # # Each conditional branch sequence occupies 12 bytes if it uses a short # branch and 16 if it uses a long one. The ones before "main:" have to # take the branch length into account, which is 6 for short branches, # so the final (0x34 - 6) / 12 == 3 blocks can use short branches. # The ones after "main:" do not, so the first 0x34 / 12 == 4 blocks # can use short branches. The conservative algorithm we use makes # one of the forward branches unnecessarily long, as noted in the # check output below. # # CHECK: lgb [[REG:%r[0-5]]], 0(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 1(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 2(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 3(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 4(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # ...as mentioned above, the next one could be a CGRJE instead... # CHECK: lgb [[REG:%r[0-5]]], 5(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 6(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 7(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # ...main goes here... # CHECK: lgb [[REG:%r[0-5]]], 25(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 26(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 27(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 28(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 29(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 30(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 31(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 32(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]]
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# adapted from http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# -thepaul

# This is an implementation of wcwidth() and wcswidth() (defined in
# IEEE Std 1003.1-2001) for Unicode.
#
# http://www.opengroup.org/onlinepubs/007904975/functions/wcwidth.html
# http://www.opengroup.org/onlinepubs/007904975/functions/wcswidth.html
#
# In fixed-width output devices, Latin characters all occupy a single
# "cell" position of equal width, whereas ideographic CJK characters
# occupy two such cells.  Interoperability between terminal-line
# applications and (teletype-style) character terminals using the
# UTF-8 encoding requires agreement on which character should advance
# the cursor by how many cell positions.  No established formal
# standards exist at present on which Unicode character shall occupy
# how many cell positions on character terminals.  These routines are
# a first attempt at defining such behavior based on simple rules
# applied to data provided by the Unicode Consortium.
#
# For some graphical characters, the Unicode standard explicitly
# defines a character-cell width via the definition of the East Asian
# FullWidth (F), Wide (W), Half-width (H), and Narrow (Na) classes.
# In all these cases, there is no ambiguity about which width a
# terminal shall use.  For characters in the East Asian Ambiguous (A)
# class, the width choice depends purely on a preference of backward
# compatibility with either historic CJK or Western practice.
# Choosing single-width for these characters is easy to justify as
# the appropriate long-term solution, as the CJK practice of
# displaying these characters as double-width comes from historic
# implementation simplicity (8-bit encoded characters were displayed
# single-width and 16-bit ones double-width, even for Greek,
# Cyrillic, etc.) and not any typographic considerations.
#
# Much less clear is the choice of width for the Not East Asian
# (Neutral) class.  Existing practice does not dictate a width for any
# of these characters.  It would nevertheless make sense
# typographically to allocate two character cells to characters such
# as for instance EM SPACE or VOLUME INTEGRAL, which cannot be
# represented adequately with a single-width glyph.  The following
# routines at present merely assign a single-cell width to all
# neutral characters, in the interest of simplicity.  This is not
# entirely satisfactory and should be reconsidered before
# establishing a formal standard in this area.  At the moment, the
# decision of which Not East Asian (Neutral) characters should be
# represented by double-width glyphs cannot yet be answered by
# applying a simple rule from the Unicode database content.
# Setting up a proper standard for the behavior of UTF-8 character
# terminals will require a careful analysis not only of each Unicode
# character, but also of each presentation form, something the author
# of these routines has so far avoided doing.
#
# http://www.unicode.org/unicode/reports/tr11/
#
# NAME -- 2007-05-26 (Unicode 5.0)
#
# Permission to use, copy, modify, and distribute this software
# for any purpose and without fee is hereby granted. The author
# disclaims all warranties with regard to this software.
#
# Latest C version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c

# auxiliary function for binary search in interval table
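A minimal sketch of that auxiliary function, mirroring the bisearch() helper in the C original; the interval table shown is a tiny stand-in, not the real combining-character table:

    def _bisearch(ucs, table):
        """Return 1 if ucs falls inside any (first, last) interval in table."""
        lo, hi = 0, len(table) - 1
        if ucs < table[0][0] or ucs > table[hi][1]:
            return 0
        while lo <= hi:
            mid = (lo + hi) // 2
            if ucs > table[mid][1]:
                lo = mid + 1
            elif ucs < table[mid][0]:
                hi = mid - 1
            else:
                return 1
        return 0

    # stand-in table of (first, last) code point ranges, sorted and disjoint
    COMBINING = ((0x0300, 0x036F), (0x20D0, 0x20FF), (0xFE20, 0xFE2F))
    assert _bisearch(0x0301, COMBINING) == 1
    assert _bisearch(0x0041, COMBINING) == 0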
# Test 64-bit COMPARE LOGICAL AND BRANCH in cases where the sheer number of # instructions causes some branches to be out of range. # RUN: python %s | llc -mtriple=s390x-linux-gnu | FileCheck %s # Construct: # # before0: # conditional branch to after0 # ... # beforeN: # conditional branch to after0 # main: # 0xffcc bytes, from MVIY instructions # conditional branch to main # after0: # ... # conditional branch to main # afterN: # # Each conditional branch sequence occupies 12 bytes if it uses a short # branch and 16 if it uses a long one. The ones before "main:" have to # take the branch length into account, which is 6 for short branches, # so the final (0x34 - 6) / 12 == 3 blocks can use short branches. # The ones after "main:" do not, so the first 0x34 / 12 == 4 blocks # can use short branches. The conservative algorithm we use makes # one of the forward branches unnecessarily long, as noted in the # check output below. # # CHECK: lgb [[REG:%r[0-5]]], 0(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 1(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 2(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 3(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 4(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]] # ...as mentioned above, the next one could be a CLGRJL instead... # CHECK: lgb [[REG:%r[0-5]]], 5(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 6(%r3) # CHECK: clgrjl %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 7(%r3) # CHECK: clgrjl %r4, [[REG]], [[LABEL]] # ...main goes here... # CHECK: lgb [[REG:%r[0-5]]], 25(%r3) # CHECK: clgrjl %r4, [[REG]], [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 26(%r3) # CHECK: clgrjl %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 27(%r3) # CHECK: clgrjl %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 28(%r3) # CHECK: clgrjl %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 29(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 30(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 31(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 32(%r3) # CHECK: clgr %r4, [[REG]] # CHECK: jgl [[LABEL]]
""" # ggame The simple cross-platform sprite and game platform for Brython Server (Pygame, Tkinter to follow?). Ggame stands for a couple of things: "good game" (of course!) and also "git game" or "github game" because it is designed to operate with [Brython Server](http://runpython.com) in concert with Github as a backend file store. Ggame is **not** intended to be a full-featured gaming API, with every bell and whistle. Ggame is designed primarily as a tool for teaching computer programming, recognizing that the ability to create engaging and interactive games is a powerful motivator for many progamming students. Accordingly, any functional or performance enhancements that *can* be reasonably implemented by the user are left as an exercise. ## Functionality Goals The ggame library is intended to be trivially easy to use. For example: from ggame import App, ImageAsset, Sprite # Create a displayed object at 100,100 using an image asset Sprite(ImageAsset("ggame/bunny.png"), (100,100)) # Create the app, with a 500x500 pixel stage app = App(500,500) # Run the app app.run() ## Overview There are three major components to the `ggame` system: Assets, Sprites and the App. ### Assets Asset objects (i.e. `ggame.ImageAsset`, etc.) typically represent separate files that are provided by the "art department". These might be background images, user interface images, or images that represent objects in the game. In addition, `ggame.SoundAsset` is used to represent sound files (`.wav` or `.mp3` format) that can be played in the game. Ggame also extends the asset concept to include graphics that are generated dynamically at run-time, such as geometrical objects, e.g. rectangles, lines, etc. ### Sprites All of the visual aspects of the game are represented by instances of `ggame.Sprite` or subclasses of it. ### App Every ggame application must create a single instance of the `ggame.App` class (or a sub-class of it). Creating an instance of the `ggame.App` class will initiate creation of a pop-up window on your browser. Executing the app's `run` method will begin the process of refreshing the visual assets on the screen. ### Events No game is complete without a player and players produce events. Your code handles user input by registering to receive keyboard and mouse events using `ggame.App.listenKeyEvent` and `ggame.App.listenMouseEvent` methods. ## Execution Environment Ggame is designed to be executed in a web browser using [Brython](http://brython.info/), [Pixi.js](http://www.pixijs.com/) and [Buzz](http://buzz.jaysalvat.com/). The easiest way to do this is by executing from [runpython](http://runpython.com), with source code residing on [github](http://github.com). When using [runpython](http://runpython.com), you will have to configure your browser to allow popup windows. To use Ggame in your own application, you will minimally need to create a folder called `ggame` in your project. Within `ggame`, copy the `ggame.py`, `sysdeps.py` and `__init__.py` files from the [ggame project](https://github.com/BrythonServer/ggame). 
### Include Ggame as a Git Subtree

From the same directory as your own python sources (note: you must have an existing git repository with committed files in order for the following to work properly), execute the following terminal commands:

    git remote add -f ggame https://github.com/BrythonServer/ggame.git
    git merge -s ours --no-commit ggame/master
    mkdir ggame
    git read-tree --prefix=ggame/ -u ggame/master
    git commit -m "Merge ggame project as our subdirectory"

If you want to pull in updates from ggame in the future:

    git pull -s subtree ggame master

You can see an example of how a ggame subtree is used by examining the [Brython Server Spacewar](https://github.com/BrythonServer/Spacewar) repo on Github.

## Geometry

When referring to screen coordinates, note that the x-axis of the computer screen is *horizontal* with the zero position on the left hand side of the screen. The y-axis is *vertical* with the zero position at the **top** of the screen. Increasing positive y-coordinates correspond to the downward direction on the computer screen. Note that this is **different** from the way you may have learned about x and y coordinates in math class!
"""
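Building on the first example above, a hedged sketch of event handling with `ggame.App.listenKeyEvent`; the `"keydown"` event name, `"space"` key name, and the sprite's `x`/`y` attributes are assumptions about the ggame API, not confirmed here:

    from ggame import App, ImageAsset, Sprite

    bunny = Sprite(ImageAsset("ggame/bunny.png"), (100, 100))

    def hop(event):
        # assumed: Sprite position is exposed as x/y attributes
        bunny.y -= 10

    app = App(500, 500)
    # assumed event-type and key-name strings
    app.listenKeyEvent("keydown", "space", hop)
    app.run()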
""" Artifactor Artifactor is used to collect artifacts from a number of different plugins and put them into one place. Artifactor works around a series of events and is geared towards unit testing, though it is extensible and customizable enough that it can be used for a variety of purposes. The main guts of Artifactor is around the plugins. Before Artifactor can do anything it must have a configured plugin. This plugin is then configured to bind certain functions inside itself to certain events. When Artifactor is triggered to handle a certain event, it will tell the plugin that that particular event has happened and the plugin will respond accordingly. In addition to the plugins, Artifactor can also run certain callback functions before and after the hook function itself. These are call pre and post hook callbacks. Artifactor allows multiple pre and post hook callbacks to be defined per event, but does not guarantee the order that they are executed in. To allow data to be passed to and from hooks, Artifactor has the idea of global and event local values. The global values persist in the Artifactor instance for its lifetime, but the event local values are destroyed at the end of each event. Let's take the example of using the unit testing suite py.test as an example for Artifactor. Suppose we have a number of tests that run as part of a test suite and we wish to store a text file that holds the time the test was run and its result. This information is required to reside in a folder that is relevant to the test itself. This type of job is what Artifactor was designed for. To begin with, we need to create a plugin for Artifactor. Consider the following piece of code:: from artifactor import ArtifactorBasePlugin import time class Test(ArtifactorBasePlugin): def plugin_initialize(self): self.register_plugin_hook('start_test', self.start_test) self.register_plugin_hook('finish_test', self.finish_test) def start_test(self, test_name, test_location, artifact_path): filename = artifact_path + "-" + self.ident + ".log" with open(filename, "w") as f: f.write(test_name + "\n") f.write(str(time.time()) + "\n") def finish_test(self, test_name, artifact_path, test_result): filename = artifact_path + "-" + self.ident + ".log" with open(filename, "w+") as f: f.write(test_result) This is a typical plugin in Artifactor, it consists of 2 things. The first item is the special function called ``plugin_initialize()``. This is important and is equivilent to the ``__init__()`` that would usually be found in a class definition. Artifactor calls ``plugin_initialize()`` for each plugin as it loads it. Inside this section we register the hook functions to their associated events. Each event can only have a single function associated with it. Event names are able to be freely assigned so you can customize plugins to work to specific events for your use case. The ``register_plugin_hook()`` takes an event name as a string and a function to callback when that event is experienced. Next we have the hook functions themselves, ``start_test()`` and ``finish_test()``. These have arguments in their prototypes and these arguments are supplied by Artifactor and are created either as arguments to the ``fire_hook()`` function, which is responsible for actually telling Artifactor that an even has occured, or they are created in the pre hook script. Artifactor uses the global and local values referenced earlier to store these argument values. 
When a pre, post or hook callback finishes, it has the opportunity to supply
updates to both the global and local values dictionaries.  In doing this, a
pre-hook script can prepare data, which could be stored in the locals
dictionary and then passed to the actual plugin hook as a keyword argument.
Local values override global values.

We need to look at an example of this, but first we must configure Artifactor
and the plugin::

    log_dir: /home/me/artiout
    per_run: run  # test, run, None
    overwrite: True
    artifacts:
        test:
            enabled: True
            plugin: test

Here we have defined a ``log_dir`` which will be the root of all of our
artifacts.  We have asked Artifactor to group the artifacts by run, which
means that it will try to create a directory under the ``log_dir`` which
indicates which test "run" this was.  We can also specify a value of "test"
here, which will move the test run identifying folder up to the leaf in the
tree.

The ``log_dir`` and contents of the config are stored in global values as
``log_dir`` and ``artifactor_config`` respectively.  These are the only two
global values which are set up by Artifactor.

This data is then passed to Artifactor as a dict; we will assume a variable
name of ``config`` here.  Let's consider how we would run this test::

    art = artifactor.artifactor
    art.set_config(config)
    art.register_plugin(test.Test, "test")
    artifactor.initialize()

    art.fire_hook('start_session', run_id=2235)
    art.fire_hook('start_test', test_name="my_test", test_location="tests/mytest.py")
    art.fire_hook('finish_test', test_name="my_test", test_location="tests/mytest.py",
                  test_result="FAILED")
    art.fire_hook('finish_session')

The ``register_plugin()`` call is used to bind a plugin name to a class
definition.  Notice in the config section earlier, we have a ``plugin: test``
field.  This name ``test`` is what Artifactor will look for when trying to
find the appropriate plugin.  When we register the plugin with the
``register_plugin`` function, we take the ``test.Test`` class and essentially
give it the name ``test`` so that the names will tie up and the plugin will
be used.

Notice that we have sent some information along with the request to fire the
hook.  Ignoring the ``start_session`` event for a minute, the ``start_test``
event sends a ``test_name`` and a ``test_location``.  However, the
``start_test`` hook also requires an argument called ``artifact_path``.  This
is not supplied by the hook, and isn't set up as a global value, so how does
it get there?

Inside Artifactor, by default, a pre-hook callback called ``start_test()`` is
bound to the ``start_test`` event.  This callback returns a local values
update which includes ``artifact_path``.  This is how the ``artifact_path``
is returned.  This hook can be removed by running ``unregister_hook_callback``
with the name of the hook callback.
"""
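For illustration, a hypothetical pre-hook callback in the spirit described above; the function name, arguments, and path layout are a sketch, not Artifactor's actual built-in::

    import os

    def start_test_prehook(test_name, test_location, log_dir):
        # compute where this test's artifacts should live; the returned
        # dict is merged into the event-local values, so the plugin hook
        # receives artifact_path as a keyword argument
        artifact_path = os.path.join(log_dir, test_location, test_name)
        return {"artifact_path": artifact_path}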
# #!/usr/bin/env python3
# # -*- coding: utf-8 -*-
# # __author__ = 'zhangxianqiang'
#
# import asyncio, logging
#
# import aiomysql
#
# def log(sql, args=()):
#     logging.info('SQL: %s' % sql)
#
# @asyncio.coroutine
# def create_pool(loop, **kw):
#     logging.info('create database connection pool...')
#     global __pool
#     __pool = yield from aiomysql.create_pool(
#         host=kw.get('host', 'localhost'),
#         port=kw.get('port', 3306),
#         user=kw['user'],
#         db=kw['db'],
#         charset=kw.get('charset', 'utf8'),
#         autocommit=kw.get('autocommit', True),
#         maxsize=kw.get('maxsize', 10),
#         minsize=kw.get('minsize', 1),
#         loop=loop
#     )
#
# # Select
# @asyncio.coroutine
# def select(sql, args, size=None):
#     log(sql, args)
#     global __pool
#     with (yield from __pool) as conn:  # `with` catches exceptions and the syntax reads more cleanly
#         cur = yield from conn.cursor(aiomysql.DictCursor)
#         yield from cur.execute(sql.replace('?', '%s'), args or ())
#         if size:
#             rs = yield from cur.fetchmany(size)
#         else:
#             rs = yield from cur.fetchall()
#         yield from cur.close()
#         logging.info('rows returned: %s' % len(rs))
#         return rs
#
# # Insert, Update, Delete
# @asyncio.coroutine
# def execute(sql, args):
#     log(sql)
#     with (yield from __pool) as conn:
#         try:
#             cur = yield from conn.cursor()
#             yield from cur.execute(sql.replace('?', '%s'), args)
#             affected = cur.rowcount
#             yield from cur.close()
#         except BaseException as e:
#             raise
#         return affected
#
# # NOTE: ModelMetaclass (defined further down) must precede Model for this
# # commented-out module to actually run.
# class Model(dict, metaclass=ModelMetaclass):
#     def __init__(self, **kw):
#         super(Model, self).__init__(**kw)
#
#     def __getattr__(self, key):
#         try:
#             return self[key]
#         except KeyError:
#             raise AttributeError(r"'Model' object has no attribute '%s'" % key)
#
#     def __setattr__(self, key, value):
#         self[key] = value
#
#     def getValue(self, key):
#         return getattr(self, key, None)
#
#     def getValueOrDefault(self, key):
#         value = getattr(self, key, None)
#         if value is None:
#             field = self.__mappings__[key]
#             if field.default is not None:
#                 value = field.default() if callable(field.default) else field.default
#                 logging.debug('using default value for %s: %s' % (key, str(value)))
#                 setattr(self, key, value)
#         return value
#
# class Field(object):
#
#     def __init__(self, name, column_type, primary_key, default):
#         self.name = name
#         self.column_type = column_type
#         self.primary_key = primary_key
#         self.default = default
#
#     def __str__(self):
#         return '<%s, %s:%s>' % (self.__class__.__name__, self.column_type, self.name)
#
# class StringField(Field):
#
#     def __init__(self, name=None, primary_key=False, default=None, ddl='varchar(100)'):
#         super().__init__(name, ddl, primary_key, default)
#
# class ModelMetaclass(type):
#
#     def __new__(cls, name, bases, attrs):
#         # Exclude the Model class itself:
#         if name == 'Model':
#             return type.__new__(cls, name, bases, attrs)
#         # Get the table name:
#         tableName = attrs.get('__table__', None) or name
#         logging.info('found model: %s (table: %s)' % (name, tableName))
#         # Collect all Fields and the primary key name:
#         mappings = dict()
#         fields = []
#         primaryKey = None
#         for k, v in attrs.items():
#             if isinstance(v, Field):
#                 logging.info('  found mapping: %s ==> %s' % (k, v))
#                 mappings[k] = v
#                 if v.primary_key:
#                     # Found the primary key:
#                     if primaryKey:
#                         raise RuntimeError('Duplicate primary key for field: %s' % k)
#                     primaryKey = k
#                 else:
#                     fields.append(k)
#         if not primaryKey:
#             raise RuntimeError('Primary key not found.')
#         for k in mappings.keys():
#             attrs.pop(k)
#         escaped_fields = list(map(lambda f: '`%s`' % f, fields))
#         attrs['__mappings__'] = mappings  # save the attribute-to-column mapping
#         attrs['__table__'] = tableName
#         attrs['__primary_key__'] = primaryKey  # primary key attribute name
#         attrs['__fields__'] = fields  # attribute names other than the primary key
#         # Build the default SELECT, INSERT, UPDATE and DELETE statements.
#         # (create_args_string is referenced below but missing from this
#         # fragment; a minimal definition: ', '.join(['?'] * num).)
#         attrs['__select__'] = 'select `%s`, %s from `%s`' % (primaryKey, ', '.join(escaped_fields), tableName)
#         attrs['__insert__'] = 'insert into `%s` (%s, `%s`) values (%s)' % (tableName, ', '.join(escaped_fields), primaryKey, create_args_string(len(escaped_fields) + 1))
#         attrs['__update__'] = 'update `%s` set %s where `%s`=?' % (tableName, ', '.join(map(lambda f: '`%s`=?' % (mappings.get(f).name or f), fields)), primaryKey)
#         attrs['__delete__'] = 'delete from `%s` where `%s`=?' % (tableName, primaryKey)
#         return type.__new__(cls, name, bases, attrs)

# #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Stuff to parse AIFF-C and AIFF files. Unless explicitly stated otherwise, the description below is true both for AIFF-C files and AIFF files. An AIFF-C file has the following structure. +-----------------+ | FORM | +-----------------+ | <size> | +----+------------+ | | AIFC | | +------------+ | | <chunks> | | | . | | | . | | | . | +----+------------+ An AIFF file has the string "AIFF" instead of "AIFC". A chunk consists of an identifier (4 bytes) followed by a size (4 bytes, big endian order), followed by the data. The size field does not include the size of the 8 byte header. The following chunk types are recognized. FVER <version number of AIFF-C defining document> (AIFF-C only). MARK <# of markers> (2 bytes) list of markers: <marker ID> (2 bytes, must be > 0) <position> (4 bytes) <marker name> ("pstring") COMM <# of channels> (2 bytes) <# of sound frames> (4 bytes) <size of the samples> (2 bytes) <sampling frequency> (10 bytes, IEEE 80-bit extended floating point) in AIFF-C files only: <compression type> (4 bytes) <human-readable version of compression type> ("pstring") SSND <offset> (4 bytes, not used by this program) <blocksize> (4 bytes, not used by this program) <sound data> A pstring consists of 1 byte length, a string of characters, and 0 or 1 byte pad to make the total length even. Usage. Reading AIFF files: f = aifc.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). In some types of audio files, if the setpos() method is not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' for AIFF files) getcompname() -- returns human-readable version of compression type ('not compressed' for AIFF files) getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- get the list of marks in the audio file or None if there are no marks getmark(id) -- get mark with the specified id (raises an error if the mark does not exist) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell(), the position given to setpos() and the position of marks are all compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing AIFF files: f = aifc.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). 
This returns an instance of a class with the following public methods:
  aiff()          -- create an AIFF file (AIFF-C default)
  aifc()          -- create an AIFF-C file
  setnchannels(n) -- set the number of channels
  setsampwidth(n) -- set the sample width
  setframerate(n) -- set the frame rate
  setnframes(n)   -- set the number of frames
  setcomptype(type, name)
                  -- set the compression type and the
                     human-readable compression type
  setparams(tuple)
                  -- set all parameters at once
  setmark(id, pos, name)
                  -- add specified mark to the list of marks
  tell()          -- return current position in output file (useful
                     in combination with setmark())
  writeframesraw(data)
                  -- write audio frames without patching up the
                     file header
  writeframes(data)
                  -- write audio frames and patch up the file header
  close()         -- patch up the file header and close the
                     output file
You should set the parameters before the first writeframesraw or
writeframes.  The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters, except possibly the
compression type, and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes('') or
close() to patch up the sizes in the header.
Marks can be added anytime.  If there are any marks, you must call
close() after all frames have been written.
The close() method is called automatically when the class instance
is destroyed.

When a file is opened with the extension '.aiff', an AIFF file is
written, otherwise an AIFF-C file is written.  This default can be
changed by calling aiff() or aifc() before the first writeframes or
writeframesraw.
"""
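As a quick sketch of the reading API described above (``aifc`` ships with the standard library through Python 3.12, after which PEP 594 removes it; the filename is a placeholder)::

    import aifc

    with aifc.open("sound.aiff", "r") as f:
        print(f.getnchannels(), f.getsampwidth(), f.getframerate())
        data = f.readframes(f.getnframes())   # all frames as raw bytes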
# -*- encoding: utf-8 -*-
# back ported from CPython 3
#
# A. HISTORY OF THE SOFTWARE
# ==========================
#
# Python was created in the early 1990s by NAME at Stichting
# Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
# as a successor of a language called ABC.  NAME remains Python's
# principal author, although it includes many contributions from others.
#
# In 1995, NAME continued his work on Python at the Corporation for
# National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
# in Reston, Virginia where he released several versions of the
# software.
#
# In May 2000, NAME and the Python core development team moved to
# BeOpen.com to form the BeOpen PythonLabs team.  In October of the same
# year, the PythonLabs team moved to Digital Creations (now Zope
# Corporation, see http://www.zope.com).  In 2001, the Python Software
# Foundation (PSF, see http://www.python.org/psf/) was formed, a
# non-profit organization created specifically to own Python-related
# Intellectual Property.  Zope Corporation is a sponsoring member of
# the PSF.
#
# All Python releases are Open Source (see http://www.opensource.org for
# the Open Source Definition).  Historically, most, but not all, Python
# releases have also been GPL-compatible; the table below summarizes
# the various releases.
#
#     Release         Derived     Year        Owner       GPL-
#                     from                                compatible? (1)
#
#     0.9.0 thru 1.2              1991-1995   CWI         yes
#     1.3 thru 1.5.2  1.2         1995-1999   CNRI        yes
#     1.6             1.5.2       2000        CNRI        no
#     2.0             1.6         2000        BeOpen.com  no
#     1.6.1           1.6         2001        CNRI        yes (2)
#     2.1             2.0+1.6.1   2001        PSF         no
#     2.0.1           2.0+1.6.1   2001        PSF         yes
#     2.1.1           2.1+2.0.1   2001        PSF         yes
#     2.2             2.1.1       2001        PSF         yes
#     2.1.2           2.1.1       2002        PSF         yes
#     2.1.3           2.1.2       2002        PSF         yes
#     2.2.1           2.2         2002        PSF         yes
#     2.2.2           2.2.1       2002        PSF         yes
#     2.2.3           2.2.2       2003        PSF         yes
#     2.3             2.2.2       2002-2003   PSF         yes
#     2.3.1           2.3         2002-2003   PSF         yes
#     2.3.2           2.3.1       2002-2003   PSF         yes
#     2.3.3           2.3.2       2002-2003   PSF         yes
#     2.3.4           2.3.3       2004        PSF         yes
#     2.3.5           2.3.4       2005        PSF         yes
#     2.4             2.3         2004        PSF         yes
#     2.4.1           2.4         2005        PSF         yes
#     2.4.2           2.4.1       2005        PSF         yes
#     2.4.3           2.4.2       2006        PSF         yes
#     2.4.4           2.4.3       2006        PSF         yes
#     2.5             2.4         2006        PSF         yes
#     2.5.1           2.5         2007        PSF         yes
#     2.5.2           2.5.1       2008        PSF         yes
#     2.5.3           2.5.2       2008        PSF         yes
#     2.6             2.5         2008        PSF         yes
#     2.6.1           2.6         2008        PSF         yes
#     2.6.2           2.6.1       2009        PSF         yes
#     2.6.3           2.6.2       2009        PSF         yes
#     2.6.4           2.6.3       2009        PSF         yes
#     2.6.5           2.6.4       2010        PSF         yes
#     2.7             2.6         2010        PSF         yes
#
# Footnotes:
#
# (1) GPL-compatible doesn't mean that we're distributing Python under
#     the GPL.  All Python licenses, unlike the GPL, let you distribute
#     a modified version without making your changes open source.  The
#     GPL-compatible licenses make it possible to combine Python with
#     other software that is released under the GPL; the others don't.
#
# (2) According to NAME 1.6.1 is not GPL-compatible,
#     because its license has a choice of law clause.  According to
#     CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
#     is "not incompatible" with the GPL.
#
# Thanks to the many outside volunteers who have worked under NAME's
# direction to make these releases possible.
#
#
# B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
# ===============================================================
#
# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# --------------------------------------------
#
# 1.
This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013 Python Software Foundation; All Rights Reserved" are retained # in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement. # # # BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0 # ------------------------------------------- # # BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1 # # 1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an # office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the # Individual or Organization ("Licensee") accessing and otherwise using # this software in source or binary form and its associated # documentation ("the Software"). # # 2. Subject to the terms and conditions of this BeOpen Python License # Agreement, BeOpen hereby grants Licensee a non-exclusive, # royalty-free, world-wide license to reproduce, analyze, test, perform # and/or display publicly, prepare derivative works, distribute, and # otherwise use the Software alone or in any derivative version, # provided, however, that the BeOpen Python License is retained in the # Software, alone or in any derivative version prepared by Licensee. # # 3. BeOpen is making the Software available to Licensee on an "AS IS" # basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE # SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS # AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY # DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 5. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 6. This License Agreement shall be governed by and interpreted in all # respects by the law of the State of California, excluding conflict of # law provisions. Nothing in this License Agreement shall be deemed to # create any relationship of agency, partnership, or joint venture # between BeOpen and Licensee. This License Agreement does not grant # permission to use BeOpen trademarks or trade names in a trademark # sense to endorse or promote products or services of Licensee, or any # third party. As an exception, the "BeOpen Python" logos available at # http://www.pythonlabs.com/logos.html may be used according to the # permissions granted on that web page. # # 7. By copying, installing or otherwise using the software, Licensee # agrees to be bound by the terms and conditions of this License # Agreement. # # # CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1 # --------------------------------------- # # 1. This LICENSE AGREEMENT is between the Corporation for National # Research Initiatives, having an office at 1895 Preston White Drive, # Reston, VA 20191 ("CNRI"), and the Individual or Organization # ("Licensee") accessing and otherwise using Python 1.6.1 software in # source or binary form and its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, CNRI # hereby grants Licensee a nonexclusive, royalty-free, world-wide # license to reproduce, analyze, test, perform and/or display publicly, # prepare derivative works, distribute, and otherwise use Python 1.6.1 # alone or in any derivative version, provided, however, that CNRI's # License Agreement and CNRI's notice of copyright, i.e., "Copyright (c) # 1995-2001 Corporation for National Research Initiatives; All Rights # Reserved" are retained in Python 1.6.1 alone or in any derivative # version prepared by Licensee. Alternately, in lieu of CNRI's License # Agreement, Licensee may substitute the following text (omitting the # quotes): "Python 1.6.1 is made available subject to the terms and # conditions in CNRI's License Agreement. This Agreement together with # Python 1.6.1 may be located on the Internet using the following # unique, persistent identifier (known as a handle): 1895.22/1013. This # Agreement may also be obtained from a proxy server on the Internet # using the following URL: http://hdl.handle.net/1895.22/1013". # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python 1.6.1 or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python 1.6.1. # # 4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS" # basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # 1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. This License Agreement shall be governed by the federal # intellectual property law of the United States, including without # limitation the federal copyright law, and, to the extent such # U.S. federal law does not apply, by the law of the Commonwealth of # Virginia, excluding Virginia's conflict of law provisions. # Notwithstanding the foregoing, with regard to derivative works based # on Python 1.6.1 that incorporate non-separable material that was # previously distributed under the GNU General Public License (GPL), the # law of the Commonwealth of Virginia shall govern this License # Agreement only as to issues arising under or with respect to # Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this # License Agreement shall be deemed to create any relationship of # agency, partnership, or joint venture between CNRI and Licensee. This # License Agreement does not grant permission to use CNRI trademarks or # trade name in a trademark sense to endorse or promote products or # services of Licensee, or any third party. # # 8. By clicking on the "ACCEPT" button where indicated, or by copying, # installing or otherwise using Python 1.6.1, Licensee agrees to be # bound by the terms and conditions of this License Agreement. # # ACCEPT # # # CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2 # -------------------------------------------------- # # Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam, # The Netherlands. All rights reserved. # # Permission to use, copy, modify, and distribute this software and its # documentation for any purpose and without fee is hereby granted, # provided that the above copyright notice appear in all copies and that # both that copyright notice and this permission notice appear in # supporting documentation, and that the name of Stichting Mathematisch # Centrum or CWI not be used in advertising or publicity pertaining to # distribution of the software without specific, written prior # permission. # # STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO # THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND # FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE # FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT # OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""Stuff to parse AIFF-C and AIFF files. Unless explicitly stated otherwise, the description below is true both for AIFF-C files and AIFF files. An AIFF-C file has the following structure. +-----------------+ | FORM | +-----------------+ | <size> | +----+------------+ | | AIFC | | +------------+ | | <chunks> | | | . | | | . | | | . | +----+------------+ An AIFF file has the string "AIFF" instead of "AIFC". A chunk consists of an identifier (4 bytes) followed by a size (4 bytes, big endian order), followed by the data. The size field does not include the size of the 8 byte header. The following chunk types are recognized. FVER <version number of AIFF-C defining document> (AIFF-C only). MARK <# of markers> (2 bytes) list of markers: <marker ID> (2 bytes, must be > 0) <position> (4 bytes) <marker name> ("pstring") COMM <# of channels> (2 bytes) <# of sound frames> (4 bytes) <size of the samples> (2 bytes) <sampling frequency> (10 bytes, IEEE 80-bit extended floating point) in AIFF-C files only: <compression type> (4 bytes) <human-readable version of compression type> ("pstring") SSND <offset> (4 bytes, not used by this program) <blocksize> (4 bytes, not used by this program) <sound data> A pstring consists of 1 byte length, a string of characters, and 0 or 1 byte pad to make the total length even. Usage. Reading AIFF files: f = aifc.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). In some types of audio files, if the setpos() method is not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' for AIFF files) getcompname() -- returns human-readable version of compression type ('not compressed' for AIFF files) getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- get the list of marks in the audio file or None if there are no marks getmark(id) -- get mark with the specified id (raises an error if the mark does not exist) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell(), the position given to setpos() and the position of marks are all compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing AIFF files: f = aifc.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). 
This returns an instance of a class with the following public methods: aiff() -- create an AIFF file (AIFF-C default) aifc() -- create an AIFF-C file setnchannels(n) -- set the number of channels setsampwidth(n) -- set the sample width setframerate(n) -- set the frame rate setnframes(n) -- set the number of frames setcomptype(type, name) -- set the compression type and the human-readable compression type setparams(tuple) -- set all parameters at once setmark(id, pos, name) -- add specified mark to the list of marks tell() -- return current position in output file (useful in combination with setmark()) writeframesraw(data) -- write audio frames without patching up the file header writeframes(data) -- write audio frames and patch up the file header close() -- patch up the file header and close the output file You should set the parameters before the first writeframesraw or writeframes. The total number of frames does not need to be set, but when it is set to the correct value, the header does not have to be patched up. It is best to first set all parameters, including the compression type, and then write audio frames using writeframesraw. When all frames have been written, either call writeframes('') or close() to patch up the sizes in the header. Marks can be added anytime. If there are any marks, you must call close() after all frames have been written. The close() method is called automatically when the class instance is destroyed. When a file is opened with the extension '.aiff', an AIFF file is written, otherwise an AIFF-C file is written. This default can be changed by calling aiff() or aifc() before the first writeframes or writeframesraw. """
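As a concrete illustration of the interface described above, here is a minimal sketch that writes one second of 16-bit mono silence and reads it back with the stdlib aifc module; the path 'out.aifc' and the parameter values are placeholders chosen for the example:

    import aifc

    w = aifc.open('out.aifc', 'w')     # no '.aiff' extension, so AIFF-C
    w.setnchannels(1)
    w.setsampwidth(2)                  # bytes per sample
    w.setframerate(44100)
    w.writeframesraw(b'\x00\x00' * 44100)
    w.close()                          # patches up the sizes in the header

    r = aifc.open('out.aifc', 'r')
    try:
        assert r.getnframes() == 44100
        data = r.readframes(r.getnframes())
    finally:
        r.close()

Note that all parameters are set before the first write, as the text above requires.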
""" Define a simple format for saving numpy arrays to disk with the full information about them. The ``.npy`` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The ``.npz`` format is the standard format for persisting *multiple* NumPy arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` files, one for each array. Capabilities ------------ - Can represent all NumPy arrays including nested record arrays and object arrays. - Represents the data in its native binary form. - Supports Fortran-contiguous arrays directly. - Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C "long int" writes out an array with "long ints", a reading machine with 32-bit C "long ints" will yield an array with 64-bit integers. - Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able to create a solution in their preferred programming language to read most ``.npy`` files that he has been given without much documentation. - Allows memory-mapping of the data. See `open_memmep`. - Can be read from a filelike stream object instead of an actual file. - Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. Limitations ----------- - Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. .. warning:: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the ``loadedarray.view(correct_dtype)`` method. File extensions --------------- We recommend using the ``.npy`` and ``.npz`` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using ``.npy`` and ``.npz``. Version numbering ----------------- The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. Format Version 1.0 ------------------ The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. ``\\x01``. 
The next 1 byte is an unsigned byte: the minor version number of the file format, e.g. ``\\x00``. Note: the version of the file format is not tied to the version of the numpy package. The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. The next HEADER_LEN bytes form the header data describing the array's format. It is an ASCII string which contains a Python literal expression of a dictionary. It is terminated by a newline (``\\n``) and padded with spaces (``\\x20``) to make the total length of ``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment purposes. The dictionary contains three keys: "descr" : dtype.descr An object that can be passed as an argument to the `numpy.dtype` constructor to create the array's dtype. "fortran_order" : bool Whether the array data is Fortran-contiguous or not. Since Fortran-contiguous arrays are a common form of non-C-contiguity, we allow them to be written directly to disk for efficiency. "shape" : tuple of int The shape of the array. For repeatability and readability, the dictionary keys are sorted in alphabetic order. This is for convenience only. A writer SHOULD implement this if possible. A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. ``dtype.hasobject is True``), then the data is a Python pickle of the array. Otherwise the data is the contiguous (either C- or Fortran-, depending on ``fortran_order``) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that ``shape=()`` means there is 1 element) by ``dtype.itemsize``. Format Version 2.0 ------------------ The version 1.0 format only allowed the array header to have a total size of 65535 bytes. This can be exceeded by structured arrays with a large number of columns. The version 2.0 format extends the header size to 4 GiB. `numpy.save` will automatically save in 2.0 format if the data requires it, else it will always use the more compatible 1.0 format. The description of the fourth element of the header therefore has become: "The next 4 bytes form a little-endian unsigned int: the length of the header data HEADER_LEN." Notes ----- The ``.npy`` format, including reasons for creating it and a comparison of alternatives, is described fully in the "npy-format" NEP. """
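To make the byte layout described above concrete, here is an illustrative sketch that parses a version 1.0 or 2.0 header by hand. This is not NumPy's own reader (real code should use the routines in numpy.lib.format); read_npy_header is a hypothetical helper name and 'a.npy' is a placeholder path:

    import ast
    import struct
    import numpy as np

    # Write a small array so there is a concrete .npy file to inspect.
    np.save('a.npy', np.arange(12, dtype=np.int64).reshape(3, 4))

    def read_npy_header(fp):
        # fp is a binary file object positioned at the start of the file.
        if fp.read(6) != b'\x93NUMPY':
            raise ValueError('not an .npy file')
        major, minor = struct.unpack('<BB', fp.read(2))
        if major == 1:
            (header_len,) = struct.unpack('<H', fp.read(2))  # 2-byte length
        elif major == 2:
            (header_len,) = struct.unpack('<I', fp.read(4))  # 4-byte length
        else:
            raise ValueError('unsupported format version')
        # The header is a Python literal dict with keys 'descr',
        # 'fortran_order' and 'shape', so ast.literal_eval is safe here.
        return ast.literal_eval(fp.read(header_len).decode('latin1'))

    with open('a.npy', 'rb') as fp:
        header = read_npy_header(fp)
    # e.g. {'descr': '<i8', 'fortran_order': False, 'shape': (3, 4)}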
# GUI Application automation and testing library # Copyright (C) 2006 NAME This library is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # See the GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the # Free Software Foundation, Inc., # 59 Temple Place, # Suite 330, # Boston, MA 02111-1307 USA # #---------------------------------------------------------------- # def GetContextMenu(self): # rect = self.Rectangle # # # set the position of the context menu to be 2 pixels in from # # the control edge # pos = c_long ((rect.top+ 2 << 16) | (rect.left + 2)) # # # get the top window before trying to bring up a context menu # oldTopWin = FindWindow(0, 0) # # # send the message but time-out after 10 mili seconds # res = DWORD() # SendMessageTimeout ( # self.handle, # WM_CONTEXTMENU, # self.handle, # pos, # 0, # 100, # time out in miliseconds # byref(res)) # result # # # get the top window # popMenuWin = FindWindow(0, 0) # # # if no context menu has opened try right clicking the control ## if oldTopWin == popMenuWin: ## SendMessageTimeout ( ## self.handle, ## WM_RBUTTONDOWN, ## 0, ## pos, ## 0, ## 100, # time out in miliseconds ## byref(res)) # result ## ## SendMessageTimeout ( ## self.handle, ## WM_RBUTTONUP, ## 2, ## pos, ## 0, ## 100, # time out in miliseconds ## byref(res)) # result ## ## # wait another .1 of a second to allow it to display ## import time ## time.sleep(.1) ## ## # get the top window ## popMenuWin = FindWindow(0, 0) # # # # if we still haven't opened a popup menu # if oldTopWin == popMenuWin: # return # # # # get the MenuBar info from the PopupWindow which will # # give you the Menu Handle for the menu itself # mbi = MENUBARINFO() # mbi.cbSize = sizeof(MENUBARINFO) # ret = GetMenuBarInfo(popMenuWin, OBJID_CLIENT, 0, byref(mbi)) # # if ret: # GetMenuItems(mbi.hMenu) # self.properties["ContextMenu"] = GetMenuItems(mbi.hMenu) # # # # make sure the popup goes away! # self.handle.SendMessage (WM_CANCELMODE, 0, 0) # SendMessage (popMenuWin, WM_CANCELMODE, 0, 0) # # # if it's still open - then close it. 
# if IsWindowVisible(popMenuWin): # SendMessage (popMenuWin, WM_CLOSE, 0, 0) # #SendMessage (popMenuWin, WM_DESTROY, 0, 0) # #SendMessage (popMenuWin, WM_NCDESTROY , 0, 0) ##==================================================================== #def RemoveNonCurrentTabControls(dialog, childWindows): # # # find out if there is a tab control and get it if there is one # tab = None # for child in childWindows: # if child.Class == "SysTabControl32": # tab = child # break # # # # make a copy of childWindows # firstTabChildren = list(childWindows) # if tab: # # firstTabHandle = 0 # # # get the parent of the tab control # tabParent = GetParent(tab.handle) # # # find the control with that hwnd # tabParent = [c for c in childWindows if \ # c.handle == tabParent][0] # # # get the index of the parent # parentIdx = childWindows.index(tabParent) + 1 # # passedFirstTab = False # for child in childWindows[parentIdx:]: # # # if the current control is a dialog # if child.Class == "#32770": # # # if this is the first tab # if not passedFirstTab: # # then just skip it # passedFirstTab = True # firstTabHandle = child.handle # else: # # Ok so this is NOT the first tab # # remove the dialog control itself # try: # firstTabChildren.remove(child) # print "Removing(a): ", child.IsVisible, IsWindowChildOf(firstTabHandle, child.handle) # except ValueError: # pass # # # then remove all the children of that dialog # for x in GetChildWindows(child.handle): # try: # firstTabChildren.remove(x) # print "Removing(b): ", child.IsVisible, IsWindowChildOf(firstTabHandle, x) # except ValueError: # pass # # # return firstTabChildren ##==================================================================== #class Window(object): # #---------------------------------------------------------------- # def __init__(self, hwndOrProps): # # self.ref = None # # # if the argument passed in is a Handle # if isinstance(hwndOrProps, HwndWrapper): # # # wrap the handle # self.handle = hwndOrProps # # # Get the properties from this handle # self.properties = self.handle.GetProperties() # # else: # self.properties = XMLHelpers.ControlFromXML(hwndOrProps) # # # #---------------------------------------------------------------- # def __getattr__(self, name): # if name in self.properties: # return self.properties[name] # else: # raise AttributeError("'%s' has no attribute '%s'"% \ # (self.__class__.__name__, name)) # # #---------------------------------------------------------------- # def GetTitle(self): # return self.Titles[0] # Title = property(GetTitle) # # #---------------------------------------------------------------- # def GetRectangle(self): # return self.Rectangles[0] # Rectangle = property(GetRectangle) # # #---------------------------------------------------------------- # def GetFont(self): # return self.Fonts[0] # # #---------------------------------------------------------------- # def SetFont(self, font): # self.Fonts[0] = font # # Font = property(GetFont, SetFont) # # #---------------------------------------------------------------- # def Parent(self): # # do a preliminary construction to a Window # parent = self.handle.Parent() # # # reconstruct it to the correct type # return WindowClassRegistry().GetClass(parent.Class())(parent.handle)#.hwnd) # # #---------------------------------------------------------------- # def Style(self, flag = None): # style = self.properties['Style'] # if flag: # return style & flag == flag # else: # return style # # #---------------------------------------------------------------- # def ExStyle(self, 
flag = None): # exstyle = self.properties['ExStyle'] # if flag: # return exstyle & flag == flag # else: # return exstyle # # #---------------------------------------------------------------- # def __cmp__(self, other): # return cmp(self.handle, other.handle) # # #---------------------------------------------------------------- # def __hash__(self): # return hash(self.handle) # # #---------------------------------------------------------------- ## def __str__(self): ## return "%8d %-15s\t'%s'" % (self.handle, ## "'%s'"% self.FriendlyClassName, ## self.Title) # # ##==================================================================== #class DialogWindow(Window): # #---------------------------------------------------------------- # def __init__(self, hwndOrXML): # # self.children = [] # # # if the argument passed in is a window hanle # if isinstance(hwndOrXML, (int, long)): # # read the properties for the dialog itself # # Get the dialog Rectangle first - to get the control offset # # if not IsWindow(hwndOrXML): # raise "The window handle passed is not valid" # # Window.__init__(self, hwndOrXML) # # # else: # dialogElemReached = False # for ctrlElem in hwndOrXML.findall("CONTROL"): # # # if this is the first time through the dialog # if not dialogElemReached: # # initialise the Dialog itself # Window.__init__(self, ctrlElem) # dialogElemReached = True # # # otherwise contruct each control normally # else: # # get the class for the control with that window class # Klass = WindowClassRegistry().GetClass(ctrlElem.attrib["Class"]) # # # construct the object and append it # self.children.append(Klass(ctrlElem)) # # self.children.insert(0, self) # # # #---------------------------------------------------------------- # def AllControls(self): # return self.children # # # # #---------------------------------------------------------------- # def AddReference(self, ref): # # # #print "x"*20, ref.AllControls() # if len(self.AllControls()) != len(ref.AllControls()): # print len(self.AllControls()), len(ref.AllControls()) # raise "Numbers of controls on ref. dialog does not match Loc. 
dialog" # # # allIDsMatched = True # allClassesMatched = True # for idx, ctrl in enumerate(self.AllControls()): # refCtrl = ref.AllControls()[idx] # ctrl.ref = refCtrl # # if ctrl.ControlID != refCtrl.ControlID: # allIDsMatched = False # # if ctrl.Class != refCtrl.Class: # allClassesMatched = False # # toRet = 1 # # allIDsSameFlag = 2 # allClassesSameFlag = 4 # # if allIDsMatched: # toRet += allIDsSameFlag # # if allClassesMatched: # toRet += allClassesSameFlag # # return toRet ##==================================================================== #def DefaultWindowHwndReader(hwnd, dialogRect): # # ctrl = HwndWrapper(hwnd) # # return ctrl.GetProperties() # # if dialogRect: # # offset it's rect depending on it's parents # rect.left -= dialogRect.left # rect.top -= dialogRect.top # rect.right -= dialogRect.left # rect.bottom -= dialogRect.top ##==================================================================== #def GetClass(hwnd): # # get the className # className = (c_wchar * 257)() # GetClassName (hwnd, byref(className), 256) # return className.value # # ##==================================================================== #def GetTitle(hwnd): # # get the title # bufferSize = SendMessage (hwnd, WM_GETTEXTLENGTH, 0, 0) # title = (c_wchar * bufferSize)() # # if bufferSize: # bufferSize += 1 # SendMessage (hwnd, WM_GETTEXT, bufferSize, title) # # # return title.value # # ##==================================================================== #def GetChildWindows(dialog): # # # this will be filled in the callback function # childWindows = [] # # # callback function for EnumChildWindows # def enumChildProc(hWnd, LPARAM): # win = Window(hWnd) # # # construct an instance of the appropriate type # win = WindowClassRegistry().GetClass(win.Class)(hWnd) # # # append it to our list # childWindows.append(win) # # # return true to keep going # return True # # # # define the child proc type # EnumChildProc = WINFUNCTYPE(c_int, HWND, LPARAM) # proc = EnumChildProc(enumChildProc) # # # loop over all the children (callback called for each) # EnumChildWindows(dialog.hwnd, proc, 0) # # return childWindows # ##==================================================================== #def IsWindowChildOf(parent, child): ## try: ## parentHwnd = parent.handle ## except: ## parentHwnd = parent # # childHwnd = child # # while True: # curParentTest = GetParent(childHwnd) # # # # the current parent matches # if curParentTest == parentHwnd: # return True # # # we reached the very top of the heirarchy so no more parents # if curParentTest == 0: # return False # # # the next child is the current parent # childHwnd = curParentTest # # ===================================================== # DEAD XML STUFF CODE # ===================================================== # # props['ClientRect'] = ParseRect(ctrl.find("CLIENTRECT")) # # props['Rectangle'] = ParseRect(ctrl.find("RECTANGLE")) # # props['Font'] = ParseLogFont(ctrl.find("FONT")) # # props['Titles'] = ParseTitles(ctrl.find("TITLES")) # # for key, item in ctrl.attrib.items(): # props[key] = item ##----------------------------------------------------------------------------- #def StructToXML(struct, structElem): # "Convert a ctypes Structure to an ElementTree" # # for propName in struct._fields_: # propName = propName[0] # itemVal = getattr(struct, propName) # # # convert number to string # if isinstance(itemVal, (int, long)): # propName += "_LONG" # itemVal = unicode(itemVal) # # structElem.set(propName, EscapeSpecials(itemVal)) # # # # 
##==================================================================== #def XMLToMenuItems(element): # items = [] # # for item in element: # itemProp = {} # # itemProp["ID"] = int(item.attrib["ID_LONG"]) # itemProp["State"] = int(item.attrib["State_LONG"]) # itemProp["Type"] = int(item.attrib["Type_LONG"]) # itemProp["Text"] = item.attrib["Text"] # # #print itemProp # subMenu = item.find("MENUITEMS") # if subMenu: # itemProp["MenuItems"] = XMLToMenuItems(subMenu) # # items.append(itemProp) # return items # # ##==================================================================== #def ListToXML(listItems, itemName, element): # # for i, string in enumerate(listItems): # # element.set("%s%05d"%(itemName, i), EscapeSpecials(string)) # # # ##==================================================================== #def XMLToList(element): # items = [] # for subItem in element: # items.append(PropFromXML(subItem)) # ##==================================================================== #def PropFromXML(element): # # for propName in PropParsers: # if element.tag == propName.upper(): # # ToXMLFunc, FromXMLFunc = PropParsers[element.tag.upper()] # # return FromXMLFunc(element) # # raise "Unknown Element Type : %s"% element.tag # ##==================================================================== #def PropToXML(parentElement, name, value, ): # print "=" *20, name, value # # ToXMLFunc, FromXMLFunc = PropParsers[element.tag.upper()] # # return FromXMLFunc(element) # # # # #PropParsers = { # "Font" : (StructToXML, XMLToFont), # "Rectangle" : (StructToXML, XMLToRect), # "ClientRects" : (ListToXML, XMLToRect), # "Titles" : (TitlesToXML, XMLToTitles), # "Fonts" : (ListToXML, XMLToList), # #"Rectangles" : (ListToXML, XMLToList), # #"" : XMLToMenuItems, # #"" : XMLToMenuItems, # # #} # # USED TO BE NEEDED IN THE XML OUTPUT FUNCTION # # format the output xml a little # xml = open(fileName, "rb").read() # # import re # tags = re.compile(""" # ( # <[^/>]+> # An opening tag # )| # ( # </[^>]+> # A closing tag # )| # ( # <[^>]+/> # an empty element # ) # # """, re.VERBOSE) # # f = open(fileName, "wb") # indent = 0 # indentText = " " # for x in tags.finditer(xml): # # # closing tag # if x.group(2): # indent -= 1 # f.write(indentText*indent + x.group(2) + "\r\n") # # # if the element may have attributes # else: # if x.group(1): # text = x.group(1) # if x.group(3): # text = x.group(3) # # f.write(indentText*indent + text + "\r\n") # ## ## Trying to indent the attributes each on a single line ## but it is more complicated then it first looks :-( ## # items = text.split() # # # f.write(indentText*indent + items[0] + "\r\n") # indent += 1 # for i in items[1:]: # f.write(indentText*indent + i + "\r\n") # # indent -= 1 # # # opening tag # if x.group(1): # indent += 1 # # f.close() ##==================================================================== ## specializes XMLToStruct for Fonts #def XMLToFont(element): # font = LOGFONTW() # #print element.attrib # XMLToStruct(element, font) # # return font # ##==================================================================== ## specializes XMLToStruct for Rects #def XMLToRect(element): # rect = RECT() # # XMLToStruct(element, rect) # return rect # ##==================================================================== #def TitlesToXML(titles, titleElem): # for i, string in enumerate(titles): # # titleElem.set("s%05d"%i, EscapeSpecials(string)) # # ##==================================================================== # sys.exit() # # # SendText playing around!! 
- not required # SetForegroundWindow(handle) # SendText("here is some test text") # # # # some SendText testing # text = sys.argv[2] # import os.path # if os.path.exists(text): # text = open(text, "rb").read().decode('utf-16') # # print `text` # # #SendText("--%s--"%text) # for c in dialog.AllChildren(): # print "(%6d) %s - '%s'"% (c.handle,c.Class, c.Title) # if c.Class == "Edit": # #SetActiveWindow (c.handle) # SetForegroundWindow(c.handle) # #SetFocus(c.handle) # #EnableWindow(c.handle, True) # SendText("--%s--"%text) # # # # # get all the windows involved for this control #windows.extend(windows[0].Children()) # # # styles = { # "WS_DISABLED" : 134217728, # Variable c_long # "WS_BORDER" : 8388608, # Variable c_long # "WS_TABSTOP" : 65536, # Variable c_long << adds min, max, buttons # "WS_MINIMIZE" : 536870912, # Variable c_long # "WS_DLGFRAME" : 4194304, # Variable c_long # "WS_VISIBLE" : 268435456, # Variable c_long # "WS_OVERLAPPED" : 0, # Variable c_long # "WS_CHILD" : 1073741824, # Variable c_long # "WS_CAPTION" : 12582912, # Variable c_long # "WS_POPUPWINDOW" : 2156396544L, # Variable c_ulong # "WS_HSCROLL" : 1048576, # Variable c_long # "WS_THICKFRAME" : 262144, # Variable c_long << takes about 2 pixes off length # #"WS_SIZEBOX" : WS_THICKFRAME, # alias # "WS_OVERLAPPEDWINDOW" : 13565952, # Variable c_long << turns off sysmenu! # #"WS_TILEDWINDOW" : WS_OVERLAPPEDWINDOW, # alias # "WS_GROUP" : 131072, # Variable c_long << adds both minimize and maximize boxes # "WS_VSCROLL" : 2097152, # Variable c_long # "WS_MAXIMIZEBOX" : 65536, # Variable c_long << adds both minimize and maximize boxes # "WS_MAXIMIZE" : 16777216, # Variable c_long # "WS_SYSMENU" : 524288, # Variable c_long << adds/removes close box # "WS_POPUP" : 2147483648L, # Variable c_ulong # "WS_MINIMIZEBOX" : 131072, # Variable c_long << adds both minimize and maximize boxes # "WS_CLIPCHILDREN" : 33554432, # Variable c_long # #"WS_ICONIC" : WS_MINIMIZE, # alias # "WS_CLIPSIBLINGS" : 67108864, # Variable c_long # #"WS_TILED" : WS_OVERLAPPED, # alias # "WS_CHILDWINDOW" : 1073741824, # Variable c_long # # } # # exstyles = { # "WS_EX_TOOLWINDOW" : 128, # Variable c_long << small font # "WS_EX_MDICHILD" : 64, # Variable c_long # "WS_EX_WINDOWEDGE" : 256, # Variable c_long # "WS_EX_RIGHT" : 4096, # Variable c_long # "WS_EX_NOPARENTNOTIFY" : 4, # Variable c_long # "WS_EX_ACCEPTFILES" : 16, # Variable c_long # "WS_EX_LEFTSCROLLBAR" : 16384, # Variable c_long # "WS_EX_OVERLAPPEDWINDOW" : 768, # Variable c_long # "WS_EX_DLGMODALFRAME" : 1, # Variable c_long << adds Icon # "WS_EX_TRANSPARENT" : 32, # Variable c_long # "WS_EX_STATICEDGE" : 131072, # Variable c_long # "WS_EX_TOPMOST" : 8, # Variable c_long # "WS_EX_LTRREADING" : 0, # Variable c_long # "WS_EX_RIGHTSCROLLBAR" : 0, # Variable c_long # "WS_EX_APPWINDOW" : 262144, # Variable c_long # "WS_EX_CONTROLPARENT" : 65536, # Variable c_long # "WS_EX_LEFT" : 0, # Variable c_long # "WS_EX_PALETTEWINDOW" : 392, # Variable c_long << small font # "WS_EX_CONTEXTHELP" : 1024, # Variable c_long << adds a CH button # "WS_EX_CLIENTEDGE" : 512, # Variable c_long # "WS_EX_RTLREADING" : 8192, # Variable c_long # } # # # # for s in styles: # if dialog.Style(styles[s]): # print "%30s\t0x%-8x"% (s, styles[s]) # # print "-"*20 # for s in exstyles: # if dialog.ExStyle(exstyles[s]): # print "%30s\t0x%-8x"% (s, exstyles[s]) # print dialog.Font().lfHeight, dialog.Font().lfWidth, dialog.Font().lfFaceName # print "STyle 0x%08x EXStyle 0x%08x" % (dialog.Style(), dialog.ExStyle()) # print "please type the 
style to set/unset" # typed = "" # while typed.lower() != "x": # typed = raw_input() # # if typed in exstyles: # old = dlg.ExStyle() # new = dlg.ExStyle() ^ exstyles[typed] # SetWindowLong(dlg.handle, GWL_EXSTYLE, c_long(new)) # print "%0x %0x %0x"% (old, new, exstyles[typed]) # SetWindowLong(dlg.handle, GWL_STYLE, dlg.Style() ^268435456) # SetWindowLong(dlg.handle, GWL_STYLE, dlg.Style() ^268435456) # SendMessage(dlg.handle, WM_PAINT, 0, 0) # SetForegroundWindow(dlg.handle) # # # if typed in styles: # old = dlg.Style() # new = dlg.Style() ^ styles[typed] # SetWindowLong(dlg.handle, GWL_STYLE, c_long(new)) # print "%0x %0x %0x"% (old, new, styles[typed]) # SetWindowLong(dlg.handle, GWL_STYLE, dlg.Style() ^268435456) # SetWindowLong(dlg.handle, GWL_STYLE, dlg.Style() ^268435456) # SendMessage(dlg.handle, WM_PAINT, 0, 0) # SetForegroundWindow(dlg.handle) #dialog = ParentWindow(dlg.handle) # # ##==================================================================== #from SendInput import TypeKeys, PressKey, LiftKey, TypeKey, VK_MENU, \ # VK_SHIFT, VK_BACK, VK_DOWN, VK_LEFT # # #def SendText(text): # # # write the text passed in # TypeKeys(text) # # # press shift # PressKey(VK_SHIFT) # # lowercase 'a' # #import pm # #pm.set_trace() # toType = (VK_LEFT,) * 13 # TypeKeys(toType) # # # unpress shift # LiftKey(VK_SHIFT) # # # PressKey(VK_MENU) # TypeKey('F') # LiftKey(VK_MENU) # # # TypeKeys((VK_DOWN,)*4) # # #==================================================================== #class Menuitem(object): # def __init__(self, item): # for attr in item.keys(): # self.__dict__["_%s_"%attr] = item[attr] # # self.__dict__.setdefault("_MenuItems_", []) # # def __getattr__(self, key): # return getattr(MenuWrapper(self._MenuItems_), key) # # # ##==================================================================== #class MenuWrapper(object): # def __init__(self, items): # # clean up the existing menuItem attributes # # and set them # self.__items = items # # self.__texts = [item['Text'] for item in self.__items] # # # def __getattr__(self, key): # # item = find_best_match(key, self._texts_, self.__items) # # return item # # ##==================================================================== #def MenuSelect(ctrl, menupath, menu_items): # # id = FindMenu(menupath, menu_items) # # #print ctrl['MenuItems'] # APIFuncs.PostMessage(ctrl.handle, win32defines.WM_COMMAND, id, 0) # # # # ##==================================================================== #class Dialog2(controls.HwndWrapper.HwndWrapper): # #---------------------------------------------------------------- # def __init__(self, title = None, class_ = None, timeout = 1, handle = None): # # if not handle: # # handle = FindDialog(title, testClass = class_) # waited = 0 # while not handle and waited <= timeout: # time.sleep(.1) # handle = FindDialog(title, testClass = class_) # waited += .1 # # if not handle: # raise WindowNotFound("Window not found") # # super(Dialog2, self).__init__(handle) # # self.controls = [self, ] # self.controls.extend(self.Children) # # controlactions.add_actions(self) # # #self._build_control_id_map() # # self.ctrl_texts = [ctrl.Text or ctrl.FriendlyClassName for ctrl in self.controls] # # # we need to handle controls where the default text is not that interesting e.g. 
# # edit boxes # # # #---------------------------------------------------------------- # def __getattr__(self, to_find): # # waited = 0 # while waited <= 1: # try: # #if "Dialog" in self.ctrl_texts: # # print "&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&" # # print self.ctrl_texts, self.Class, self.Children # ctrl = find_best_match(to_find, self.ctrl_texts, self.controls) # return controlactions.add_actions(ctrl) # except WindowNotFound: # waited += .1 # # print self # print "failed to find %s in %s" % (to_find, self.ctrl_texts) # # raise # # # #---------------------------------------------------------------- # def MenuSelect(self, path): # # # item_id = FindMenu(self.MenuItems, path) # #menu_items = MenuWrapper(self.MenuItems) # # #item_id = FindMenu(menu_items, path) # # #print ctrl['MenuItems'] # self.PostMessage(win32defines.WM_COMMAND, item_id) # # # # ##==================================================================== #def TestNotepad(): # # try: # notepad = Dialog2(title = "^.*Notepad.*", class_ = "Notepad") # except WindowNotFound: # os.system("start notepad") # time.sleep(.1) # notepad = Dialog2(title = "^.*Notepad.*", class_ = "Notepad") # # # #print notepad.handle # ## notepad.SendKeys("testing") ## notepad.edit.SendKeys("Here is so\\nme tëÿext%H") ## notepad.SendKeys("{DOWN}{ENTER}") ## #notepad.SendKeys("a") ## ## # need to get active window!! ## notepad.SendKeys("{ESC}") ## notepad.SendKeys("%E") # if "1" in sys.argv: # # Select that menu item # notepad.MenuSelect("File->Page Setup") # # # find the dialog # page_setup = Dialog2(title = "Page Setup") # # edit = page_setup.Edit1 # edit.TypeKeys("{HOME}+{END}{BKSP}23") # # # page_setup.Combo1.Select(5) # time.sleep(1) # # page_setup.Combo1.Select("Tabloid") # time.sleep(1) # # # click the printer button # page_setup.Printer.Click() # # dlg = Dialog2("^Page Setup").Properties.Click() # # Dialog2(".*Document Properties").Advanced.Click() # # Dialog2(".*Advanced Options").Cancel.Click() # # Dialog2(".*Document Properties").cancel.Click() # # Dialog2("^Page Setup").cancel.Click() # # # dialog doesn't go away because 23 that we typed is 'wrong' # Dialog2(title = "^Page Setup").ok.Click() # # # this is teh message box # Dialog2(title = "^Page Setup").ok.Click() # # # dialog doesn't go away because 23 that we typed is 'wrong' # Dialog2(title = "^Page Setup").cancel.Click() # # if "2" in sys.argv: # # Select that menu item # notepad.MenuSelect("Format->Font") # # font_dlg = Dialog2(title = "^Font$") # # font_dlg.combobox2.Select(3) # time.sleep(2) # # Dialog2(title = "^Font$").OK.Click() # # if "3" in sys.argv: # notepad.Edit1.Select(1,4) # time.sleep(2) # # print notepad.edit1.SelectionIndices # # if "4" in sys.argv: # # raise "NotWorkingYet" # edit = notepad.Edit1 # print edit.Rectangle # # edit.PressMouse(coords = (0,0)) # edit.MoveMouse(coords = (400, 400)) # edit.ReleaseMouse(coords = (400,400)) # # if "5" in sys.argv: # # edit = notepad.Edit1 # # edit.DoubleClick(coords = (1290,1290)) # # # ##==================================================================== #def test(): # # # some some normal dailogs # if 1: # testStrings = ["Combo", "Combo2", "ComboBox", "blah blah", "test" ,"hex" ,"matchwhole" ,"regularExp" ,"wrapsearch" ,"wrap" ,"inalldocs" ,"extend_sel" ,"Find next" ,"markall" ,"up" ,"down" ,"direct" ,"conds"] # else: # testStrings = ["blah blah", "first", "from", "from0", "from001", "from2", "from0000003", "from3", "insensitive", "delduplicate", "charCodeOrder"] # # # item_texts = [ctrl.Text or ctrl.FriendlyClassName 
for ctrl in ctrls] # # missedMatches = [] # for test in testStrings: # # try: # ctrl = find_best_match(test, item_texts, ctrls) # print "%15s %15s %-20s %s"% (test, ctrl.FriendlyClassName, `ctrl.Text[:20]`, str(ctrl.Rectangle)) # except IndexError, e: # missedMatches.append(test) # # if missedMatches: # print "\nNo Matches for: " + ", ".join(missedMatches) # # # #if __name__ == "__main__": # TestNotepad() #print "\n\nMenuTesting" #missedMatches = [] #try: # print MenuWrapper(ctrls[0].MenuItems).File.PageSetup._Text_ #except IndexError, e: # missedMatches.append(test) # #if missedMatches: # print "\nNo Matches for: " + ", ".join(missedMatches) # # #for ctrl in ctrls: # print CtrlAccessName(ctrl) # # if ctrl.Class in ('ComboBox'): # # # candidates = [] # # find controls that are to it's left # for ctrl2 in ctrls: # # if this ctrl has a top or bottom between # # other ctrl top and bottom # # if \ # (((ctrl2.Rectangle.top >= ctrl.Rectangle.top and \ # ctrl2.Rectangle.top < ctrl.Rectangle.bottom) or \ # (ctrl2.Rectangle.bottom > ctrl.Rectangle.top and \ # ctrl2.Rectangle.bottom <= ctrl.Rectangle.bottom)) and\ # ctrl2.Rectangle.left < ctrl.Rectangle.left) \ # or \ # (((ctrl2.Rectangle.right >= ctrl.Rectangle.left and \ # ctrl2.Rectangle.right < ctrl.Rectangle.bottom) or \ # (ctrl2.Rectangle.bottom > ctrl.Rectangle.top and \ # ctrl2.Rectangle.bottom <= ctrl.Rectangle.bottom)) and\ # ctrl2.Rectangle.left < ctrl.Rectangle.left) \ # : # # # # # # candidates.append(ctrl2) # # # # #for candidate in cadidates: # # print "%18s - 20%s" % (candidate.Class, "'%s'"%candidate.Title), CtrlAccessName(candidate) # # # # #if ctrl2.Rectangle.top >= ctrl.Rectangle.top <= ctrl2.Rectangle.bottom or \ # # ctrl2.Rectangle.bottom >= ctrl.Rectangle.top # # # #if ctrl2.Rectangle.top # # # ##import pprint ##pprint.pprint(ctrls) # # how should we read in the XML file # NOT USING MS Components (requirement on machine) # maybe using built in XML # maybe using elementtree # others?
""" Test models for the multilingual library. # Note: the to_str() calls in all the tests are here only to make it # easier to test both pre-unicode and current Django. >>> from testproject.utils import to_str # make sure the settings are right >>> from multilingual.languages import LANGUAGES >>> LANGUAGES [['en', 'English'], ['pl', 'Polish'], ['zh-cn', 'Simplified Chinese']] >>> from multilingual import set_default_language >>> from django.db.models import Q >>> set_default_language(1) ### Check the table names >>> Category._meta.translation_model._meta.db_table 'category_language' >>> Article._meta.translation_model._meta.db_table 'articles_article_translation' ### Create the test data # Check both assigning via the proxy properties and set_* functions >>> c = Category() >>> c.name_en = 'category 1' >>> c.name_pl = 'kategoria 1' >>> c.save() >>> c = Category() >>> c.set_name('category 2', 'en') >>> c.set_name('kategoria 2', 'pl') >>> c.save() ### See if the test data was saved correctly ### Note: first object comes from the initial fixture. >>> c = Category.objects.all().order_by('id')[1] >>> to_str((c.name, c.get_name(1), c.get_name(2))) ('category 1', 'category 1', 'kategoria 1') >>> c = Category.objects.all().order_by('id')[2] >>> to_str((c.name, c.get_name(1), c.get_name(2))) ('category 2', 'category 2', 'kategoria 2') ### Check translation changes. ### Make sure the name and description properties obey ### set_default_language. >>> c = Category.objects.all().order_by('id')[1] # set language: pl >>> set_default_language(2) >>> to_str((c.name, c.get_name(1), c.get_name(2))) ('kategoria 1', 'category 1', 'kategoria 1') >>> c.name = 'kat 1' >>> to_str((c.name, c.get_name(1), c.get_name(2))) ('kat 1', 'category 1', 'kat 1') # set language: en >>> set_default_language('en') >>> c.name = 'cat 1' >>> to_str((c.name, c.get_name(1), c.get_name(2))) ('cat 1', 'cat 1', 'kat 1') >>> c.save() # Read the entire Category objects from the DB again to see if # everything was saved correctly. >>> c = Category.objects.all().order_by('id')[1] >>> to_str((c.name, c.get_name('en'), c.get_name('pl'))) ('cat 1', 'cat 1', 'kat 1') >>> c = Category.objects.all().order_by('id')[2] >>> to_str((c.name, c.get_name('en'), c.get_name('pl'))) ('category 2', 'category 2', 'kategoria 2') ### Check ordering >>> set_default_language(1) >>> to_str([c.name for c in Category.objects.all().order_by('name_en')]) ['Fixture category', 'cat 1', 'category 2'] ### Check ordering # start with renaming one of the categories so that the order actually # depends on the default language >>> set_default_language(1) >>> c = Category.objects.get(name='cat 1') >>> c.name = 'zzz cat 1' >>> c.save() >>> to_str([c.name for c in Category.objects.all().order_by('name_en')]) ['Fixture category', 'category 2', 'zzz cat 1'] >>> to_str([c.name for c in Category.objects.all().order_by('name')]) ['Fixture category', 'category 2', 'zzz cat 1'] >>> to_str([c.name for c in Category.objects.all().order_by('-name')]) ['zzz cat 1', 'category 2', 'Fixture category'] >>> set_default_language(2) >>> to_str([c.name for c in Category.objects.all().order_by('name')]) ['Fixture kategoria', 'kat 1', 'kategoria 2'] >>> to_str([c.name for c in Category.objects.all().order_by('-name')]) ['kategoria 2', 'kat 1', 'Fixture kategoria'] ### Check filtering # Check for filtering defined by Q objects as well. 
This is a recent # improvement: the translation fields are being handled by an # extension of lookup_inner instead of overridden # QuerySet._filter_or_exclude >>> set_default_language('en') >>> to_str([c.name for c in Category.objects.all().filter(name__contains='2')]) ['category 2'] >>> set_default_language('en') >>> to_str([c.name for c in Category.objects.all().filter(Q(name__contains='2'))]) ['category 2'] >>> set_default_language(1) >>> to_str([c.name for c in ... Category.objects.all().filter(Q(name__contains='2')|Q(name_pl__contains='kat'))]) ['Fixture category', 'zzz cat 1', 'category 2'] >>> set_default_language(1) >>> to_str([c.name for c in Category.objects.all().filter(name_en__contains='2')]) ['category 2'] >>> set_default_language(1) >>> to_str([c.name for c in Category.objects.all().filter(Q(name_pl__contains='kat'))]) ['Fixture category', 'zzz cat 1', 'category 2'] >>> set_default_language('pl') >>> to_str([c.name for c in Category.objects.all().filter(name__contains='k')]) ['Fixture kategoria', 'kat 1', 'kategoria 2'] >>> set_default_language('pl') >>> to_str([c.name for c in Category.objects.all().filter(Q(name__contains='kategoria'))]) ['Fixture kategoria', 'kategoria 2'] ### Check specifying query set language >>> c_en = Category.objects.all().for_language('en') >>> c_pl = Category.objects.all().for_language(2) # both ID and code work here >>> to_str(c_en.get(name__contains='1').name) 'zzz cat 1' >>> to_str(c_pl.get(name__contains='1').name) 'kat 1' >>> to_str([c.name for c in c_en.order_by('name')]) ['Fixture category', 'category 2', 'zzz cat 1'] >>> to_str([c.name for c in c_pl.order_by('-name')]) ['kategoria 2', 'kat 1', 'Fixture kategoria'] >>> c = c_en.get(id=2) >>> c.name = 'test' >>> to_str((c.name, c.name_en, c.name_pl)) ('test', 'test', 'kat 1') >>> c = c_pl.get(id=2) >>> c.name = 'test' >>> to_str((c.name, c.name_en, c.name_pl)) ('test', 'zzz cat 1', 'test') ### Check filtering spanning more than one model >>> set_default_language(1) >>> cat_1 = Category.objects.get(name='zzz cat 1') >>> cat_2 = Category.objects.get(name='category 2') >>> a = Article(category=cat_1) >>> a.set_title('article 1', 1) >>> a.set_title('artykul 1', 2) >>> a.set_contents('contents 1', 1) >>> a.set_contents('zawartosc 1', 1) >>> a.save() >>> a = Article(category=cat_2) >>> a.set_title('article 2', 1) >>> a.set_title('artykul 2', 2) >>> a.set_contents('contents 2', 1) >>> a.set_contents('zawartosc 2', 1) >>> a.save() >>> to_str([a.title for a in Article.objects.filter(category=cat_1)]) ['article 1'] >>> to_str([a.title for a in Article.objects.filter(category__name=cat_1.name)]) ['article 1'] >>> to_str([a.title for a in Article.objects.filter(Q(category__name=cat_1.name)|Q(category__name_pl__contains='2')).order_by('-title')]) ['article 2', 'article 1'] ### Test the creation of new objects using keywords passed to the ### constructor >>> set_default_language(2) >>> c_n = Category.objects.create(name_en='new category', name_pl='nowa kategoria') >>> to_str((c_n.name, c_n.name_en, c_n.name_pl)) ('nowa kategoria', 'new category', 'nowa kategoria') >>> c_n.save() >>> c_n2 = Category.objects.get(name_en='new category') >>> to_str((c_n2.name, c_n2.name_en, c_n2.name_pl)) ('nowa kategoria', 'new category', 'nowa kategoria') >>> set_default_language(2) >>> c_n3 = Category.objects.create(name='nowa kategoria 2') >>> to_str((c_n3.name, c_n3.name_en, c_n3.name_pl)) ('nowa kategoria 2', None, 'nowa kategoria 2') ######################################## ###### Check if the admin behaviour for 
categories with incomplete translations >>> from django.contrib.auth.models import User >>> User.objects.create_superuser('test', 'test_email', 'test_password') and None >>> from django.test.client import Client >>> c = Client() >>> c.login(username='test', password='test_password') True # create a category with only 2 translations, skipping the # first language >>> resp = c.post('/admin/articles/category/add/', ... {'creator': 1, ... 'translations-TOTAL_FORMS': '3', ... 'translations-INITIAL_FORMS': '0', ... 'translations-0-language_id': '1', ... 'translations-1-language_id': '2', ... 'translations-2-language_id': '3', ... 'translations-1-name': 'pl name', ... 'translations-2-name': 'zh-cn name', ... }) >>> resp.status_code 302 >>> cat = Category.objects.order_by('-id')[0] >>> cat.name_en >>> cat.name_pl u'pl name' >>> cat.name_zh_cn u'zh-cn name' >>> cat.translations.count() 2 """
"""Stuff to parse Sun and NeXT audio files. An audio file consists of a header followed by the data. The structure of the header is as follows. +---------------+ | magic word | +---------------+ | header size | +---------------+ | data size | +---------------+ | encoding | +---------------+ | sample rate | +---------------+ | # of channels | +---------------+ | info | | | +---------------+ The magic word consists of the 4 characters '.snd'. Apart from the info field, all header fields are 4 bytes in size. They are all 32-bit unsigned integers encoded in big-endian byte order. The header size really gives the start of the data. The data size is the physical size of the data. From the other parameters the number of frames can be calculated. The encoding gives the way in which audio samples are encoded. Possible values are listed below. The info field currently consists of an ASCII string giving a human-readable description of the audio file. The info field is padded with NUL bytes to the header size. Usage. Reading audio files: f = sunau.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). When the setpos() and rewind() methods are not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' or 'ULAW') getcompname() -- returns human-readable version of compression type ('not compressed' matches 'NONE') getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- returns None (for compatibility with the aifc module) getmark(id) -- raises an error since the mark does not exist (for compatibility with the aifc module) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell() and the position given to setpos() are compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing audio files: f = sunau.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). This returns an instance of a class with the following public methods: setnchannels(n) -- set the number of channels setsampwidth(n) -- set the sample width setframerate(n) -- set the frame rate setnframes(n) -- set the number of frames setcomptype(type, name) -- set the compression type and the human-readable compression type setparams(tuple)-- set all parameters at once tell() -- return current position in output file writeframesraw(data) -- write audio frames without pathing up the file header writeframes(data) -- write audio frames and patch up the file header close() -- patch up the file header and close the output file You should set the parameters before the first writeframesraw or writeframes. The total number of frames does not need to be set, but when it is set to the correct value, the header does not have to be patched up. 
It is best to first set all parameters, perhaps excepting the compression type, and then write audio frames using writeframesraw. When all frames have been written, either call writeframes('') or close() to patch up the sizes in the header. The close() method is called automatically when the class instance is destroyed. """
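As a quick orientation to the reading API described above, here is a minimal sketch (the file name 'example.au' is hypothetical; everything else is the module interface as documented):

    import sunau

    # Open a hypothetical Sun/NeXT audio file for reading.
    f = sunau.open('example.au', 'r')
    print f.getnchannels(), f.getsampwidth(), f.getframerate()
    # Read every frame, then make the instance unusable.
    data = f.readframes(f.getnframes())
    f.close()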
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <lkcl@samba.org> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
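To make the BaseRequestHandler/mix-in combination described above concrete, here is a minimal sketch of a threading TCP echo service (Python 2 module name SocketServer; the address, port, and echo behaviour are illustrative assumptions):

    import SocketServer

    class EchoHandler(SocketServer.BaseRequestHandler):
        # handle() is called once per connection; for stream servers
        # self.request is the connected socket.
        def handle(self):
            data = self.request.recv(1024)
            self.request.sendall(data)

    # The mix-in must come first, as noted above, so that its
    # request-dispatch method overrides the one in TCPServer.
    class ThreadingTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
        pass

    server = ThreadingTCPServer(('localhost', 8000), EchoHandler)
    server.serve_forever()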
""" ===================================== Structured Arrays (and Record Arrays) ===================================== Introduction ============ Numpy provides powerful capabilities to create arrays of structs or records. These arrays permit one to manipulate the data by the structs or by fields of the struct. A simple example will show what is meant.: :: >>> x = np.zeros((2,),dtype=('i4,f4,a10')) >>> x[:] = [(1,2.,'Hello'),(2,3.,"World")] >>> x array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')]) Here we have created a one-dimensional array of length 2. Each element of this array is a record that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less. If we index this array at the second position we get the second record: :: >>> x[1] (2,3.,"World") Conveniently, one can access any field of the array by indexing using the string that names that field. In this case the fields have received the default names 'f0', 'f1' and 'f2'. :: >>> y = x['f1'] >>> y array([ 2., 3.], dtype=float32) >>> y[:] = 2*y >>> y array([ 4., 6.], dtype=float32) >>> x array([(1, 4.0, 'Hello'), (2, 6.0, 'World')], dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')]) In these examples, y is a simple float array consisting of the 2nd field in the record. But, rather than being a copy of the data in the structured array, it is a view, i.e., it shares exactly the same memory locations. Thus, when we updated this array by doubling its values, the structured array shows the corresponding values as doubled as well. Likewise, if one changes the record, the field view also changes: :: >>> x[1] = (-1,-1.,"Master") >>> x array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')], dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')]) >>> y array([ 4., -1.], dtype=float32) Defining Structured Arrays ========================== One defines a structured array through the dtype object. There are **several** alternative ways to define the fields of a record. Some of these variants provide backward compatibility with Numeric, numarray, or another module, and should not be used except for such purposes. These will be so noted. One specifies record structure in one of four alternative ways, using an argument (as supplied to a dtype function keyword or a dtype object constructor itself). This argument must be one of the following: 1) string, 2) tuple, 3) list, or 4) dictionary. Each of these is briefly described below. 1) String argument (as used in the above examples). In this case, the constructor expects a comma-separated list of type specifiers, optionally with extra shape information. The type specifiers can take 4 different forms: :: a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n> (representing bytes, ints, unsigned ints, floats, complex and fixed length strings of specified byte lengths) b) int8,...,uint8,...,float16, float32, float64, complex64, complex128 (this time with bit sizes) c) older Numeric/numarray type specifications (e.g. Float32). Don't use these in new code! d) Single character type specifiers (e.g H for unsigned short ints). Avoid using these unless you must. Details can be found in the Numpy book These different styles can be mixed within the same string (but why would you want to do that?). Furthermore, each type specifier can be prefixed with a repetition number, or a shape. In these cases an array element is created, i.e., an array within a record. That array is still referred to as a single field. 
An example: :: >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64') >>> x array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])], dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))]) Using strings to define the record structure precludes naming the fields in the original definition. The names can be changed as shown later, however. 2) Tuple argument: The only relevant tuple case that applies to record structures is when a structure is mapped to an existing data type. This is done by pairing, in a tuple, the existing data type with a matching dtype definition (using any of the variants being described here). As an example (using a definition using a list, so see 3) for further details): :: >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')])) >>> x array([0, 0, 0]) >>> x['r'] array([0, 0, 0], dtype=uint8) In this case, an array is produced that looks and acts like a simple int32 array, but also has definitions for fields that use only one byte of the int32 (a bit like Fortran equivalencing). 3) List argument: In this case the record structure is defined with a list of tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field ('' is permitted), 2) the type of the field, and 3) the shape (optional). For example:: >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) >>> x array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])], dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))]) 4) Dictionary argument: two different forms are permitted. The first consists of a dictionary with two required keys ('names' and 'formats'), each having an equal-sized list of values. The format list contains any type/shape specifier allowed in other contexts. The names must be strings. There are two optional keys: 'offsets' and 'titles'. Each must be a list matching the two required keys in length, where 'offsets' contains an integer offset for each field and 'titles' contains a metadata object for each field (these do not have to be strings; the value None is permitted). As an example: :: >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[('col1', '>i4'), ('col2', '>f4')]) The other dictionary form permitted is a dictionary of name keys with tuple values specifying type, offset, and an optional title. :: >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')]) Accessing and modifying field names =================================== The field names are an attribute of the dtype object defining the record structure. For the last example: :: >>> x.dtype.names ('col1', 'col2') >>> x.dtype.names = ('x', 'y') >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')]) >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names <type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2 Accessing field titles ==================================== The field titles provide a standard place to put associated info for fields. They do not have to be strings.
:: >>> x.dtype.fields['x'][2] 'title 1' Accessing multiple fields at once ==================================== You can access multiple fields at once using a list of field names: :: >>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))], dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) Notice that `x` is created with a list of tuples. :: >>> x[['x','y']] array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)], dtype=[('x', '<f4'), ('y', '<f4')]) >>> x[['x','value']] array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]), (1.0, [[2.0, 6.0], [2.0, 6.0]])], dtype=[('x', '<f4'), ('value', '<f4', (2, 2))]) The fields are returned in the order they are asked for: :: >>> x[['y','x']] array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)], dtype=[('y', '<f4'), ('x', '<f4')]) Filling structured arrays ========================= Structured arrays can be filled by field or row by row. :: >>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')]) >>> arr['var1'] = np.arange(5) If you fill it in row by row, it takes a tuple (but not a list or array!):: >>> arr[0] = (10,20) >>> arr array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)], dtype=[('var1', '<f8'), ('var2', '<f8')]) More information ==================================== You can find some more information on recarrays and structured arrays (including the difference between the two) `here <http://www.scipy.org/Cookbook/Recarray>`_. """
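On the recarray side of that distinction, a minimal sketch of viewing a structured array as a record array, which additionally allows attribute access to fields (the field names reuse the filling example above):

    import numpy as np

    arr = np.zeros((5,), dtype=[('var1', 'f8'), ('var2', 'f8')])
    rec = arr.view(np.recarray)   # same memory, recarray interface
    rec.var1 = np.arange(5)       # attribute access instead of arr['var1']
    print rec.var1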
""" ============= Miscellaneous ============= IEEE 754 Floating Point Special Values -------------------------------------- Special values defined in numpy: nan, inf, NaNs can be used as a poor-man's mask (if you don't care what the original value was) Note: cannot use equality to test NaNs. E.g.: :: >>> myarr = np.array([1., 0., np.nan, 3.]) >>> np.where(myarr == np.nan) >>> np.nan == np.nan # is always False! Use special numpy functions instead. False >>> myarr[myarr == np.nan] = 0. # doesn't work >>> myarr array([ 1., 0., NaN, 3.]) >>> myarr[np.isnan(myarr)] = 0. # use this instead find >>> myarr array([ 1., 0., 0., 3.]) Other related special value functions: :: isinf(): True if value is inf isfinite(): True if not nan or inf nan_to_num(): Map nan to 0, inf to max float, -inf to min float The following corresponds to the usual functions except that nans are excluded from the results: :: nansum() nanmax() nanmin() nanargmax() nanargmin() >>> x = np.arange(10.) >>> x[3] = np.nan >>> x.sum() nan >>> np.nansum(x) 42.0 How numpy handles numerical exceptions -------------------------------------- The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow`` and ``'ignore'`` for ``underflow``. But this can be changed, and it can be set individually for different kinds of exceptions. The different behaviors are: - 'ignore' : Take no action when the exception occurs. - 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module). - 'raise' : Raise a `FloatingPointError`. - 'call' : Call a function specified using the `seterrcall` function. - 'print' : Print a warning directly to ``stdout``. - 'log' : Record error in a Log object specified by `seterrcall`. These behaviors can be set for all kinds of errors or specific ones: - all : apply to all numeric exceptions - invalid : when NaNs are generated - divide : divide by zero (for integers as well!) - overflow : floating point overflows - underflow : floating point underflows Note that integer divide-by-zero is handled by the same machinery. These behaviors are set on a per-thread basis. Examples -------- :: >>> oldsettings = np.seterr(all='warn') >>> np.zeros(5,dtype=np.float32)/0. invalid value encountered in divide >>> j = np.seterr(under='ignore') >>> np.array([1.e-100])**10 >>> j = np.seterr(invalid='raise') >>> np.sqrt(np.array([-1.])) FloatingPointError: invalid value encountered in sqrt >>> def errorhandler(errstr, errflag): ... print "saw stupid error!" >>> np.seterrcall(errorhandler) <function err_handler at 0x...> >>> j = np.seterr(all='call') >>> np.zeros(5, dtype=np.int32)/0 FloatingPointError: invalid value encountered in divide saw stupid error! >>> j = np.seterr(**oldsettings) # restore previous ... # error-handling settings Interfacing to C ---------------- Only a survey of the choices. Little detail on how each works. 1) Bare metal, wrap your own C-code manually. - Plusses: - Efficient - No dependencies on other tools - Minuses: - Lots of learning overhead: - need to learn basics of Python C API - need to learn basics of numpy C API - need to learn how to handle reference counting and love it. - Reference counting often difficult to get right. - getting it wrong leads to memory leaks, and worse, segfaults - API will change for Python 3.0! 
2) Cython - Plusses: - avoid learning C APIs - no dealing with reference counting - can code in pseudo python and generate C code - can also interface to existing C code - should shield you from changes to Python C API - has become the de-facto standard within the scientific Python community - fast indexing support for arrays - Minuses: - Can write code in non-standard form which may become obsolete - Not as flexible as manual wrapping 3) ctypes - Plusses: - part of Python standard library - good for interfacing to existing shareable libraries, particularly Windows DLLs - avoids API/reference counting issues - good numpy support: arrays have all these in their ctypes attribute: :: a.ctypes.data a.ctypes.get_strides a.ctypes.data_as a.ctypes.shape a.ctypes.get_as_parameter a.ctypes.shape_as a.ctypes.get_data a.ctypes.strides a.ctypes.get_shape a.ctypes.strides_as - Minuses: - can't use for writing code to be turned into C extensions, only a wrapper tool. 4) SWIG (automatic wrapper generator) - Plusses: - around a long time - multiple scripting language support - C++ support - Good for wrapping large (many functions) existing C libraries - Minuses: - generates lots of code between Python and the C code - can cause performance problems that are nearly impossible to optimize out - interface files can be hard to write - doesn't necessarily avoid reference counting issues or needing to know APIs 5) scipy.weave - Plusses: - can turn many numpy expressions into C code - dynamic compiling and loading of generated C code - can embed pure C code in Python module and have weave extract, generate interfaces and compile, etc. - Minuses: - Future very uncertain: it's the only part of Scipy not ported to Python 3 and is effectively deprecated in favor of Cython. 6) Psyco - Plusses: - Turns pure python into efficient machine code through jit-like optimizations - very fast when it optimizes well - Minuses: - Only on Intel (Windows?) - Doesn't do much for numpy? Interfacing to Fortran: ----------------------- The clear choice to wrap Fortran code is `f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_. Pyfort is an older alternative, but not supported any longer. Fwrap is a newer project that looked promising but isn't being developed any longer. Interfacing to C++: ------------------- 1) Cython 2) CXX 3) Boost.python 4) SWIG 5) SIP (used mainly in PyQT) """
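As a small illustration of the ctypes attribute listed above, a sketch of the read-only side (what a foreign C library would do with the address is out of scope here):

    import numpy as np

    a = np.arange(6, dtype=np.float64)
    print a.ctypes.data        # integer address of the first element
    print a.ctypes.shape[0]    # shape exposed as a ctypes array
    print a.ctypes.strides[0]  # strides in bytes, likewise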
""" Convert an RDF graph into an image for displaying in the notebook, via GraphViz It has two parts: - conversion from rdf into dot language. Code based in rdflib.utils.rdf2dot - rendering of the dot graph into an image. Code based on ipython-hierarchymagic, which in turn bases it from Sphinx See https://github.com/tkf/ipython-hierarchymagic License for RDFLIB ------------------ Copyright (c) 2002-2015, RDFLib Team See CONTRIBUTORS and http://github.com/RDFLib/rdflib All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of NAME nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. License for ipython-hierarchymagic ---------------------------------- ipython-hierarchymagic is licensed under the term of the Simplified BSD License (BSD 2-clause license), as follows: Copyright (c) 2012 NAME rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
License for Sphinx ------------------ The `run_dot` function and the `HierarchyMagic._class_name` method in this extension are heavily based on the Sphinx code `sphinx.ext.graphviz.render_dot` and `InheritanceGraph.class_name`. The copyright notice for Sphinx can be found below. Copyright (c) 2007-2011 by the Sphinx team (see AUTHORS file). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """
# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # 2014-12-02 ch/doko Add workaround for gzip bomb vulnerability # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME Lundh. # # EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME Lundh # # By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
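For orientation, a minimal sketch of the client interface this header describes (the endpoint URL and the echo method are hypothetical; ServerProxy marshals each call into an XML-RPC request):

    import xmlrpclib

    # Connect to a hypothetical XML-RPC endpoint.
    server = xmlrpclib.ServerProxy('http://localhost:8000/RPC2')
    # Remote methods are invoked like ordinary Python methods.
    print server.echo('hello')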
"""This module tests SyntaxErrors. Here's an example of the sort of thing that is tested. >>> def f(x): ... global x Traceback (most recent call last): SyntaxError: name 'x' is local and global (<doctest test.test_syntax[0]>, line 1) The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and symtable.c. The parser itself outlaws a lot of invalid syntax. None of these errors are tested here at the moment. We should add some tests; since there are infinitely many programs with invalid syntax, we would need to be judicious in selecting some. The compiler generates a synthetic module name for code executed by doctest. Since all the code comes from the same module, a suffix like [1] is appended to the module name, As a consequence, changing the order of tests in this module means renumbering all the errors after it. (Maybe we should enable the ellipsis option for these tests.) In ast.c, syntax errors are raised by calling ast_error(). Errors from set_context(): >>> obj.None = 1 Traceback (most recent call last): File "<doctest test.test_syntax[1]>", line 1 SyntaxError: cannot assign to None >>> None = 1 Traceback (most recent call last): File "<doctest test.test_syntax[2]>", line 1 SyntaxError: cannot assign to None It's a syntax error to assign to the empty tuple. Why isn't it an error to assign to the empty list? It will always raise some error at runtime. >>> () = 1 Traceback (most recent call last): File "<doctest test.test_syntax[3]>", line 1 SyntaxError: can't assign to () >>> f() = 1 Traceback (most recent call last): File "<doctest test.test_syntax[4]>", line 1 SyntaxError: can't assign to function call >>> del f() Traceback (most recent call last): File "<doctest test.test_syntax[5]>", line 1 SyntaxError: can't delete function call >>> a + 1 = 2 Traceback (most recent call last): File "<doctest test.test_syntax[6]>", line 1 SyntaxError: can't assign to operator >>> (x for x in x) = 1 Traceback (most recent call last): File "<doctest test.test_syntax[7]>", line 1 SyntaxError: can't assign to generator expression >>> 1 = 1 Traceback (most recent call last): File "<doctest test.test_syntax[8]>", line 1 SyntaxError: can't assign to literal >>> "abc" = 1 Traceback (most recent call last): File "<doctest test.test_syntax[8]>", line 1 SyntaxError: can't assign to literal >>> `1` = 1 Traceback (most recent call last): File "<doctest test.test_syntax[10]>", line 1 SyntaxError: can't assign to repr If the left-hand side of an assignment is a list or tuple, an illegal expression inside that contain should still cause a syntax error. This test just checks a couple of cases rather than enumerating all of them. >>> (a, "b", c) = (1, 2, 3) Traceback (most recent call last): File "<doctest test.test_syntax[11]>", line 1 SyntaxError: can't assign to literal >>> [a, b, c + 1] = [1, 2, 3] Traceback (most recent call last): File "<doctest test.test_syntax[12]>", line 1 SyntaxError: can't assign to operator >>> a if 1 else b = 1 Traceback (most recent call last): File "<doctest test.test_syntax[13]>", line 1 SyntaxError: can't assign to conditional expression From compiler_complex_args(): >>> def f(None=1): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[14]>", line 1 SyntaxError: cannot assign to None From ast_for_arguments(): >>> def f(x, y=1, z): ... 
pass Traceback (most recent call last): File "<doctest test.test_syntax[15]>", line 1 SyntaxError: non-default argument follows default argument >>> def f(x, None): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[16]>", line 1 SyntaxError: cannot assign to None >>> def f(*None): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[17]>", line 1 SyntaxError: cannot assign to None >>> def f(**None): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[18]>", line 1 SyntaxError: cannot assign to None From ast_for_funcdef(): >>> def None(x): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[19]>", line 1 SyntaxError: cannot assign to None From ast_for_call(): >>> def f(it, *varargs): ... return list(it) >>> L = range(10) >>> f(x for x in L) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(x for x in L, 1) Traceback (most recent call last): File "<doctest test.test_syntax[23]>", line 1 SyntaxError: Generator expression must be parenthesized if not sole argument >>> f((x for x in L), 1) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... i244, i245, i246, i247, i248, i249, i250, i251, i252, ... i253, i254, i255) Traceback (most recent call last): File "<doctest test.test_syntax[25]>", line 1 SyntaxError: more than 255 arguments The actual error case counts positional arguments, keyword arguments, and generator expression arguments separately. This test combines the three. >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ...
i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... (x for x in i244), i245, i246, i247, i248, i249, i250, i251, ... i252=1, i253=1, i254=1, i255=1) Traceback (most recent call last): File "<doctest test.test_syntax[26]>", line 1 SyntaxError: more than 255 arguments >>> f(lambda x: x[0] = 3) Traceback (most recent call last): File "<doctest test.test_syntax[27]>", line 1 SyntaxError: lambda cannot contain assignment The grammar accepts any test (basically, any expression) in the keyword slot of a call site. Test a few different options. >>> f(x()=2) Traceback (most recent call last): File "<doctest test.test_syntax[28]>", line 1 SyntaxError: keyword can't be an expression >>> f(a or b=1) Traceback (most recent call last): File "<doctest test.test_syntax[29]>", line 1 SyntaxError: keyword can't be an expression >>> f(x.y=1) Traceback (most recent call last): File "<doctest test.test_syntax[30]>", line 1 SyntaxError: keyword can't be an expression More set_context(): >>> (x for x in x) += 1 Traceback (most recent call last): File "<doctest test.test_syntax[31]>", line 1 SyntaxError: can't assign to generator expression >>> None += 1 Traceback (most recent call last): File "<doctest test.test_syntax[32]>", line 1 SyntaxError: cannot assign to None >>> f() += 1 Traceback (most recent call last): File "<doctest test.test_syntax[33]>", line 1 SyntaxError: can't assign to function call Test continue in finally in weird combinations. continue in for loop under finally should be ok. >>> def test(): ... try: ... pass ... finally: ... for abc in range(10): ... continue ... print abc >>> test() 9 Start simple, a continue in a finally should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... File "<doctest test.test_syntax[36]>", line 6 SyntaxError: 'continue' not supported inside 'finally' clause This is essentially a continue in a finally which should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... try: ... continue ... except: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[37]>", line 6 SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... File "<doctest test.test_syntax[38]>", line 5 SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... File "<doctest test.test_syntax[39]>", line 6 SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... 
finally: ... try: ... continue ... finally: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[40]>", line 7 SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: pass ... finally: ... try: ... pass ... except: ... continue Traceback (most recent call last): ... File "<doctest test.test_syntax[41]>", line 8 SyntaxError: 'continue' not supported inside 'finally' clause There is one test for a break that is not in a loop. The compiler uses a single data structure to keep track of try-finally and loops, so we need to be sure that a break is actually inside a loop. If it isn't, there should be a syntax error. >>> try: ... print 1 ... break ... print 2 ... finally: ... print 3 Traceback (most recent call last): ... File "<doctest test.test_syntax[42]>", line 3 SyntaxError: 'break' outside loop This should probably raise a better error than a SystemError (or none at all). In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 >>> while 1: ... while 2: ... while 3: ... while 4: ... while 5: ... while 6: ... while 7: ... while 8: ... while 9: ... while 10: ... while 11: ... while 12: ... while 13: ... while 14: ... while 15: ... while 16: ... while 17: ... while 18: ... while 19: ... while 20: ... while 21: ... while 22: ... break Traceback (most recent call last): ... SystemError: too many statically nested blocks This tests assignment-context; there was a bug in Python 2.5 where compiling a complex 'if' (one with 'elif') would fail to notice an invalid suite, leading to spurious errors. >>> if 1: ... x() = 1 ... elif 1: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[44]>", line 2 SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 Traceback (most recent call last): ... File "<doctest test.test_syntax[45]>", line 4 SyntaxError: can't assign to function call >>> if 1: ... x() = 1 ... elif 1: ... pass ... else: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[46]>", line 2 SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 ... else: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[47]>", line 4 SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... pass ... else: ... x() = 1 Traceback (most recent call last): ... File "<doctest test.test_syntax[48]>", line 6 SyntaxError: can't assign to function call >>> f(a=23, a=234) Traceback (most recent call last): ... File "<doctest test.test_syntax[49]>", line 1 SyntaxError: keyword argument repeated >>> del () Traceback (most recent call last): ... File "<doctest test.test_syntax[50]>", line 1 SyntaxError: can't delete () >>> {1, 2, 3} = 42 Traceback (most recent call last): ... File "<doctest test.test_syntax[51]>", line 1 SyntaxError: can't assign to literal Corner-case that used to crash: >>> def f(*xx, **__debug__): pass Traceback (most recent call last): SyntaxError: cannot assign to __debug__ """
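Outside of doctest, any one of these cases can be reproduced with the built-in compile(); a minimal sketch using one of the assignments above:

    # Compile a known-bad statement and inspect the SyntaxError.
    try:
        compile("f() = 1", "<test>", "exec")
    except SyntaxError, err:
        print err.msg   # can't assign to function call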
# # ElementTree # $Id$ # # light-weight XML support for Python 1.5.2 and later. # # history: # 2001-10-20 fl created (from various sources) # 2001-11-01 fl return root from parse method # 2002-02-16 fl sort attributes in lexical order # 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup # 2002-05-01 fl finished TreeBuilder refactoring # 2002-07-14 fl added basic namespace support to ElementTree.write # 2002-07-25 fl added QName attribute support # 2002-10-20 fl fixed encoding in write # 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding # 2002-11-27 fl accept file objects or file names for parse/write # 2002-12-04 fl moved XMLTreeBuilder back to this module # 2003-01-11 fl fixed entity encoding glitch for us-ascii # 2003-02-13 fl added XML literal factory # 2003-02-21 fl added ProcessingInstruction/PI factory # 2003-05-11 fl added tostring/fromstring helpers # 2003-05-26 fl added ElementPath support # 2003-07-05 fl added makeelement factory method # 2003-07-28 fl added more well-known namespace prefixes # 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME 2003-09-04 fl fall back on emulator if ElementPath is not installed # 2003-10-31 fl markup updates # 2003-11-15 fl fixed nested namespace bug # 2004-03-28 fl added XMLID helper # 2004-06-02 fl added default support to findtext # 2004-06-08 fl fixed encoding of non-ascii element/attribute names # 2004-08-23 fl take advantage of post-2.1 expat features # 2005-02-01 fl added iterparse implementation # 2005-03-02 fl fixed iterparse support for pre-2.2 versions # # Copyright (c) 1999-2005 by NAME All rights reserved. # # EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The ElementTree toolkit is # # Copyright (c) 1999-2005 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. # # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
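As a usage sketch for the toolkit this header describes (assuming the standard-library location xml.etree available since Python 2.5; 'example.xml' is a hypothetical input file):

    from xml.etree import ElementTree as ET

    # Parse a file and walk the children of the root element.
    tree = ET.parse('example.xml')
    root = tree.getroot()
    for child in root:
        print child.tag, child.attrib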
""" ============================= Subclassing ndarray in python ============================= Credits ------- This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses. Introduction ------------ Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass. ndarrays and object creation ============================ Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are: #. Explicit constructor call - as in ``MySubClass(params)``. This is the usual route to Python instance creation. #. View casting - casting an existing ndarray as a given subclass #. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See :ref:`new-from-template` for more details The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation. .. _view-casting: View casting ------------ *View casting* is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass: >>> import numpy as np >>> # create a completely useless ndarray subclass >>> class C(np.ndarray): pass >>> # create a standard ndarray >>> arr = np.zeros((3,)) >>> # take a view of it, as our useless subclass >>> c_arr = arr.view(C) >>> type(c_arr) <class 'C'> .. _new-from-template: Creating new from template -------------------------- New instances of an ndarray subclass can also come about by a very similar mechanism to :ref:`view-casting`, when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example: >>> v = c_arr[1:] >>> type(v) # the view is of type 'C' <class 'C'> >>> v is c_arr # but it's a new instance False The slice is a *view* onto the original ``c_arr`` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original. There are other points in the use of ndarrays where we need such views, such as copying arrays (``c_arr.copy()``), creating ufunc output arrays (see also :ref:`array-wrap`), and reducing methods (like ``c_arr.mean()``. Relationship of view casting and new-from-template -------------------------------------------------- These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, :ref:`view-casting` means you have created a new instance of your array type from any potential subclass of ndarray. :ref:`new-from-template` means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass. Implications for subclassing ---------------------------- If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also :ref:`view-casting` or :ref:`new-from-template`. Numpy has the machinery to do this, and this machinery that makes subclassing slightly non-standard. 
There are two aspects to the machinery that ndarray uses to support views and new-from-template in subclasses. The first is the use of the ``ndarray.__new__`` method for the main work of object initialization, rather than the more usual ``__init__`` method. The second is the use of the ``__array_finalize__`` method to allow subclasses to clean up after the creation of views and new instances from templates. A brief Python primer on ``__new__`` and ``__init__`` ===================================================== ``__new__`` is a standard Python method, and, if present, is called before ``__init__`` when we create a class instance. See the `python __new__ documentation <http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail. For example, consider the following Python code: .. testcode:: class C(object): def __new__(cls, *args): print 'Cls in __new__:', cls print 'Args in __new__:', args return object.__new__(cls, *args) def __init__(self, *args): print 'type(self) in __init__:', type(self) print 'Args in __init__:', args meaning that we get: >>> c = C('hello') Cls in __new__: <class 'C'> Args in __new__: ('hello',) type(self) in __init__: <class 'C'> Args in __init__: ('hello',) When we call ``C('hello')``, the ``__new__`` method gets its own class as first argument, and the passed argument, which is the string ``'hello'``. After python calls ``__new__``, it usually (see below) calls our ``__init__`` method, with the output of ``__new__`` as the first argument (now a class instance), and the passed arguments following. As you can see, the object can be initialized in the ``__new__`` method or the ``__init__`` method, or both, and in fact ndarray does not have an ``__init__`` method, because all the initialization is done in the ``__new__`` method. Why use ``__new__`` rather than just the usual ``__init__``? Because in some cases, as for ndarray, we want to be able to return an object of some other class. Consider the following: .. testcode:: class D(C): def __new__(cls, *args): print 'D cls is:', cls print 'D args in __new__:', args return C.__new__(C, *args) def __init__(self, *args): # we never get here print 'In D __init__' meaning that: >>> obj = D('hello') D cls is: <class 'D'> D args in __new__: ('hello',) Cls in __new__: <class 'C'> Args in __new__: ('hello',) >>> type(obj) <class 'C'> The definition of ``C`` is the same as before, but for ``D``, the ``__new__`` method returns an instance of class ``C`` rather than ``D``. Note that the ``__init__`` method of ``D`` does not get called. In general, when the ``__new__`` method returns an object of class other than the class in which it is defined, the ``__init__`` method of that class is not called. This is how subclasses of the ndarray class are able to return views that preserve the class type. When taking a view, the standard ndarray machinery creates the new ndarray object with something like:: obj = ndarray.__new__(subtype, shape, ... where ``subtype`` is the subclass. Thus the returned view is of the same class as the subclass, rather than being of class ``ndarray``. That solves the problem of returning views of the same type, but now we have a new problem. The machinery of ndarray can set the class this way, in its standard methods for taking views, but the ndarray ``__new__`` method knows nothing of what we have done in our own ``__new__`` method in order to set attributes, and so on. (Aside - why not call ``obj = subtype.__new__(...`` then?
Because we may not have a ``__new__`` method with the same call signature). The role of ``__array_finalize__`` ================================== ``__array_finalize__`` is the mechanism that numpy provides to allow subclasses to handle the various ways that new instances get created. Remember that subclass instances can come about in these three ways: #. explicit constructor call (``obj = MySubClass(params)``). This will call the usual sequence of ``MySubClass.__new__`` then (if it exists) ``MySubClass.__init__``. #. :ref:`view-casting` #. :ref:`new-from-template` Our ``MySubClass.__new__`` method only gets called in the case of the explicit constructor call, so we can't rely on ``MySubClass.__new__`` or ``MySubClass.__init__`` to deal with the view casting and new-from-template. It turns out that ``MySubClass.__array_finalize__`` *does* get called for all three methods of object creation, so this is where our object creation housekeeping usually goes. * For the explicit constructor call, our subclass will need to create a new ndarray instance of its own class. In practice this means that we, the authors of the code, will need to make a call to ``ndarray.__new__(MySubClass,...)``, or do view casting of an existing array (see below) * For view casting and new-from-template, the equivalent of ``ndarray.__new__(MySubClass,...`` is called, at the C level. The arguments that ``__array_finalize__`` receives differ for the three methods of instance creation above. The following code allows us to look at the call sequences and arguments: .. testcode:: import numpy as np class C(np.ndarray): def __new__(cls, *args, **kwargs): print 'In __new__ with class %s' % cls return np.ndarray.__new__(cls, *args, **kwargs) def __init__(self, *args, **kwargs): # in practice you probably will not need or want an __init__ # method for your subclass print 'In __init__ with class %s' % self.__class__ def __array_finalize__(self, obj): print 'In array_finalize:' print ' self type is %s' % type(self) print ' obj type is %s' % type(obj) Now: >>> # Explicit constructor >>> c = C((10,)) In __new__ with class <class 'C'> In array_finalize: self type is <class 'C'> obj type is <type 'NoneType'> In __init__ with class <class 'C'> >>> # View casting >>> a = np.arange(10) >>> cast_a = a.view(C) In array_finalize: self type is <class 'C'> obj type is <type 'numpy.ndarray'> >>> # Slicing (example of new-from-template) >>> cv = c[:1] In array_finalize: self type is <class 'C'> obj type is <class 'C'> The signature of ``__array_finalize__`` is:: def __array_finalize__(self, obj): ``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our own class (``self``) as well as the object from which the view has been taken (``obj``). As you can see from the output above, the ``self`` is always a newly created instance of our subclass, and the type of ``obj`` differs for the three instance creation methods: * When called from the explicit constructor, ``obj`` is ``None`` * When called from view casting, ``obj`` can be an instance of any subclass of ndarray, including our own. * When called in new-from-template, ``obj`` is another instance of our own subclass, that we might use to update the new ``self`` instance. Because ``__array_finalize__`` is the only method that always sees new instances being created, it is the sensible place to fill in instance defaults for new object attributes, among other tasks. This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------

.. testcode::

  import numpy as np

  class InfoArray(np.ndarray):

      def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                  strides=None, order=None, info=None):
          # Create the ndarray instance of our type, given the usual
          # ndarray input arguments.  This will call the standard
          # ndarray constructor, but return an object of our type.
          # It also triggers a call to InfoArray.__array_finalize__
          obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset,
                                   strides, order)
          # set the new 'info' attribute to the value passed
          obj.info = info
          # Finally, we must return the newly created object:
          return obj

      def __array_finalize__(self, obj):
          # ``self`` is a new object resulting from
          # ndarray.__new__(InfoArray, ...), therefore it only has
          # attributes that the ndarray.__new__ constructor gave it -
          # i.e. those of a standard ndarray.
          #
          # We could have got to the ndarray.__new__ call in 3 ways:
          # From an explicit constructor - e.g. InfoArray():
          #    obj is None
          #    (we're in the middle of the InfoArray.__new__
          #    constructor, and self.info will be set when we return to
          #    InfoArray.__new__)
          if obj is None: return
          # From view casting - e.g. arr.view(InfoArray):
          #    obj is arr
          #    (type(obj) can be InfoArray)
          # From new-from-template - e.g. infoarr[:3]
          #    type(obj) is InfoArray
          #
          # Note that it is here, rather than in the __new__ method,
          # that we set the default value for 'info', because this
          # method sees all creation of default objects - with the
          # InfoArray.__new__ constructor, but also with
          # arr.view(InfoArray).
          self.info = getattr(obj, 'info', None)
          # We do not need to return anything

Using the object looks like this:

>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True

This class isn't very useful, because it has the same constructor as the bare ndarray object, including passing in buffers and shapes and so on. We would probably prefer the constructor to be able to take an already formed ndarray from the usual numpy calls to ``np.array`` and return an object.

Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------

Here is a class that takes a standard ndarray that already exists, casts it to our type, and adds an extra attribute.

.. testcode::

  import numpy as np

  class RealisticInfoArray(np.ndarray):

      def __new__(cls, input_array, info=None):
          # Input array is an already formed ndarray instance
          # We first cast to be our class type
          obj = np.asarray(input_array).view(cls)
          # add the new attribute to the created instance
          obj.info = info
          # Finally, we must return the newly created object:
          return obj

      def __array_finalize__(self, obj):
          # see InfoArray.__array_finalize__ for comments
          if obj is None: return
          self.info = getattr(obj, 'info', None)

So:

>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
.. _array-wrap:

``__array_wrap__`` for ufuncs
-----------------------------

``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy functions, to allow a subclass to set the type of the return value and update attributes and metadata. Let's show how this works with an example. First we make the same subclass as above, but with a different name and some print statements:

.. testcode::

  import numpy as np

  class MySubClass(np.ndarray):

      def __new__(cls, input_array, info=None):
          obj = np.asarray(input_array).view(cls)
          obj.info = info
          return obj

      def __array_finalize__(self, obj):
          print 'In __array_finalize__:'
          print '   self is %s' % repr(self)
          print '   obj is %s' % repr(obj)
          if obj is None: return
          self.info = getattr(obj, 'info', None)

      def __array_wrap__(self, out_arr, context=None):
          print 'In __array_wrap__:'
          print '   self is %s' % repr(self)
          print '   arr is %s' % repr(out_arr)
          # then just call the parent
          return np.ndarray.__array_wrap__(self, out_arr, context)

We run a ufunc on an instance of our new array:

>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
   self is MySubClass([0, 1, 2, 3, 4])
   obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
   self is MySubClass([0, 1, 2, 3, 4])
   arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
   self is MySubClass([1, 3, 5, 7, 9])
   obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'

Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the input with the highest ``__array_priority__`` value, in this case ``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and ``out_arr`` as the (ndarray) result of the addition. In turn, the default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the result to class ``MySubClass``, and called ``__array_finalize__`` - hence the copying of the ``info`` attribute. This has all happened at the C level.

But, we could do anything we wanted:

.. testcode::

  class SillySubClass(np.ndarray):

      def __array_wrap__(self, arr, context=None):
          return 'I lost your data'

>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'

So, by defining a specific ``__array_wrap__`` method for our subclass, we can tweak the output from ufuncs. The ``__array_wrap__`` method takes ``self``, then an argument - which is the result of the ufunc - and an optional parameter *context*. This parameter is passed in by some ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc, domain of the ufunc). ``__array_wrap__`` should return an instance of its containing class. See the masked array subclass for an implementation.

In addition to ``__array_wrap__``, which is called on the way out of the ufunc, there is also an ``__array_prepare__`` method which is called on the way into the ufunc, after the output arrays are created but before any computation has been performed. The default implementation does nothing but pass through the array. ``__array_prepare__`` should not attempt to access the array data or resize the array; it is intended for setting the output array type, updating attributes and metadata, and performing any checks based on the input that may be desired before computation begins. Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or subclass thereof or raise an error.
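To make the ``__array_prepare__`` description concrete, here is a minimal sketch of an override - our illustration, not numpy's own code; the class name and the printed message are invented:

.. testcode::

  class LoggingSubClass(np.ndarray):

      def __array_prepare__(self, out_arr, context=None):
          # called on the way into the ufunc; out_arr is allocated but
          # not yet filled, so we only touch metadata here
          if context is not None:
              ufunc, args, domain = context
              print('Preparing output for %s' % ufunc.__name__)
          # defer to the default, which simply passes the array through
          return np.ndarray.__array_prepare__(self, out_arr, context)

For example, ``np.add(np.arange(3).view(LoggingSubClass), 1)`` would print the message before the addition is computed.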
Extra gotchas - custom ``__del__`` methods and ndarray.base ----------------------------------------------------------- One of the problems that ndarray solves is keeping track of memory ownership of ndarrays and their views. Consider the case where we have created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``. The two objects are looking at the same memory. Numpy keeps track of where the data came from for a particular array or view, with the ``base`` attribute: >>> # A normal ndarray, that owns its own data >>> arr = np.zeros((4,)) >>> # In this case, base is None >>> arr.base is None True >>> # We take a view >>> v1 = arr[1:] >>> # base now points to the array that it derived from >>> v1.base is arr True >>> # Take a view of a view >>> v2 = v1[1:] >>> # base points to the view it derived from >>> v2.base is v1 True In general, if the array owns its own memory, as for ``arr`` in this case, then ``arr.base`` will be None - there are some exceptions to this - see the numpy book for more details. The ``base`` attribute is useful in being able to tell whether we have a view or the original array. This in turn can be useful if we need to know whether or not to do some specific cleanup when the subclassed array is deleted. For example, we may only want to do the cleanup if the original array is deleted, but not the views. For an example of how this can work, have a look at the ``memmap`` class in ``numpy.core``. """
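For the cleanup pattern just described, a minimal sketch might look like this (our illustration, not from the numpy docs; the class name and the print call are invented):

.. testcode::

  import numpy as np

  class CleanupArray(np.ndarray):

      def __del__(self):
          # only the object that owns the memory does the cleanup;
          # views (self.base is not None) are deleted silently
          if self.base is None:
              print('original array deleted - cleaning up')

Here, slicing a ``CleanupArray`` produces views whose deletion triggers nothing, while deleting the original (owning) array runs the cleanup branch.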
"""Stuff to parse AIFF-C and AIFF files. Unless explicitly stated otherwise, the description below is true both for AIFF-C files and AIFF files. An AIFF-C file has the following structure. +-----------------+ | FORM | +-----------------+ | <size> | +----+------------+ | | AIFC | | +------------+ | | <chunks> | | | . | | | . | | | . | +----+------------+ An AIFF file has the string "AIFF" instead of "AIFC". A chunk consists of an identifier (4 bytes) followed by a size (4 bytes, big endian order), followed by the data. The size field does not include the size of the 8 byte header. The following chunk types are recognized. FVER <version number of AIFF-C defining document> (AIFF-C only). MARK <# of markers> (2 bytes) list of markers: <marker ID> (2 bytes, must be > 0) <position> (4 bytes) <marker name> ("pstring") COMM <# of channels> (2 bytes) <# of sound frames> (4 bytes) <size of the samples> (2 bytes) <sampling frequency> (10 bytes, IEEE 80-bit extended floating point) in AIFF-C files only: <compression type> (4 bytes) <human-readable version of compression type> ("pstring") SSND <offset> (4 bytes, not used by this program) <blocksize> (4 bytes, not used by this program) <sound data> A pstring consists of 1 byte length, a string of characters, and 0 or 1 byte pad to make the total length even. Usage. Reading AIFF files: f = aifc.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). In some types of audio files, if the setpos() method is not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' for AIFF files) getcompname() -- returns human-readable version of compression type ('not compressed' for AIFF files) getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- get the list of marks in the audio file or None if there are no marks getmark(id) -- get mark with the specified id (raises an error if the mark does not exist) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell(), the position given to setpos() and the position of marks are all compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing AIFF files: f = aifc.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). 
This returns an instance of a class with the following public methods:

        aiff()          -- create an AIFF file (AIFF-C default)
        aifc()          -- create an AIFF-C file
        setnchannels(n) -- set the number of channels
        setsampwidth(n) -- set the sample width
        setframerate(n) -- set the frame rate
        setnframes(n)   -- set the number of frames
        setcomptype(type, name)
                        -- set the compression type and the
                           human-readable compression type
        setparams(tuple) -- set all parameters at once
        setmark(id, pos, name)
                        -- add specified mark to the list of marks
        tell()          -- return current position in output file (useful
                           in combination with setmark())
        writeframesraw(data)
                        -- write audio frames without patching up the
                           file header
        writeframes(data)
                        -- write audio frames and patch up the file header
        close()         -- patch up the file header and close the
                           output file

You should set the parameters before the first writeframesraw or writeframes. The total number of frames does not need to be set, but when it is set to the correct value, the header does not have to be patched up. It is best to first set all parameters, except possibly the compression type, and then write audio frames using writeframesraw. When all frames have been written, either call writeframes('') or close() to patch up the sizes in the header. Marks can be added anytime. If there are any marks, you must call close() after all frames have been written. The close() method is called automatically when the class instance is destroyed.

When a file is opened with the extension '.aiff', an AIFF file is written, otherwise an AIFF-C file is written. This default can be changed by calling aiff() or aifc() before the first writeframes or writeframesraw.
"""
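A minimal usage sketch, assuming only the methods documented above (the file names are placeholders):

    import aifc

    # copy the audio data from one file to another; close() patches
    # up the header sizes for us
    inp = aifc.open('input.aiff', 'r')
    out = aifc.open('output.aiff', 'w')
    out.setnchannels(inp.getnchannels())
    out.setsampwidth(inp.getsampwidth())
    out.setframerate(inp.getframerate())
    out.writeframes(inp.readframes(inp.getnframes()))
    inp.close()
    out.close()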
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I an trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over.

The drag-and-drop process can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event).

If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence.

Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than doing it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave().

At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit().
"""
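A matching sketch of a drag source, showing the <ButtonPress> binding and the dnd_end() notification described above (again our illustration; the widget and the names are invented):

    import Tkinter
    import Tkdnd

    class Source:

        def __init__(self, root):
            self.label = Tkinter.Label(root, text='drag me')
            self.label.pack()
            self.label.bind('<ButtonPress>', self.press)

        def press(self, event):
            # hand ourselves to the dnd machinery; per the note above,
            # the returned handle need not be stored
            Tkdnd.dnd_start(self, event)

        def dnd_end(self, target, event):
            # called once the drag-and-drop process is over
            print('dropped on %r' % (target,))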
# (c) 2013, NAME <skvidal@fedoraproject.org> red hat, inc
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# take a list of files and (optionally) a list of paths
# return the first existing file found in the paths
# [file1, file2, file3], [path1, path2, path3]
# search order is:
# path1/file1
# path1/file2
# path1/file3
# path2/file1
# path2/file2
# path2/file3
# path3/file1
# path3/file2
# path3/file3
# the first file found with os.path.exists() is returned
# if no file matches, an AnsibleError is raised

# EXAMPLES
# - name: copy first existing file found to /some/file
#   action: copy src=$item dest=/some/file
#   with_first_found:
#    - files: foo ${inventory_hostname} bar
#      paths: /tmp/production /tmp/staging

# that will look for files in this order:
# /tmp/production/foo
#                 ${inventory_hostname}
#                 bar
# /tmp/staging/foo
#              ${inventory_hostname}
#              bar

# - name: copy first existing file found to /some/file
#   action: copy src=$item dest=/some/file
#   with_first_found:
#    - files: /some/place/foo ${inventory_hostname} /some/place/else

# that will look for files in this order:
# /some/place/foo
# $relative_path/${inventory_hostname}
# /some/place/else

# example - including tasks:
# tasks:
# - include: $item
#   with_first_found:
#    - files: generic
#      paths: tasks/staging tasks/production
# this will include the tasks in the file generic where it is found first
# (staging or production)

# example simple file lists
# tasks:
# - name: first found file
#   action: copy src=$item dest=/etc/file.cfg
#   with_first_found:
#    - files: foo.${inventory_hostname} foo

# example skipping if no matched files
# first_found also offers the ability to control whether failing
# to find a file returns an error or not
#
# - name: first found file - or skip
#   action: copy src=$item dest=/etc/file.cfg
#   with_first_found:
#    - files: foo.${inventory_hostname}
#      skip: true

# example a role with default configuration and configuration per host
# you can set multiple terms with their own files and paths to look through.
# consider a role that sets some configuration per host, falling back on a
# default config.
#
# - name: some configuration template
#   template: src={{ item }} dest=/etc/file.cfg mode=0444 owner=root group=root
#   with_first_found:
#    - files:
#       - ${inventory_hostname}/etc/file.cfg
#      paths:
#       - ../../../templates.overwrites
#       - ../../../templates
#    - files:
#       - etc/file.cfg
#      paths:
#       - templates

# the above will return an empty list if the files cannot be found at all;
# if skip is unspecified or set to false, it will instead raise an error,
# which can be caught by ignore_errors: true for that action.

# finally - if you want, you can use it in place of first_available_file:
# you simply cannot use the files, paths or skip options. simply replace
# first_available_file with with_first_found and leave the file listing
# in place
#
# - name: with_first_found like first_available_file
#   action: copy src=$item dest=/tmp/faftest
#   with_first_found:
#    - ../files/foo
#    - ../files/bar
#    - ../files/baz
#   ignore_errors: true
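# as a rough sketch, the documented search order boils down to something
# like the function below (illustrative only - not the plugin's actual
# code; the function name is invented):
#
#     import os
#
#     def first_found(files, paths):
#         for path in paths:
#             for name in files:
#                 candidate = os.path.join(path, name)
#                 if os.path.exists(candidate):
#                     return candidate
#         return None  # the real plugin errors out unless skip is set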
# # XML-RPC CLIENT LIBRARY # $Id: xmlrpclib.py 65467 2008-08-04 00:50:11Z USERNAME $ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # # things to look into some day: # TODO: sort out True/False/boolean issues for Python 2.3
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True). Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including
        DEFAULTSECT.

    remove_section(section)
        Remove the given section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters between
        keys and values are surrounded by spaces.
"""
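A short usage sketch (not part of the original interface description; the section and option names are invented):

    from configparser import ConfigParser

    parser = ConfigParser()
    parser.read_string("[server]\nhost = localhost\nport = 8080\n")
    print(parser.get('server', 'host'))      # -> localhost
    print(parser.getint('server', 'port'))   # -> 8080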
"""subprocess - Subprocesses with accessible I/O streams This module allows you to spawn processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, like: os.system os.spawn* os.popen* popen2.* commands.* Information about how the subprocess module can be used to replace these modules and functions can be found below. Using the subprocess module =========================== This module defines one class called Popen: class Popen(args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0): Arguments are: args should be a string, or a sequence of program arguments. The program to execute is normally the first item in the args sequence or string, but can be explicitly set by using the executable argument. On UNIX, with shell=False (default): In this case, the Popen class uses os.execvp() to execute the child program. args should normally be a sequence. A string will be treated as a sequence with the string as the only item (the program to execute). On UNIX, with shell=True: If args is a string, it specifies the command string to execute through the shell. If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments. On Windows: the Popen class uses CreateProcess() to execute the child program, which operates on strings. If args is a sequence, it will be converted to a string using the list2cmdline method. Please note that not all MS Windows applications interpret the command line the same way: The list2cmdline is designed for applications using the same rules as the MS C runtime. bufsize, if given, has the same meaning as the corresponding argument to the built-in open() function: 0 means unbuffered, 1 means line buffered, any other positive value means use a buffer of (approximately) that size. A negative bufsize means to use the system default, which usually means fully buffered. The default value for bufsize is 0 (unbuffered). stdin, stdout and stderr specify the executed programs' standard input, standard output and standard error file handles, respectively. Valid values are PIPE, an existing file descriptor (a positive integer), an existing file object, and None. PIPE indicates that a new pipe to the child should be created. With None, no redirection will occur; the child's file handles will be inherited from the parent. Additionally, stderr can be STDOUT, which indicates that the stderr data from the applications should be captured into the same file handle as for stdout. If preexec_fn is set to a callable object, this object will be called in the child process just before the child is executed. If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. if shell is true, the specified command will be executed through the shell. If cwd is not None, the current directory will be changed to cwd before the child is executed. If env is not None, it defines the environment variables for the new process. If universal_newlines is true, the file objects stdout and stderr are opened as a text files, but lines may be terminated by any of '\n', the Unix end-of-line convention, '\r', the Macintosh convention or '\r\n', the Windows convention. All of these external representations are seen as '\n' by the Python program. 
Note: This feature is only available if Python is built with universal newline support (the default). Also, the newlines attributes of the file objects stdout, stdin and stderr are not updated by the communicate() method.

The startupinfo and creationflags, if given, will be passed to the underlying CreateProcess() function. They can specify things such as appearance of the main window and priority for the new process. (Windows only)

This module also defines some shortcut functions:

call(*popenargs, **kwargs):
    Run command with arguments. Wait for command to complete, then
    return the returncode attribute.

    The arguments are the same as for the Popen constructor. Example:

    retcode = call(["ls", "-l"])

check_call(*popenargs, **kwargs):
    Run command with arguments. Wait for command to complete. If the
    exit code was zero then return, otherwise raise
    CalledProcessError. The CalledProcessError object will have the
    return code in the returncode attribute.

    The arguments are the same as for the Popen constructor. Example:

    check_call(["ls", "-l"])

check_output(*popenargs, **kwargs):
    Run command with arguments and return its output as a byte string.

    If the exit code was non-zero it raises a CalledProcessError. The
    CalledProcessError object will have the return code in the
    returncode attribute and output in the output attribute.

    The arguments are the same as for the Popen constructor. Example:

    output = check_output(["ls", "-l", "/dev/null"])

Exceptions
----------
Exceptions raised in the child process, before the new program has started to execute, will be re-raised in the parent. Additionally, the exception object will have one extra attribute called 'child_traceback', which is a string containing traceback information from the child's point of view.

The most common exception raised is OSError. This occurs, for example, when trying to execute a non-existent file. Applications should prepare for OSErrors.

A ValueError will be raised if Popen is called with invalid arguments.

check_call() and check_output() will raise CalledProcessError, if the called process returns a non-zero return code.

Security
--------
Unlike some other popen functions, this implementation will never call /bin/sh implicitly. This means that all characters, including shell metacharacters, can safely be passed to child processes.

Popen objects
=============
Instances of the Popen class have the following methods:

poll()
    Check if child process has terminated. Returns returncode attribute.

wait()
    Wait for child process to terminate. Returns returncode attribute.

communicate(input=None)
    Interact with process: Send data to stdin. Read data from stdout
    and stderr, until end-of-file is reached. Wait for process to
    terminate. The optional input argument should be a string to be
    sent to the child process, or None, if no data should be sent to
    the child.

    communicate() returns a tuple (stdout, stderr).

    Note: The data read is buffered in memory, so do not use this
    method if the data size is large or unlimited.

The following attributes are also available:

stdin
    If the stdin argument is PIPE, this attribute is a file object
    that provides input to the child process. Otherwise, it is None.

stdout
    If the stdout argument is PIPE, this attribute is a file object
    that provides output from the child process. Otherwise, it is None.

stderr
    If the stderr argument is PIPE, this attribute is a file object
    that provides error output from the child process. Otherwise, it
    is None.

pid
    The process ID of the child process.

returncode
    The child return code.
A None value indicates that the process hasn't terminated yet. A negative value -N indicates that the child was terminated by signal N (UNIX only). Replacing older functions with the subprocess module ==================================================== In this section, "a ==> b" means that b can be used as a replacement for a. Note: All functions in this section fail (more or less) silently if the executed program cannot be found; this module raises an OSError exception. In the following examples, we assume that the subprocess module is imported with "from subprocess import *". Replacing /bin/sh shell backquote --------------------------------- output=`mycmd myarg` ==> output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0] Replacing shell pipe line ------------------------- output=`dmesg | grep hda` ==> p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) output = p2.communicate()[0] Replacing os.system() --------------------- sts = os.system("mycmd" + " myarg") ==> p = Popen("mycmd" + " myarg", shell=True) pid, sts = os.waitpid(p.pid, 0) Note: * Calling the program through the shell is usually not required. * It's easier to look at the returncode attribute than the exitstatus. A more real-world example would look like this: try: retcode = call("mycmd" + " myarg", shell=True) if retcode < 0: print >>sys.stderr, "Child was terminated by signal", -retcode else: print >>sys.stderr, "Child returned", retcode except OSError, e: print >>sys.stderr, "Execution failed:", e Replacing os.spawn* ------------------- P_NOWAIT example: pid = os.spawnlp(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg") ==> pid = Popen(["/bin/mycmd", "myarg"]).pid P_WAIT example: retcode = os.spawnlp(os.P_WAIT, "/bin/mycmd", "mycmd", "myarg") ==> retcode = call(["/bin/mycmd", "myarg"]) Vector example: os.spawnvp(os.P_NOWAIT, path, args) ==> Popen([path] + args[1:]) Environment example: os.spawnlpe(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg", env) ==> Popen(["/bin/mycmd", "myarg"], env={"PATH": "/usr/bin"}) Replacing os.popen* ------------------- pipe = os.popen("cmd", mode='r', bufsize) ==> pipe = Popen("cmd", shell=True, bufsize=bufsize, stdout=PIPE).stdout pipe = os.popen("cmd", mode='w', bufsize) ==> pipe = Popen("cmd", shell=True, bufsize=bufsize, stdin=PIPE).stdin (child_stdin, child_stdout) = os.popen2("cmd", mode, bufsize) ==> p = Popen("cmd", shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, close_fds=True) (child_stdin, child_stdout) = (p.stdin, p.stdout) (child_stdin, child_stdout, child_stderr) = os.popen3("cmd", mode, bufsize) ==> p = Popen("cmd", shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True) (child_stdin, child_stdout, child_stderr) = (p.stdin, p.stdout, p.stderr) (child_stdin, child_stdout_and_stderr) = os.popen4("cmd", mode, bufsize) ==> p = Popen("cmd", shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True) (child_stdin, child_stdout_and_stderr) = (p.stdin, p.stdout) On Unix, os.popen2, os.popen3 and os.popen4 also accept a sequence as the command to execute, in which case arguments will be passed directly to the program without shell intervention. This usage can be replaced as follows: (child_stdin, child_stdout) = os.popen2(["/bin/ls", "-l"], mode, bufsize) ==> p = Popen(["/bin/ls", "-l"], bufsize=bufsize, stdin=PIPE, stdout=PIPE) (child_stdin, child_stdout) = (p.stdin, p.stdout) Return code handling translates as follows: pipe = os.popen("cmd", 'w') ... 
rc = pipe.close()
if rc is not None and rc % 256:
    print "There were some errors"
==>
process = Popen("cmd", shell=True, stdin=PIPE)
...
process.stdin.close()
if process.wait() != 0:
    print "There were some errors"

Replacing popen2.*
------------------
(child_stdout, child_stdin) = popen2.popen2("somestring", bufsize, mode)
==>
p = Popen(["somestring"], shell=True, bufsize=bufsize,
          stdin=PIPE, stdout=PIPE, close_fds=True)
(child_stdout, child_stdin) = (p.stdout, p.stdin)

On Unix, popen2 also accepts a sequence as the command to execute, in which case arguments will be passed directly to the program without shell intervention. This usage can be replaced as follows:

(child_stdout, child_stdin) = popen2.popen2(["mycmd", "myarg"], bufsize, mode)
==>
p = Popen(["mycmd", "myarg"], bufsize=bufsize,
          stdin=PIPE, stdout=PIPE, close_fds=True)
(child_stdout, child_stdin) = (p.stdout, p.stdin)

popen2.Popen3 and popen2.Popen4 basically work as subprocess.Popen, except that:

* subprocess.Popen raises an exception if the execution fails
* the capturestderr argument is replaced with the stderr argument.
* stdin=PIPE and stdout=PIPE must be specified.
* popen2 closes all file descriptors by default, but you have to specify
  close_fds=True with subprocess.Popen.
"""
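A short, self-contained sketch of the call()/check_call() pattern described above ("ls" and "false" are placeholder commands):

    from subprocess import CalledProcessError, call, check_call

    rc = call(["ls", "-l"])               # wait, then inspect the code
    print("ls exited with %d" % rc)

    try:
        check_call(["false"])             # non-zero exit status raises
    except CalledProcessError as e:
        print("command failed with %d" % e.returncode)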
# -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.

# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# With the current settings, the company is not liable for VAT.
# This basic setting is very easy to change and, as a rule, requires an
# initial assignment of tax accounts to products and/or general ledger
# accounts, or to partners.
# The output taxes (full rate, reduced rate and tax-exempt) should be
# stored on the product master data, depending on the applicable tax
# regulations. The assignment is made on the Accounting tab
# (category: output tax).
# The input taxes (full rate, reduced rate and tax-exempt) should likewise
# be stored on the product master data, depending on the applicable tax
# regulations. The assignment is made on the Accounting tab
# (category: input tax).
# The assignment of taxes for imports and exports from and to EU countries,
# as well as for purchases and sales from and to third countries, should be
# stored on the partner (supplier/customer), depending on the country of
# origin of the supplier/customer. The assignment on the customer takes
# precedence over the assignment on products and overrides it in the
# individual case.
#
# To simplify tax reporting and posting for foreign transactions, Odoo
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'VAT 19%' to 'tax-exempt imports from the EU') so that this mapping can
# be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input tax base amount (e.g. input tax base amount, full rate 19%).
# The tax amount appears under the category 'input taxes' (e.g. input tax
# 19%). Multidimensional hierarchies allow various positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the output tax base amount (e.g. output tax base amount, full rate
# 19%). The tax amount appears under the category 'output taxes'
# (e.g. output tax 19%). Multidimensional hierarchies allow various
# positions to be aggregated.
# The assigned tax codes can be reviewed at the level of the individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes result in a correction (offsetting entry) of the tax
# posting, in the form of a mirror-image posting.
#!/usr/bin/env python # -*- coding: utf-8 -*- # ***********************IMPORTANT NMAP LICENSE TERMS************************ # * * # * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is * # * also a registered trademark of Insecure.Com LLC. This program is free * # * software; you may redistribute and/or modify it under the terms of the * # * GNU General Public License as published by the Free Software * # * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS * # * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, * # * modify, and redistribute this software under certain conditions. If * # * you wish to embed Nmap technology into proprietary software, we sell * # * alternative licenses (contact EMAIL Dozens of software * # * vendors already license Nmap technology such as host discovery, port * # * scanning, OS detection, version detection, and the Nmap Scripting * # * Engine. * # * * # * Note that the GPL places important restrictions on "derivative works", * # * yet it does not provide a detailed definition of that term. To avoid * # * misunderstandings, we interpret that term as broadly as copyright law * # * allows. For example, we consider an application to constitute a * # * derivative work for the purpose of this license if it does any of the * # * following with any software or content covered by this license * # * ("Covered Software"): * # * * # * o Integrates source code from Covered Software. * # * * # * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db * # * or nmap-service-probes. * # * * # * o Is designed specifically to execute Covered Software and parse the * # * results (as opposed to typical shell or execution-menu apps, which will * # * execute anything you tell them to). * # * * # * o Includes Covered Software in a proprietary executable installer. The * # * installers produced by InstallShield are an example of this. Including * # * Nmap with other software in compressed or archival form does not * # * trigger this provision, provided appropriate open source decompression * # * or de-archiving software is widely available for no charge. For the * # * purposes of this license, an installer is considered to include Covered * # * Software even if it actually retrieves a copy of Covered Software from * # * another source during runtime (such as by downloading it from the * # * Internet). * # * * # * o Links (statically or dynamically) to a library which does any of the * # * above. * # * * # * o Executes a helper program, module, or script to do any of the above. * # * * # * This list is not exclusive, but is meant to clarify our interpretation * # * of derived works with some common examples. Other people may interpret * # * the plain GPL differently, so we consider this a special exception to * # * the GPL that we apply to Covered Software. Works which meet any of * # * these conditions must conform to all of the terms of this license, * # * particularly including the GPL Section 3 requirements of providing * # * source code and allowing free redistribution of the work as a whole. * # * * # * As another special exception to the GPL terms, Insecure.Com LLC grants * # * permission to link the code of this program with any version of the * # * OpenSSL library which is distributed under a license identical to that * # * listed in the included docs/licenses/OpenSSL.txt file, and distribute * # * linked combinations including the two. 
* # * * # * Any redistribution of Covered Software, including any derived works, * # * must obey and carry forward all of the terms of this license, including * # * obeying all GPL rules and restrictions. For example, source code of * # * the whole work must be provided and free redistribution must be * # * allowed. All GPL references to "this License", are to be treated as * # * including the terms and conditions of this license text as well. * # * * # * Because this license imposes special exceptions to the GPL, Covered * # * Work may not be combined (even as part of a larger work) with plain GPL * # * software. The terms, conditions, and exceptions of this license must * # * be included as well. This license is incompatible with some other open * # * source licenses as well. In some cases we can relicense portions of * # * Nmap or grant special permissions to use it in other open source * # * software. Please contact EMAIL with any such requests. * # * Similarly, we don't incorporate incompatible open source software into * # * Covered Software without special permission from the copyright holders. * # * * # * If you have any questions about the licensing restrictions on using * # * Nmap in other works, are happy to help. As mentioned above, we also * # * offer alternative license to integrate Nmap into proprietary * # * applications and appliances. These contracts have been sold to dozens * # * of software vendors, and generally include a perpetual license as well * # * as providing for priority support and updates. They also fund the * # * continued development of Nmap. Please email EMAIL for further * # * information. * # * * # * If you have received a written license agreement or contract for * # * Covered Software stating terms other than these, you may choose to use * # * and redistribute Covered Software under those terms instead of these. * # * * # * Source is provided to this software because we believe users have a * # * right to know exactly what a program is going to do before they run it. * # * This also allows you to audit the software for security holes (none * # * have been found so far). * # * * # * Source code also allows you to port Nmap to new platforms, fix bugs, * # * and add new features. You are highly encouraged to send your changes * # * to the EMAIL mailing list for possible incorporation into the * # * main distribution. By sending these changes to Fyodor or one of the * # * Insecure.Org development mailing lists, or checking them into the Nmap * # * source code repository, it is understood (unless you specify otherwise) * # * that you are offering the Nmap Project (Insecure.Com LLC) the * # * unlimited, non-exclusive right to reuse, modify, and relicense the * # * code. Nmap will always be available Open Source, but this is important * # * because the inability to relicense code has caused devastating problems * # * for other Free Software projects (such as KDE and NASM). We also * # * occasionally relicense the code to third parties as discussed above. * # * If you wish to specify special license conditions of your * # * contributions, just say so when you send them. * # * * # * This program is distributed in the hope that it will be useful, but * # * WITHOUT ANY WARRANTY; without even the implied warranty of * # * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the Nmap * # * license file for more details (it's in a COPYING file included with * # * Nmap, and also available from https://svn.nmap.org/nmap/COPYING * # * * # ***************************************************************************/
""" --- Day 10: Knot Hash --- You come across some programs that are trying to implement a software emulation of a hash based on knot-tying. The hash these programs are implementing isn't very strong, but you decide to help them anyway. You make a mental note to remind the Elves later not to invent their own cryptographic functions. This hash function simulates tying a knot in a circle of string with 256 marks on it. Based on the input to be hashed, the function repeatedly selects a span of string, brings the ends together, and gives the span a half-twist to reverse the order of the marks within it. After doing this many times, the order of the marks is used to build the resulting hash. 4--5 pinch 4 5 4 1 / \ 5,0,1 / \/ \ twist / \ / \ 3 0 --> 3 0 --> 3 X 0 \ / \ /\ / \ / \ / 2--1 2 1 2 5 To achieve this, begin with a list of numbers from 0 to 255, a current position which begins at 0 (the first element in the list), a skip size (which starts at 0), and a sequence of lengths (your puzzle input). Then, for each length: Reverse the order of that length of elements in the list, starting with the element at the current position. Move the current position forward by that length plus the skip size. Increase the skip size by one. The list is circular; if the current position and the length try to reverse elements beyond the end of the list, the operation reverses using as many extra elements as it needs from the front of the list. If the current position moves past the end of the list, it wraps around to the front. Lengths larger than the size of the list are invalid. Here's an example using a smaller list: Suppose we instead only had a circular list containing five elements, 0, 1, 2, 3, 4, and were given input lengths of 3, 4, 1, 5. The list begins as [0] 1 2 3 4 (where square brackets indicate the current position). The first length, 3, selects ([0] 1 2) 3 4 (where parentheses indicate the sublist to be reversed). After reversing that section (0 1 2 into 2 1 0), we get ([2] 1 0) 3 4. Then, the current position moves forward by the length, 3, plus the skip size, 0: 2 1 0 [3] 4. Finally, the skip size increases to 1. The second length, 4, selects a section which wraps: 2 1) 0 ([3] 4. The sublist 3 4 2 1 is reversed to form 1 2 4 3: 4 3) 0 ([1] 2. The current position moves forward by the length plus the skip size, a total of 5, causing it not to move because it wraps around: 4 3 0 [1] 2. The skip size increases to 2. The third length, 1, selects a sublist of a single element, and so reversing it has no effect. The current position moves forward by the length (1) plus the skip size (2): 4 [3] 0 1 2. The skip size increases to 3. The fourth length, 5, selects every element starting with the second: 4) ([3] 0 1 2. Reversing this sublist (3 0 1 2 4 into 4 2 1 0 3) produces: 3) ([4] 2 1 0. Finally, the current position moves forward by 8: 3 4 2 1 [0]. The skip size increases to 4. In this example, the first two numbers in the list end up being 3 and 4; to check the process, you can multiply them together to produce 12. However, you should instead use the standard list size of 256 (with values 0 to 255) and the sequence of lengths in your puzzle input. Once this process is complete, what is the result of multiplying the first two numbers in the list? --- Part Two --- The logic you've constructed forms a single round of the Knot Hash algorithm; running the full thing requires many of these rounds. Some input and output processing is also required. 
First, from now on, your input should be taken not as a list of numbers, but as a string of bytes instead. Unless otherwise specified, convert characters to bytes using their ASCII codes. This will allow you to handle arbitrary ASCII strings, and it also ensures that your input lengths are never larger than 255. For example, if you are given 1,2,3, you should convert it to the ASCII codes for each character: 49,44,50,44,51. Once you have determined the sequence of lengths to use, add the following lengths to the end of the sequence: 17, 31, 73, 47, 23. For example, if you are given 1,2,3, your final sequence of lengths should be 49,44,50,44,51,17,31,73,47,23 (the ASCII codes from the input string combined with the standard length suffix values). Second, instead of merely running one round like you did above, run a total of 64 rounds, using the same length sequence in each round. The current position and skip size should be preserved between rounds. For example, if the previous example was your first round, you would start your second round with the same length sequence (3, 4, 1, 5, 17, 31, 73, 47, 23, now assuming they came from ASCII codes and include the suffix), but start with the previous round's current position (4) and skip size (4). Once the rounds are complete, you will be left with the numbers from 0 to 255 in some order, called the sparse hash. Your next task is to reduce these to a list of only 16 numbers called the dense hash. To do this, use numeric bitwise XOR to combine each consecutive block of 16 numbers in the sparse hash (there are 16 such blocks in a list of 256 numbers). So, the first element in the dense hash is the first sixteen elements of the sparse hash XOR'd together, the second element in the dense hash is the second sixteen elements of the sparse hash XOR'd together, etc. For example, if the first sixteen elements of your sparse hash are as shown below, and the XOR operator is ^, you would calculate the first output number like this: 65 ^ 27 ^ 9 ^ 1 ^ 4 ^ 3 ^ 40 ^ 50 ^ 91 ^ 7 ^ 6 ^ 0 ^ 2 ^ 5 ^ 68 ^ 22 = 64 Perform this operation on each of the sixteen blocks of sixteen numbers in your sparse hash to determine the sixteen numbers in your dense hash. Finally, the standard way to represent a Knot Hash is as a single hexadecimal string; the final output is the dense hash in hexadecimal notation. Because each number in your dense hash will be between 0 and 255 (inclusive), always represent each number as two hexadecimal digits (including a leading zero as necessary). So, if your first three numbers are 64, 7, 255, they correspond to the hexadecimal numbers 40, 07, ff, and so the first six characters of the hash would be 4007ff. Because every Knot Hash is sixteen such numbers, the hexadecimal representation is always 32 hexadecimal digits (0-f) long. Here are some example hashes: The empty string becomes a2582a3a0e66e6e86e3812dcb672a272. AoC 2017 becomes 33efeb34ea91902bb2f59c9920caa6cd. 1,2,3 becomes 3efbe78a8d82f29979031a4aa0b16a9d. 1,2,4 becomes 63960835bcdc130f0b66d7ff4f6a5a8e. Treating your puzzle input as a string of ASCII characters, what is the Knot Hash of your puzzle input? Ignore any leading or trailing whitespace you might encounter. """
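The specification above is complete enough to implement directly. A compact sketch (ours; the function name is invented) that reproduces the example hashes listed above:

    def knot_hash(text):
        # lengths are the ASCII codes of the input plus the standard suffix
        lengths = [ord(c) for c in text] + [17, 31, 73, 47, 23]
        marks = list(range(256))
        pos = skip = 0
        for _ in range(64):           # 64 rounds; position and skip persist
            for length in lengths:
                # reverse `length` elements starting at `pos`, wrapping
                idx = [(pos + i) % 256 for i in range(length)]
                vals = [marks[i] for i in idx]
                for i, v in zip(idx, reversed(vals)):
                    marks[i] = v
                pos = (pos + length + skip) % 256
                skip += 1
        # dense hash: XOR each block of 16, then two hex digits per number
        dense = []
        for start in range(0, 256, 16):
            acc = 0
            for v in marks[start:start + 16]:
                acc ^= v
            dense.append(acc)
        return ''.join('%02x' % v for v in dense)

    # checks against the examples above:
    # knot_hash('') == 'a2582a3a0e66e6e86e3812dcb672a272'
    # knot_hash('1,2,3') == '3efbe78a8d82f29979031a4aa0b16a9d'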
""" Simple config ============= Although CherryPy uses the :mod:`Python logging module <logging>`, it does so behind the scenes so that simple logging is simple, but complicated logging is still possible. "Simple" logging means that you can log to the screen (i.e. console/stdout) or to a file, and that you can easily have separate error and access log files. Here are the simplified logging settings. You use these by adding lines to your config file or dict. You should set these at either the global level or per application (see next), but generally not both. * ``log.screen``: Set this to True to have both "error" and "access" messages printed to stdout. * ``log.access_file``: Set this to an absolute filename where you want "access" messages written. * ``log.error_file``: Set this to an absolute filename where you want "error" messages written. Many events are automatically logged; to log your own application events, call :func:`cherrypy.log`. Architecture ============ Separate scopes --------------- CherryPy provides log managers at both the global and application layers. This means you can have one set of logging rules for your entire site, and another set of rules specific to each application. The global log manager is found at :func:`cherrypy.log`, and the log manager for each application is found at :attr:`app.log<cherrypy._cptree.Application.log>`. If you're inside a request, the latter is reachable from ``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain a reference to the ``app``: either the return value of :func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used :func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``. By default, the global logs are named "cherrypy.error" and "cherrypy.access", and the application logs are named "cherrypy.error.2378745" and "cherrypy.access.2378745" (the number is the id of the Application object). This means that the application logs "bubble up" to the site logs, so if your application has no log handlers, the site-level handlers will still log the messages. Errors vs. Access ----------------- Each log manager handles both "access" messages (one per HTTP request) and "error" messages (everything else). Note that the "error" log is not just for errors! The format of access messages is highly formalized, but the error log isn't--it receives messages from a variety of sources (including full error tracebacks, if enabled). If you are logging the access log and error log to the same source, then there is a possibility that a specially crafted error message may replicate an access log message as described in CWE-117. In this case it is the application developer's responsibility to manually escape data before using CherryPy's log() functionality, or they may create an application that is vulnerable to CWE-117. This would be achieved by using a custom handler escape any special characters, and attached as described below. Custom Handlers =============== The simple settings above work by manipulating Python's standard :mod:`logging` module. So when you need something more complex, the full power of the standard module is yours to exploit. You can borrow or create custom handlers, formats, filters, and much more. Here's an example that skips the standard FileHandler and uses a RotatingFileHandler instead: :: #python log = app.log # Remove the default FileHandlers if present. 
log.error_file = "" log.access_file = "" maxBytes = getattr(log, "rot_maxBytes", 10000000) backupCount = getattr(log, "rot_backupCount", 1000) # Make a new RotatingFileHandler for the error log. fname = getattr(log, "rot_error_file", "error.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.error_log.addHandler(h) # Make a new RotatingFileHandler for the access log. fname = getattr(log, "rot_access_file", "access.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.access_log.addHandler(h) The ``rot_*`` attributes are pulled straight from the application log object. Since "log.*" config entries simply set attributes on the log object, you can add custom attributes to your heart's content. Note that these handlers are used ''instead'' of the default, simple handlers outlined above (so don't set the "log.error_file" config entry, for example). """
"""Discussion of bloom constants for bup: There are four basic things to consider when building a bloom filter: The size, in bits, of the filter The capacity, in entries, of the filter The probability of a false positive that is tolerable The number of bits readily available to use for addressing filter bits There is one major tunable that is not directly related to the above: k: the number of bits set in the filter per entry Here's a wall of numbers showing the relationship between k; the ratio between the filter size in bits and the entries in the filter; and pfalse_positive: mn|k=3 |k=4 |k=5 |k=6 |k=7 |k=8 |k=9 |k=10 |k=11 8|3.05794|2.39687|2.16792|2.15771|2.29297|2.54917|2.92244|3.41909|4.05091 9|2.27780|1.65770|1.40703|1.32721|1.34892|1.44631|1.61138|1.84491|2.15259 10|1.74106|1.18133|0.94309|0.84362|0.81937|0.84555|0.91270|1.01859|1.16495 11|1.36005|0.86373|0.65018|0.55222|0.51259|0.50864|0.53098|0.57616|0.64387 12|1.08231|0.64568|0.45945|0.37108|0.32939|0.31424|0.31695|0.33387|0.36380 13|0.87517|0.49210|0.33183|0.25527|0.21689|0.19897|0.19384|0.19804|0.21013 14|0.71759|0.38147|0.24433|0.17934|0.14601|0.12887|0.12127|0.12012|0.12399 15|0.59562|0.30019|0.18303|0.12840|0.10028|0.08523|0.07749|0.07440|0.07468 16|0.49977|0.23941|0.13925|0.09351|0.07015|0.05745|0.05049|0.04700|0.04587 17|0.42340|0.19323|0.10742|0.06916|0.04990|0.03941|0.03350|0.03024|0.02870 18|0.36181|0.15765|0.08392|0.05188|0.03604|0.02748|0.02260|0.01980|0.01827 19|0.31160|0.12989|0.06632|0.03942|0.02640|0.01945|0.01549|0.01317|0.01182 20|0.27026|0.10797|0.05296|0.03031|0.01959|0.01396|0.01077|0.00889|0.00777 21|0.23591|0.09048|0.04269|0.02356|0.01471|0.01014|0.00759|0.00609|0.00518 22|0.20714|0.07639|0.03473|0.01850|0.01117|0.00746|0.00542|0.00423|0.00350 23|0.18287|0.06493|0.02847|0.01466|0.00856|0.00555|0.00392|0.00297|0.00240 24|0.16224|0.05554|0.02352|0.01171|0.00663|0.00417|0.00286|0.00211|0.00166 25|0.14459|0.04779|0.01957|0.00944|0.00518|0.00316|0.00211|0.00152|0.00116 26|0.12942|0.04135|0.01639|0.00766|0.00408|0.00242|0.00157|0.00110|0.00082 27|0.11629|0.03595|0.01381|0.00626|0.00324|0.00187|0.00118|0.00081|0.00059 28|0.10489|0.03141|0.01170|0.00515|0.00259|0.00146|0.00090|0.00060|0.00043 29|0.09492|0.02756|0.00996|0.00426|0.00209|0.00114|0.00069|0.00045|0.00031 30|0.08618|0.02428|0.00853|0.00355|0.00169|0.00090|0.00053|0.00034|0.00023 31|0.07848|0.02147|0.00733|0.00297|0.00138|0.00072|0.00041|0.00025|0.00017 32|0.07167|0.01906|0.00633|0.00250|0.00113|0.00057|0.00032|0.00019|0.00013 Here's a table showing available repository size for a given pfalse_positive and three values of k (assuming we only use the 160 bit SHA1 for addressing the filter and 8192bytes per object): pfalse|obj k=4 |cap k=4 |obj k=5 |cap k=5 |obj k=6 |cap k=6 2.500%|139333497228|1038.11 TiB|558711157|4262.63 GiB|13815755|105.41 GiB 1.000%|104489450934| 778.50 TiB|436090254|3327.10 GiB|11077519| 84.51 GiB 0.125%| 57254889824| 426.58 TiB|261732190|1996.86 GiB| 7063017| 55.89 GiB This eliminates pretty neatly any k>6 as long as we use the raw SHA for addressing. filter size scales linearly with repository size for a given k and pfalse. Here's a table of filter sizes for a 1 TiB repository: pfalse| k=3 | k=4 | k=5 | k=6 2.500%| 138.78 MiB | 126.26 MiB | 123.00 MiB | 123.37 MiB 1.000%| 197.83 MiB | 168.36 MiB | 157.58 MiB | 153.87 MiB 0.125%| 421.14 MiB | 307.26 MiB | 262.56 MiB | 241.32 MiB For bup: * We want the bloom filter to fit in memory; if it doesn't, the k pagefaults per lookup will be worse than the two required for midx. 
* We want the pfalse_positive to be low enough that the cost of sometimes
  faulting on the midx doesn't overcome the benefit of the bloom filter.
* We have readily available 160 bits for addressing the filter.
* We want to be able to have a single bloom address entire repositories of
  reasonable size.

Based on these parameters, a combination of k=4 and k=5 provides the behavior
that bup needs. As such, I've implemented bloom addressing, adding and
checking functions in C for these two values. Because k=5 requires less space
and gives better overall pfalse_positive performance, it is preferred if a
table with k=5 can represent the repository.

None of this tells us what max_pfalse_positive to choose.

Brandon NAME <lostlogic@lostlogicx.com> 2011-02-04
"""
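The wall of numbers above follows from the standard bloom filter
approximation pfalse = (1 - e^(-k/(m/n)))^k; a small sketch (the function
name is illustrative) reproduces the table entries:

    import math

    def pfalse_positive(k, bits_per_entry):
        # Approximate false-positive rate, in percent, for k hash functions
        # and a filter with bits_per_entry (m/n) bits per stored entry.
        return 100 * (1 - math.exp(-k / float(bits_per_entry))) ** k

    print('%.5f' % pfalse_positive(3, 8))    # 3.05794, the m/n=8, k=3 cell
    print('%.5f' % pfalse_positive(5, 16))   # 0.13925, the m/n=16, k=5 cell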
""" Simple config ============= Although CherryPy uses the :mod:`Python logging module <logging>`, it does so behind the scenes so that simple logging is simple, but complicated logging is still possible. "Simple" logging means that you can log to the screen (i.e. console/stdout) or to a file, and that you can easily have separate error and access log files. Here are the simplified logging settings. You use these by adding lines to your config file or dict. You should set these at either the global level or per application (see next), but generally not both. * ``log.screen``: Set this to True to have both "error" and "access" messages printed to stdout. * ``log.access_file``: Set this to an absolute filename where you want "access" messages written. * ``log.error_file``: Set this to an absolute filename where you want "error" messages written. Many events are automatically logged; to log your own application events, call :func:`cherrypy.log`. Architecture ============ Separate scopes --------------- CherryPy provides log managers at both the global and application layers. This means you can have one set of logging rules for your entire site, and another set of rules specific to each application. The global log manager is found at :func:`cherrypy.log`, and the log manager for each application is found at :attr:`app.log<cherrypy._cptree.Application.log>`. If you're inside a request, the latter is reachable from ``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain a reference to the ``app``: either the return value of :func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used :func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``. By default, the global logs are named "cherrypy.error" and "cherrypy.access", and the application logs are named "cherrypy.error.2378745" and "cherrypy.access.2378745" (the number is the id of the Application object). This means that the application logs "bubble up" to the site logs, so if your application has no log handlers, the site-level handlers will still log the messages. Errors vs. Access ----------------- Each log manager handles both "access" messages (one per HTTP request) and "error" messages (everything else). Note that the "error" log is not just for errors! The format of access messages is highly formalized, but the error log isn't--it receives messages from a variety of sources (including full error tracebacks, if enabled). Custom Handlers =============== The simple settings above work by manipulating Python's standard :mod:`logging` module. So when you need something more complex, the full power of the standard module is yours to exploit. You can borrow or create custom handlers, formats, filters, and much more. Here's an example that skips the standard FileHandler and uses a RotatingFileHandler instead: :: #python log = app.log # Remove the default FileHandlers if present. log.error_file = "" log.access_file = "" maxBytes = getattr(log, "rot_maxBytes", 10000000) backupCount = getattr(log, "rot_backupCount", 1000) # Make a new RotatingFileHandler for the error log. fname = getattr(log, "rot_error_file", "error.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.error_log.addHandler(h) # Make a new RotatingFileHandler for the access log. 
fname = getattr(log, "rot_access_file", "access.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.access_log.addHandler(h) The ``rot_*`` attributes are pulled straight from the application log object. Since "log.*" config entries simply set attributes on the log object, you can add custom attributes to your heart's content. Note that these handlers are used ''instead'' of the default, simple handlers outlined above (so don't set the "log.error_file" config entry, for example). """
""" ============= Miscellaneous ============= IEEE 754 Floating Point Special Values: ----------------------------------------------- Special values defined in numpy: nan, inf, NaNs can be used as a poor-man's mask (if you don't care what the original value was) Note: cannot use equality to test NaNs. E.g.: :: >>> myarr = np.array([1., 0., np.nan, 3.]) >>> np.where(myarr == np.nan) >>> np.nan == np.nan # is always False! Use special numpy functions instead. False >>> myarr[myarr == np.nan] = 0. # doesn't work >>> myarr array([ 1., 0., NaN, 3.]) >>> myarr[np.isnan(myarr)] = 0. # use this instead find >>> myarr array([ 1., 0., 0., 3.]) Other related special value functions: :: isinf(): True if value is inf isfinite(): True if not nan or inf nan_to_num(): Map nan to 0, inf to max float, -inf to min float The following corresponds to the usual functions except that nans are excluded from the results: :: nansum() nanmax() nanmin() nanargmax() nanargmin() >>> x = np.arange(10.) >>> x[3] = np.nan >>> x.sum() nan >>> np.nansum(x) 42.0 How numpy handles numerical exceptions: ------------------------------------------ The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow`` and ``'ignore'`` for ``underflow``. But this can be changed, and it can be set individually for different kinds of exceptions. The different behaviors are: - 'ignore' : Take no action when the exception occurs. - 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module). - 'raise' : Raise a `FloatingPointError`. - 'call' : Call a function specified using the `seterrcall` function. - 'print' : Print a warning directly to ``stdout``. - 'log' : Record error in a Log object specified by `seterrcall`. These behaviors can be set for all kinds of errors or specific ones: - all : apply to all numeric exceptions - invalid : when NaNs are generated - divide : divide by zero (for integers as well!) - overflow : floating point overflows - underflow : floating point underflows Note that integer divide-by-zero is handled by the same machinery. These behaviors are set on a per-thread basis. Examples: ------------ :: >>> oldsettings = np.seterr(all='warn') >>> np.zeros(5,dtype=np.float32)/0. invalid value encountered in divide >>> j = np.seterr(under='ignore') >>> np.array([1.e-100])**10 >>> j = np.seterr(invalid='raise') >>> np.sqrt(np.array([-1.])) FloatingPointError: invalid value encountered in sqrt >>> def errorhandler(errstr, errflag): ... print "saw stupid error!" >>> np.seterrcall(errorhandler) <function err_handler at 0x...> >>> j = np.seterr(all='call') >>> np.zeros(5, dtype=np.int32)/0 FloatingPointError: invalid value encountered in divide saw stupid error! >>> j = np.seterr(**oldsettings) # restore previous ... # error-handling settings Interfacing to C: ----------------- Only a survey of the choices. Little detail on how each works. 1) Bare metal, wrap your own C-code manually. - Plusses: - Efficient - No dependencies on other tools - Minuses: - Lots of learning overhead: - need to learn basics of Python C API - need to learn basics of numpy C API - need to learn how to handle reference counting and love it. - Reference counting often difficult to get right. - getting it wrong leads to memory leaks, and worse, segfaults - API will change for Python 3.0! 
2) pyrex

 - Plusses:

   - avoid learning C API's
   - no dealing with reference counting
   - can code in pseudo python and generate C code
   - can also interface to existing C code
   - should shield you from changes to Python C api
   - has become pretty popular within the Python community

 - Minuses:

   - Can write code in non-standard form which may become obsolete
   - Not as flexible as manual wrapping
   - Maintainers not easily adaptable to new features

Thus:

3) cython - fork of pyrex to allow needed features for SAGE

 - being considered as the standard scipy/numpy wrapping tool
 - fast indexing support for arrays

4) ctypes

 - Plusses:

   - part of Python standard library
   - good for interfacing to existing sharable libraries, particularly
     Windows DLLs
   - avoids API/reference counting issues
   - good numpy support: arrays have all these in their ctypes attribute: ::

       a.ctypes.data              a.ctypes.get_strides
       a.ctypes.data_as           a.ctypes.shape
       a.ctypes.get_as_parameter  a.ctypes.shape_as
       a.ctypes.get_data          a.ctypes.strides
       a.ctypes.get_shape         a.ctypes.strides_as

 - Minuses:

   - can't use for writing code to be turned into C extensions, only a
     wrapper tool.

5) SWIG (automatic wrapper generator)

 - Plusses:

   - around a long time
   - multiple scripting language support
   - C++ support
   - Good for wrapping large (many functions) existing C libraries

 - Minuses:

   - generates lots of code between Python and the C code
   - can cause performance problems that are nearly impossible to optimize
     out
   - interface files can be hard to write
   - doesn't necessarily avoid reference counting issues or needing to know
     API's

6) Weave

 - Plusses:

   - Phenomenal tool
   - can turn many numpy expressions into C code
   - dynamic compiling and loading of generated C code
   - can embed pure C code in Python module and have weave extract, generate
     interfaces and compile, etc.

 - Minuses:

   - Future uncertain--lacks a champion

7) Psyco

 - Plusses:

   - Turns pure python into efficient machine code through jit-like
     optimizations
   - very fast when it optimizes well

 - Minuses:

   - Only on intel (windows?)
   - Doesn't do much for numpy?

Interfacing to Fortran:
-----------------------
Fortran: Clear choice is f2py. (Pyfort is an older alternative, but not
supported any longer)

Interfacing to C++:
-------------------

 1) CXX
 2) Boost.python
 3) SWIG
 4) Sage has used cython to wrap C++ (not pretty, but it can be done)
 5) SIP (used mainly in PyQT)
"""
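To make the ctypes route above concrete, a minimal sketch (the library path
and the C function sum_doubles are hypothetical; any shared library exporting
double sum_doubles(double *, size_t) would do):

    import ctypes
    import numpy as np

    lib = ctypes.CDLL('./libexample.so')          # hypothetical library
    lib.sum_doubles.restype = ctypes.c_double
    lib.sum_doubles.argtypes = [ctypes.POINTER(ctypes.c_double),
                                ctypes.c_size_t]

    a = np.arange(5, dtype=np.float64)
    # data_as() hands the array's buffer to C without copying.
    ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    total = lib.sum_doubles(ptr, a.size)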
""" Linear mixed effects models are regression models for dependent data. They can be used to estimate regression relationships involving both means and variances. These models are also known as multilevel linear models, and hierarchical linear models. The MixedLM class fits linear mixed effects models to data, and provides support for some common post-estimation tasks. This is a group-based implementation that is most efficient for models in which the data can be partitioned into independent groups. Some models with crossed effects can be handled by specifying a model with a single group. The data are partitioned into disjoint groups. The probability model for group i is: Y = X*beta + Z*gamma + epsilon where * n_i is the number of observations in group i * Y is a n_i dimensional response vector (called endog in MixedLM) * X is a n_i x k_fe dimensional design matrix for the fixed effects (called exog in MixedLM) * beta is a k_fe-dimensional vector of fixed effects parameters (called fe_params in MixedLM) * Z is a design matrix for the random effects with n_i rows (called exog_re in MixedLM). The number of columns in Z can vary by group as discussed below. * gamma is a random vector with mean 0. The covariance matrix for the first `k_re` elements of `gamma` (called cov_re in MixedLM) is common to all groups. The remaining elements of `gamma` are variance components as discussed in more detail below. Each group receives its own independent realization of gamma. * epsilon is a n_i dimensional vector of iid normal errors with mean 0 and variance sigma^2; the epsilon values are independent both within and between groups Y, X and Z must be entirely observed. beta, Psi, and sigma^2 are estimated using ML or REML estimation, and gamma and epsilon are random so define the probability model. The marginal mean structure is E[Y | X, Z] = X*beta. If only the mean structure is of interest, GEE is an alternative to using linear mixed models. Two types of random effects are supported. Standard random effects are correlated with each other in arbitrary ways. Every group has the same number (`k_re`) of standard random effects, with the same joint distribution (but with independent realizations across the groups). Variance components are uncorrelated with each other, and with the standard random effects. Each variance component has mean zero, and all realizations of a given variance component have the same variance parameter. The number of realized variance components per variance parameter can differ across the groups. The primary reference for the implementation details is: MJ NAME NAME (1988). "Newton Raphson and EM algorithms for linear mixed effects models for repeated measures data". Journal of the American Statistical Association. Volume 83, Issue 404, pages 1014-1022. See also this more recent document: http://econ.ucsb.edu/~doug/245a/Papers/Mixed%20Effects%20Implement.pdf All the likelihood, gradient, and Hessian calculations closely follow Lindstrom and Bates 1988, adapted to support variance components. The following two documents are written more from the perspective of users: http://lme4.r-forge.r-project.org/lMMwR/lrgprt.pdf http://lme4.r-forge.r-project.org/slides/2009-07-07-Rennes/3Longitudinal-4.pdf Notation: * `cov_re` is the random effects covariance matrix (referred to above as Psi) and `scale` is the (scalar) error variance. 
  For a single group, the marginal covariance matrix of endog given exog is
  scale*I + Z * cov_re * Z', where Z is the design matrix for the random
  effects in one group.
* `vcomp` is a vector of variance parameters. The length of `vcomp` is
  determined by the number of keys in either the `exog_vc` argument to
  ``MixedLM``, or the `vc_formula` argument when using formulas to fit a
  model.

Notes:

1. Three different parameterizations are used in different places. The
   regression slopes (usually called `fe_params`) are identical in all three
   parameterizations, but the variance parameters differ. The
   parameterizations are:

   * The "user parameterization" in which cov(endog) = scale*I + Z *
     cov_re * Z', as described above. This is the main parameterization
     visible to the user.
   * The "profile parameterization" in which cov(endog) = I + Z * cov_re1 *
     Z'. This is the parameterization of the profile likelihood that is
     maximized to produce parameter estimates (see Lindstrom and Bates for
     details). The "user" cov_re is equal to the "profile" cov_re1 times the
     scale.
   * The "square root parameterization" in which we work with the Cholesky
     factor of cov_re1 instead of cov_re directly. This is hidden from the
     user.

   All three parameterizations can be packed into a vector by (optionally)
   concatenating `fe_params` together with the lower triangle or Cholesky
   square root of the dependence structure, followed by the variance
   parameters for the variance components. They are stored as square roots
   if (and only if) the random effects covariance matrix is stored as its
   Cholesky factor. Note that when unpacking, it is important to either
   square or reflect the dependence structure depending on which
   parameterization is being used.

Two score methods are implemented. One takes the score with respect to the
elements of the random effects covariance matrix (used for inference once the
MLE is reached), and the other takes the score with respect to the parameters
of the Cholesky square root of the random effects covariance matrix (used for
optimization).

The numerical optimization uses GLS to avoid explicitly optimizing over the
fixed effects parameters. The likelihood that is optimized is profiled over
both the scale parameter (a scalar) and the fixed effects parameters (if
any). As a result of this profiling, it is difficult and unnecessary to
calculate the Hessian of the profiled log likelihood function, so that
calculation is not implemented here. Therefore, optimization methods
requiring the Hessian matrix such as the Newton-Raphson algorithm cannot be
used for model fitting.
"""
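A minimal usage sketch via the formula interface (the data and column names
here are invented; `groups` identifies the independent groups described
above):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical longitudinal data: one row per observation per subject.
    data = pd.DataFrame({
        'y':       [1.2, 1.9, 3.1, 0.8, 1.7, 2.6, 1.1, 2.2, 3.0],
        'time':    [1, 2, 3, 1, 2, 3, 1, 2, 3],
        'subject': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'],
    })

    # Random intercept per subject; add re_formula='~time' for a random slope.
    model = smf.mixedlm('y ~ time', data, groups=data['subject'])
    result = model.fit(reml=True)
    print(result.summary())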
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to reqd all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an
explicit fork in the request handler class handle() method.

Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request).  This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000  NAME <lkcl@samba.org>

  example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  entry is processed by a RequestHandlerClass.
"""
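A minimal sketch of the pattern this docstring describes: a handler class
combined with a mix-in server (an echo service; socketserver is the Python 3
module name, SocketServer in Python 2):

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # rfile/wfile are set up by StreamRequestHandler.
            for line in self.rfile:
                self.wfile.write(line)

    # The mix-in must come first so its process_request() overrides
    # the synchronous one defined in TCPServer.
    class ThreadingEchoServer(socketserver.ThreadingMixIn,
                              socketserver.TCPServer):
        pass

    server = ThreadingEchoServer(('localhost', 9999), EchoHandler)
    server.serve_forever()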
""" ===================================== Sparse matrices (:mod:`scipy.sparse`) ===================================== .. currentmodule:: scipy.sparse SciPy 2-D sparse matrix package for numeric data. Contents ======== Sparse matrix classes --------------------- .. autosummary:: :toctree: generated/ bsr_matrix - Block Sparse Row matrix coo_matrix - A sparse matrix in COOrdinate format csc_matrix - Compressed Sparse Column matrix csr_matrix - Compressed Sparse Row matrix dia_matrix - Sparse matrix with DIAgonal storage dok_matrix - Dictionary Of Keys based sparse matrix lil_matrix - Row-based linked list sparse matrix Functions --------- Building sparse matrices: .. autosummary:: :toctree: generated/ eye - Sparse MxN matrix whose k-th diagonal is all ones identity - Identity matrix in sparse format kron - kronecker product of two sparse matrices kronsum - kronecker sum of sparse matrices diags - Return a sparse matrix from diagonals spdiags - Return a sparse matrix from diagonals block_diag - Build a block diagonal sparse matrix tril - Lower triangular portion of a matrix in sparse format triu - Upper triangular portion of a matrix in sparse format bmat - Build a sparse matrix from sparse sub-blocks hstack - Stack sparse matrices horizontally (column wise) vstack - Stack sparse matrices vertically (row wise) rand - Random values in a given shape Sparse matrix tools: .. autosummary:: :toctree: generated/ find Identifying sparse matrices: .. autosummary:: :toctree: generated/ issparse isspmatrix isspmatrix_csc isspmatrix_csr isspmatrix_bsr isspmatrix_lil isspmatrix_dok isspmatrix_coo isspmatrix_dia Submodules ---------- .. autosummary:: :toctree: generated/ csgraph - Compressed sparse graph routines linalg - sparse linear algebra routines Exceptions ---------- .. autosummary:: :toctree: generated/ SparseEfficiencyWarning SparseWarning Usage information ================= There are seven available sparse matrix types: 1. csc_matrix: Compressed Sparse Column format 2. csr_matrix: Compressed Sparse Row format 3. bsr_matrix: Block Sparse Row format 4. lil_matrix: List of Lists format 5. dok_matrix: Dictionary of Keys format 6. coo_matrix: COOrdinate format (aka IJV, triplet format) 7. dia_matrix: DIAgonal format To construct a matrix efficiently, use either dok_matrix or lil_matrix. The lil_matrix class supports basic slicing and fancy indexing with a similar syntax to NumPy arrays. As illustrated below, the COO format may also be used to efficiently construct matrices. To perform manipulations such as multiplication or inversion, first convert the matrix to either CSC or CSR format. The lil_matrix format is row-based, so conversion to CSR is efficient, whereas conversion to CSC is less so. All conversions among the CSR, CSC, and COO formats are efficient, linear-time operations. Matrix vector product --------------------- To do a vector product between a sparse matrix and a vector simply use the matrix `dot` method, as described in its docstring: >>> import numpy as np >>> from scipy.sparse import csr_matrix >>> A = csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]]) >>> v = np.array([1, 0, -1]) >>> A.dot(v) array([ 1, -3, -1], dtype=int64) .. warning:: As of NumPy 1.7, `np.dot` is not aware of sparse matrices, therefore using it will result on unexpected results or errors. The corresponding dense array should be obtained first instead: >>> np.dot(A.toarray(), v) array([ 1, -3, -1], dtype=int64) but then all the performance advantages would be lost. 
The CSR format is specially suitable for fast matrix vector products. Example 1 --------- Construct a 1000x1000 lil_matrix and add some values to it: >>> from scipy.sparse import lil_matrix >>> from scipy.sparse.linalg import spsolve >>> from numpy.linalg import solve, norm >>> from numpy.random import rand >>> A = lil_matrix((1000, 1000)) >>> A[0, :100] = rand(100) >>> A[1, 100:200] = A[0, :100] >>> A.setdiag(rand(1000)) Now convert it to CSR format and solve A x = b for x: >>> A = A.tocsr() >>> b = rand(1000) >>> x = spsolve(A, b) Convert it to a dense matrix and solve, and check that the result is the same: >>> x_ = solve(A.toarray(), b) Now we can compute norm of the error with: >>> err = norm(x-x_) >>> err < 1e-10 True It should be small :) Example 2 --------- Construct a matrix in COO format: >>> from scipy import sparse >>> from numpy import array >>> I = array([0,3,1,0]) >>> J = array([0,3,1,2]) >>> V = array([4,5,7,9]) >>> A = sparse.coo_matrix((V,(I,J)),shape=(4,4)) Notice that the indices do not need to be sorted. Duplicate (i,j) entries are summed when converting to CSR or CSC. >>> I = array([0,0,1,3,1,0,0]) >>> J = array([0,2,1,3,1,0,0]) >>> V = array([1,1,1,1,1,1,1]) >>> B = sparse.coo_matrix((V,(I,J)),shape=(4,4)).tocsr() This is useful for constructing finite-element stiffness and mass matrices. Further Details --------------- CSR column indices are not necessarily sorted. Likewise for CSC row indices. Use the .sorted_indices() and .sort_indices() methods when sorted indices are required (e.g. when passing data to other libraries). """
""" Low-level LAPACK functions (:mod:`scipy.linalg.lapack`) ======================================================= This module contains low-level functions from the LAPACK library. The `*gegv` family of routines have been removed from LAPACK 3.6.0 and have been deprecated in SciPy 0.17.0. They will be removed in a future release. .. versionadded:: 0.12.0 .. warning:: These functions do little to no error checking. It is possible to cause crashes by mis-using them, so prefer using the higher-level routines in `scipy.linalg`. Finding functions ----------------- .. autosummary:: get_lapack_funcs All functions ------------- .. autosummary:: :toctree: generated/ sgbsv dgbsv cgbsv zgbsv sgbtrf dgbtrf cgbtrf zgbtrf sgbtrs dgbtrs cgbtrs zgbtrs sgebal dgebal cgebal zgebal sgees dgees cgees zgees sgeev dgeev cgeev zgeev sgeev_lwork dgeev_lwork cgeev_lwork zgeev_lwork sgegv dgegv cgegv zgegv sgehrd dgehrd cgehrd zgehrd sgehrd_lwork dgehrd_lwork cgehrd_lwork zgehrd_lwork sgelss dgelss cgelss zgelss sgelss_lwork dgelss_lwork cgelss_lwork zgelss_lwork sgelsd dgelsd cgelsd zgelsd sgelsd_lwork dgelsd_lwork cgelsd_lwork zgelsd_lwork sgelsy dgelsy cgelsy zgelsy sgelsy_lwork dgelsy_lwork cgelsy_lwork zgelsy_lwork sgeqp3 dgeqp3 cgeqp3 zgeqp3 sgeqrf dgeqrf cgeqrf zgeqrf sgerqf dgerqf cgerqf zgerqf sgesdd dgesdd cgesdd zgesdd sgesdd_lwork dgesdd_lwork cgesdd_lwork zgesdd_lwork sgesvd dgesvd cgesvd zgesvd sgesvd_lwork dgesvd_lwork cgesvd_lwork zgesvd_lwork sgesv dgesv cgesv zgesv sgesvx dgesvx cgesvx zgesvx sgecon dgecon cgecon zgecon ssysv dsysv csysv zsysv ssysv_lwork dsysv_lwork csysv_lwork zsysv_lwork ssysvx dsysvx csysvx zsysvx ssysvx_lwork dsysvx_lwork csysvx_lwork zsysvx_lwork chesv zhesv chesv_lwork zhesv_lwork chesvx zhesvx chesvx_lwork zhesvx_lwork sgetrf dgetrf cgetrf zgetrf sgetri dgetri cgetri zgetri sgetri_lwork dgetri_lwork cgetri_lwork zgetri_lwork sgetrs dgetrs cgetrs zgetrs sgges dgges cgges zgges sggev dggev cggev zggev chbevd zhbevd chbevx zhbevx cheev zheev cheevd zheevd cheevr zheevr chegv zhegv chegvd zhegvd chegvx zhegvx slarf dlarf clarf zlarf slarfg dlarfg clarfg zlarfg slartg dlartg clartg zlartg slasd4 dlasd4 slaswp dlaswp claswp zlaswp slauum dlauum clauum zlauum spbsv dpbsv cpbsv zpbsv spbtrf dpbtrf cpbtrf zpbtrf spbtrs dpbtrs cpbtrs zpbtrs sposv dposv cposv zposv sposvx dposvx cposvx zposvx spocon dpocon cpocon zpocon spotrf dpotrf cpotrf zpotrf spotri dpotri cpotri zpotri spotrs dpotrs cpotrs zpotrs crot zrot strsyl dtrsyl ctrsyl ztrsyl strtri dtrtri ctrtri ztrtri strtrs dtrtrs ctrtrs ztrtrs cunghr zunghr cungqr zungqr cungrq zungrq cunmqr zunmqr sgtsv dgtsv cgtsv zgtsv sptsv dptsv cptsv zptsv slamch dlamch sorghr dorghr sorgqr dorgqr sorgrq dorgrq sormqr dormqr ssbev dsbev ssbevd dsbevd ssbevx dsbevx ssyev dsyev ssyevd dsyevd ssyevr dsyevr ssygv dsygv ssygvd dsygvd ssygvx dsygvx slange dlange clange zlange ilaver """
#!/usr/bin/env python

# infill_generator.py
#
# Generate hatch fills for the closed paths (polygons) in the currently
# selected document elements.  If no elements are selected, then all the
# polygons throughout the document are hatched.  The fill rule is an odd/even
# rule: odd numbered intersections (1, 3, 5, etc.) are a hatch line entering
# a polygon while even numbered intersections (2, 4, 6, etc.) are the same
# hatch line exiting the polygon.
#
# This extension first decomposes the selected <path>, <rect>, <line>,
# <polyline>, <polygon>, <circle>, and <ellipse> elements into individual
# moveto and lineto coordinates.  These coordinates are then used to build
# vertex lists.  Only the vertex lists corresponding to polygons (closed
# paths) are kept.  Note that a single graphical element may be composed of
# several subpaths, each subpath potentially a polygon.
#
# Once the lists of all the vertices are built, potential hatch lines are
# "projected" through the bounding box containing all of the vertices.
# For each potential hatch line, all intersections with all the polygon
# edges are determined.  These intersections are stored as decimal fractions
# indicating where along the length of the hatch line the intersection
# occurs.  These values will always be in the range [0, 1].  A value of 0
# indicates that the intersection is at the start of the hatch line, a value
# of 0.5 midway, and a value of 1 at the end of the hatch line.
#
# For a given hatch line, all the fractional values are sorted and any
# duplicates removed.  Duplicates occur, for instance, when the hatch
# line passes through a polygon vertex and thus intersects two edge
# segments of the polygon: the end of one edge and the start of
# another.
#
# Once sorted and duplicates removed, an odd/even rule is applied to
# determine which segments of the potential hatch line are within
# polygons.  The segments found to be within polygons are then saved
# and become the hatch fill lines which will be drawn.
#
# With each saved hatch fill line, information about which SVG graphical
# element it is within is saved.  This way, the hatch fill lines can
# later be grouped with the element they are associated with.  This makes
# it possible to manipulate the two -- graphical element and hatch lines --
# as a single object within Inkscape.
#
# Note: we also save the transformation matrix for each graphical element.
# That way, when we group the hatch fills with the element they are
# filling, we can invert the transformation.  That is, in order to compute
# the hatch fills, we first have to apply ALL applicable transforms to
# all the graphical elements.  We need to do that so that we know where in
# the drawing each of the graphical elements is relative to the others.
# However, this means that the hatch lines have been computed in a setting
# where no further transforms are needed.  If we then put these hatch lines
# into the same groups as the elements being hatched in the ORIGINAL
# drawing, then the hatch lines will have transforms applied again.  So,
# once we compute the hatch lines, we need to invert the transforms of
# the group they will be placed in and apply this inverse transform to the
# hatch lines.  Hence the need to save the transform matrix for every
# graphical element.
# Written by NAME for the Eggbot Project
# dan dot newman at mtbaldy dot us
# Last updated 28 November 2010
# 15 October 2010
#
# Updated by NAME 6/14/2012
# Added tolerance parameter
#
# Update by NAME, 6/20/2012
# Add min span/gap width
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
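A sketch of the odd/even pairing step described in the header above,
operating on the fractional intersection values (the names here are
illustrative, not taken from the extension's code):

    def hatch_segments(fractions, eps=1e-9):
        # Sort the intersection fractions, then drop near-duplicates such
        # as a polygon vertex hit counted once per adjoining edge.
        pts = sorted(fractions)
        unique = []
        for f in pts:
            if not unique or f - unique[-1] > eps:
                unique.append(f)
        # Odd/even rule: the 1st->2nd, 3rd->4th, ... crossings bound the
        # spans of the hatch line lying inside polygons.
        return list(zip(unique[0::2], unique[1::2]))

    # A hatch line crossing a square at 25% and 75% of its length:
    print(hatch_segments([0.75, 0.25]))   # [(0.25, 0.75)]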
""" ===================================== Sparse matrices (:mod:`scipy.sparse`) ===================================== .. currentmodule:: scipy.sparse SciPy 2-D sparse matrix package for numeric data. Contents ======== Sparse matrix classes --------------------- .. autosummary:: :toctree: generated/ bsr_matrix - Block Sparse Row matrix coo_matrix - A sparse matrix in COOrdinate format csc_matrix - Compressed Sparse Column matrix csr_matrix - Compressed Sparse Row matrix dia_matrix - Sparse matrix with DIAgonal storage dok_matrix - Dictionary Of Keys based sparse matrix lil_matrix - Row-based linked list sparse matrix Functions --------- Building sparse matrices: .. autosummary:: :toctree: generated/ eye - Sparse MxN matrix whose k-th diagonal is all ones identity - Identity matrix in sparse format kron - kronecker product of two sparse matrices kronsum - kronecker sum of sparse matrices diags - Return a sparse matrix from diagonals spdiags - Return a sparse matrix from diagonals block_diag - Build a block diagonal sparse matrix tril - Lower triangular portion of a matrix in sparse format triu - Upper triangular portion of a matrix in sparse format bmat - Build a sparse matrix from sparse sub-blocks hstack - Stack sparse matrices horizontally (column wise) vstack - Stack sparse matrices vertically (row wise) rand - Random values in a given shape Sparse matrix tools: .. autosummary:: :toctree: generated/ find Identifying sparse matrices: .. autosummary:: :toctree: generated/ issparse isspmatrix isspmatrix_csc isspmatrix_csr isspmatrix_bsr isspmatrix_lil isspmatrix_dok isspmatrix_coo isspmatrix_dia Submodules ---------- .. autosummary:: :toctree: generated/ csgraph - Compressed sparse graph routines linalg - sparse linear algebra routines Exceptions ---------- .. autosummary:: :toctree: generated/ SparseEfficiencyWarning SparseWarning Usage information ================= There are seven available sparse matrix types: 1. csc_matrix: Compressed Sparse Column format 2. csr_matrix: Compressed Sparse Row format 3. bsr_matrix: Block Sparse Row format 4. lil_matrix: List of Lists format 5. dok_matrix: Dictionary of Keys format 6. coo_matrix: COOrdinate format (aka IJV, triplet format) 7. dia_matrix: DIAgonal format To construct a matrix efficiently, use either dok_matrix or lil_matrix. The lil_matrix class supports basic slicing and fancy indexing with a similar syntax to NumPy arrays. As illustrated below, the COO format may also be used to efficiently construct matrices. To perform manipulations such as multiplication or inversion, first convert the matrix to either CSC or CSR format. The lil_matrix format is row-based, so conversion to CSR is efficient, whereas conversion to CSC is less so. All conversions among the CSR, CSC, and COO formats are efficient, linear-time operations. Matrix vector product --------------------- To do a vector product between a sparse matrix and a vector simply use the matrix `dot` method, as described in its docstring: >>> import numpy as np >>> from scipy.sparse import csr_matrix >>> A = csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]]) >>> v = np.array([1, 0, -1]) >>> A.dot(v) array([ 1, -3, -1], dtype=int64) .. warning:: As of NumPy 1.7, `np.dot` is not aware of sparse matrices, therefore using it will result on unexpected results or errors. The corresponding dense array should be obtained first instead: >>> np.dot(A.toarray(), v) array([ 1, -3, -1], dtype=int64) but then all the performance advantages would be lost. 
The CSR format is specially suitable for fast matrix vector products. Example 1 --------- Construct a 1000x1000 lil_matrix and add some values to it: >>> from scipy.sparse import lil_matrix >>> from scipy.sparse.linalg import spsolve >>> from numpy.linalg import solve, norm >>> from numpy.random import rand >>> A = lil_matrix((1000, 1000)) >>> A[0, :100] = rand(100) >>> A[1, 100:200] = A[0, :100] >>> A.setdiag(rand(1000)) Now convert it to CSR format and solve A x = b for x: >>> A = A.tocsr() >>> b = rand(1000) >>> x = spsolve(A, b) Convert it to a dense matrix and solve, and check that the result is the same: >>> x_ = solve(A.toarray(), b) Now we can compute norm of the error with: >>> err = norm(x-x_) >>> err < 1e-10 True It should be small :) Example 2 --------- Construct a matrix in COO format: >>> from scipy import sparse >>> from numpy import array >>> I = array([0,3,1,0]) >>> J = array([0,3,1,2]) >>> V = array([4,5,7,9]) >>> A = sparse.coo_matrix((V,(I,J)),shape=(4,4)) Notice that the indices do not need to be sorted. Duplicate (i,j) entries are summed when converting to CSR or CSC. >>> I = array([0,0,1,3,1,0,0]) >>> J = array([0,2,1,3,1,0,0]) >>> V = array([1,1,1,1,1,1,1]) >>> B = sparse.coo_matrix((V,(I,J)),shape=(4,4)).tocsr() This is useful for constructing finite-element stiffness and mass matrices. Further Details --------------- CSR column indices are not necessarily sorted. Likewise for CSC row indices. Use the .sorted_indices() and .sort_indices() methods when sorted indices are required (e.g. when passing data to other libraries). """
""" ======== Glossary ======== .. glossary:: along an axis Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). Many operation can take place along one of these axes. For example, we can sum each row of an array, in which case we operate along columns, or axis 1:: >>> x = np.arange(12).reshape((3,4)) >>> x array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.sum(axis=1) array([ 6, 22, 38]) array A homogeneous container of numerical elements. Each element in the array occupies a fixed amount of memory (hence homogeneous), and can be a numerical element of a single type (such as float, int or complex) or a combination (such as ``(float, int, float)``). Each array has an associated data-type (or ``dtype``), which describes the numerical type of its elements:: >>> x = np.array([1, 2, 3], float) >>> x array([ 1., 2., 3.]) >>> x.dtype # floating point number, 64 bits of memory per element dtype('float64') # More complicated data type: each array element is a combination of # and integer and a floating point number >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)]) array([(1, 2.0), (3, 4.0)], dtype=[('x', '<i4'), ('y', '<f8')]) Fast element-wise operations, called `ufuncs`_, operate on arrays. array_like Any sequence that can be interpreted as an ndarray. This includes nested lists, tuples, scalars and existing arrays. attribute A property of an object that can be accessed using ``obj.attribute``, e.g., ``shape`` is an attribute of an array:: >>> x = np.array([1, 2, 3]) >>> x.shape (3,) BLAS `Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_ broadcast NumPy can do operations on arrays whose shapes are mismatched:: >>> x = np.array([1, 2]) >>> y = np.array([[3], [4]]) >>> x array([1, 2]) >>> y array([[3], [4]]) >>> x + y array([[4, 5], [5, 6]]) See `doc.broadcasting`_ for more information. C order See `row-major` column-major A way to represent items in a N-dimensional array in the 1-dimensional computer memory. In column-major order, the leftmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the column-major order as:: [1, 4, 2, 5, 3, 6] Column-major order is also known as the Fortran order, as the Fortran programming language uses it. decorator An operator that transforms a function. For example, a ``log`` decorator may be defined to print debugging information upon function execution:: >>> def log(f): ... def new_logging_func(*args, **kwargs): ... print "Logging call with parameters:", args, kwargs ... return f(*args, **kwargs) ... ... return new_logging_func Now, when we define a function, we can "decorate" it using ``log``:: >>> @log ... def add(a, b): ... return a + b Calling ``add`` then yields: >>> add(1, 2) Logging call with parameters: (1, 2) {} 3 dictionary Resembling a language dictionary, which provides a mapping between words and descriptions thereof, a Python dictionary is a mapping between two objects:: >>> x = {1: 'one', 'two': [1, 2]} Here, `x` is a dictionary mapping keys to values, in this case the integer 1 to the string "one", and the string "two" to the list ``[1, 2]``. The values may be accessed using their corresponding keys:: >>> x[1] 'one' >>> x['two'] [1, 2] Note that dictionaries are not stored in any specific order. 
      Also, most mutable (see *immutable* below) objects, such as lists,
      may not be used as keys.

      For more information on dictionaries, read the
      `Python tutorial <http://docs.python.org/tut>`_.

   Fortran order
      See `column-major`

   flattened
      Collapsed to a one-dimensional array.  See `ndarray.flatten`_ for
      details.

   immutable
      An object that cannot be modified after execution is called
      immutable.  Two common examples are strings and tuples.

   instance
      A class definition gives the blueprint for constructing an object::

        >>> class House(object):
        ...     wall_colour = 'white'

      Yet, we have to *build* a house before it exists::

        >>> h = House() # build a house

      Now, ``h`` is called a ``House`` instance.  An instance is therefore
      a specific realisation of a class.

   iterable
      A sequence that allows "walking" (iterating) over items, typically
      using a loop such as::

        >>> x = [1, 2, 3]
        >>> [item**2 for item in x]
        [1, 4, 9]

      It is often used in combination with ``enumerate``::

        >>> keys = ['a','b','c']
        >>> for n, k in enumerate(keys):
        ...     print "Key %d: %s" % (n, k)
        ...
        Key 0: a
        Key 1: b
        Key 2: c

   list
      A Python container that can hold any number of objects or items.
      The items do not have to be of the same type, and can even be
      lists themselves::

        >>> x = [2, 2.0, "two", [2, 2.0]]

      The list `x` contains 4 items, each of which can be accessed
      individually::

        >>> x[2] # the string 'two'
        'two'

        >>> x[3] # a list, containing an integer 2 and a float 2.0
        [2, 2.0]

      It is also possible to select more than one item at a time,
      using *slicing*::

        >>> x[0:2] # or, equivalently, x[:2]
        [2, 2.0]

      In code, arrays are often conveniently expressed as nested lists::

        >>> np.array([[1, 2], [3, 4]])
        array([[1, 2],
               [3, 4]])

      For more information, read the section on lists in the `Python
      tutorial <http://docs.python.org/tut>`_.  For a mapping
      type (key-value), see *dictionary*.

   mask
      A boolean array, used to select only certain elements for an
      operation::

        >>> x = np.arange(5)
        >>> x
        array([0, 1, 2, 3, 4])

        >>> mask = (x > 2)
        >>> mask
        array([False, False, False, True,  True], dtype=bool)

        >>> x[mask] = -1
        >>> x
        array([ 0,  1,  2, -1, -1])

   masked array
      Array that suppresses values indicated by a mask::

        >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
        >>> x
        masked_array(data = [-- 2.0 --],
                     mask = [ True False  True],
               fill_value = 1e+20)
        <BLANKLINE>

        >>> x + [1, 2, 3]
        masked_array(data = [-- 4.0 --],
                     mask = [ True False  True],
               fill_value = 1e+20)
        <BLANKLINE>

      Masked arrays are often used when operating on arrays containing
      missing or invalid entries.

   matrix
      A 2-dimensional ndarray that preserves its two-dimensional nature
      throughout operations.  It has certain special operations, such as
      ``*`` (matrix multiplication) and ``**`` (matrix power), defined::

        >>> x = np.mat([[1, 2], [3, 4]])
        >>> x
        matrix([[1, 2],
                [3, 4]])

        >>> x**2
        matrix([[ 7, 10],
                [15, 22]])

   method
      A function associated with an object.  For example, each ndarray has
      a method called ``repeat``::

        >>> x = np.array([1, 2, 3])
        >>> x.repeat(2)
        array([1, 1, 2, 2, 3, 3])

   ndarray
      See *array*.

   reference
      If ``a`` is a reference to ``b``, then ``(a is b) == True``.
      Therefore, ``a`` and ``b`` are different names for the same Python
      object.

   row-major
      A way to represent items in a N-dimensional array in the 1-dimensional
      computer memory.  In row-major order, the rightmost index "varies
      the fastest": for example the array::

           [[1, 2, 3],
            [4, 5, 6]]

      is represented in the row-major order as::

           [1, 2, 3, 4, 5, 6]

      Row-major order is also known as the C order, as the C programming
      language uses it.
      New Numpy arrays are by default in row-major order.

   self
      Often seen in method signatures, ``self`` refers to the instance
      of the associated class.  For example:

        >>> class Paintbrush(object):
        ...     color = 'blue'
        ...
        ...     def paint(self):
        ...         print "Painting the city %s!" % self.color
        ...
        >>> p = Paintbrush()
        >>> p.color = 'red'
        >>> p.paint() # self refers to 'p'
        Painting the city red!

   slice
      Used to select only certain elements from a sequence::

        >>> x = range(5)
        >>> x
        [0, 1, 2, 3, 4]

        >>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
        [1, 2]

        >>> x[1:5:2] # slice from 1 to 5, but skipping every second element
        [1, 3]

        >>> x[::-1] # slice a sequence in reverse
        [4, 3, 2, 1, 0]

      Arrays may have more than one dimension, each of which can be sliced
      individually::

        >>> x = np.array([[1, 2], [3, 4]])
        >>> x
        array([[1, 2],
               [3, 4]])

        >>> x[:, 1]
        array([2, 4])

   tuple
      A sequence that may contain a variable number of types of any
      kind.  A tuple is immutable, i.e., once constructed it cannot be
      changed.  Similar to a list, it can be indexed and sliced::

        >>> x = (1, 'one', [1, 2])
        >>> x
        (1, 'one', [1, 2])

        >>> x[0]
        1

        >>> x[:2]
        (1, 'one')

      A useful concept is "tuple unpacking", which allows variables to
      be assigned to the contents of a tuple::

        >>> x, y = (1, 2)
        >>> x, y = 1, 2

      This is often used when a function returns multiple values:

        >>> def return_many():
        ...     return 1, 'alpha', None

        >>> a, b, c = return_many()
        >>> a, b, c
        (1, 'alpha', None)

        >>> a
        1
        >>> b
        'alpha'

   ufunc
      Universal function.  A fast element-wise array operation.  Examples
      include ``add``, ``sin`` and ``logical_or``.

   view
      An array that does not own its data, but refers to another array's
      data instead.  For example, we may create a view that only shows
      every second element of another array::

        >>> x = np.arange(5)
        >>> x
        array([0, 1, 2, 3, 4])

        >>> y = x[::2]
        >>> y
        array([0, 2, 4])

        >>> x[0] = 3 # changing x changes y as well, since y is a view on x
        >>> y
        array([3, 2, 4])

   wrapper
      Python is a high-level (highly abstracted, or English-like) language.
      This abstraction comes at a price in execution speed, and sometimes
      it becomes necessary to use lower level languages to do fast
      computations.  A wrapper is code that provides a bridge between
      high and the low level languages, allowing, e.g., Python to execute
      code written in C or Fortran.

      Examples include ctypes, SWIG and Cython (which wraps C and C++)
      and f2py (which wraps Fortran).
"""
#!/usr/bin/env python

# (c) 2013, NAME <paul.durivage@gmail.com>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
#
#
# Author: NAME <paul.durivage@gmail.com>
#
# Description:
# This module queries local or remote Docker daemons and generates
# inventory information.
#
# This plugin does not support targeting of specific hosts using the --host
# flag.  Instead, it queries the Docker API for each container, running
# or not, and returns this data all at once.
#
# The plugin returns the following custom attributes on Docker containers:
#    docker_args
#    docker_config
#    docker_created
#    docker_driver
#    docker_exec_driver
#    docker_host_config
#    docker_hostname_path
#    docker_hosts_path
#    docker_id
#    docker_image
#    docker_name
#    docker_network_settings
#    docker_path
#    docker_resolv_conf_path
#    docker_state
#    docker_volumes
#    docker_volumes_rw
#
# Requirements:
# The docker-py module: https://github.com/dotcloud/docker-py
#
# Notes:
# A config file can be used to configure this inventory module, and there
# are several environment variables that can be set to modify the behavior
# of the plugin at runtime:
#    DOCKER_CONFIG_FILE
#    DOCKER_HOST
#    DOCKER_VERSION
#    DOCKER_TIMEOUT
#    DOCKER_PRIVATE_SSH_PORT
#    DOCKER_DEFAULT_IP
#
# Environment Variables:
# environment variable: DOCKER_CONFIG_FILE
#     description:
#         - A path to a Docker inventory hosts/defaults file in YAML format
#         - A sample file has been provided, colocated with the inventory
#           file called 'docker.yml'
#     required: false
#     default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_HOST
#     description:
#         - The socket on which to connect to a Docker daemon API
#     required: false
#     default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_VERSION
#     description:
#         - Version of the Docker API to use
#     default: Uses docker.docker.Client constructor defaults
#     required: false
# environment variable: DOCKER_TIMEOUT
#     description:
#         - Timeout in seconds for connections to Docker daemon API
#     default: Uses docker.docker.Client constructor defaults
#     required: false
# environment variable: DOCKER_PRIVATE_SSH_PORT
#     description:
#         - The private port (container port) on which SSH is listening
#           for connections
#     default: 22
#     required: false
# environment variable: DOCKER_DEFAULT_IP
#     description:
#         - This environment variable overrides the container SSH connection
#           IP address (aka, 'ansible_ssh_host')
#
#           This option allows one to override the ansible_ssh_host whenever
#           Docker has exercised its default behavior of binding private
#           ports to all interfaces of the Docker host.  This behavior, when
#           dealing with remote Docker hosts, does not allow Ansible to
#           determine a proper host IP address on which to connect via SSH
#           to containers.  By default, this inventory module assumes all
#           IP_ADDRESS-exposed ports to be bound to localhost:<port>.
To override this # behavior, for example, to bind a container's SSH port to the public # interface of its host, one must manually set this IP. # # It is preferable to launch Docker containers with # ports exposed on publicly accessible IP addresses, particularly # if the containers are to be targeted by Ansible for remote # configuration and are not accessible via localhost SSH connections. # # Docker containers can be explicitly exposed on IP addresses by # a) starting the daemon with the --ip argument # b) running containers with the -P/--publish ip::containerPort # argument # default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker # required: false # # Examples: # Use the config file: # DOCKER_CONFIG_FILE=./docker.yml docker.py --list # # Connect to docker instance on localhost port 4243 # DOCKER_HOST=tcp://localhost:4243 docker.py --list # # Any container's ssh port exposed on IP_ADDRESS will be mapped to # another IP address (where Ansible will attempt to connect via SSH) # DOCKER_DEFAULT_IP=1.2.3.4 docker.py --list
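# As a rough sketch of how the environment variables above might be
# consumed (the helper name and dict layout here are illustrative,
# not the plugin's actual internals):
#
#     import os
#
#     def read_env_settings():
#         # None lets the docker.Client constructor defaults apply
#         return {
#             'config_file': os.environ.get('DOCKER_CONFIG_FILE'),
#             'base_url': os.environ.get('DOCKER_HOST'),
#             'version': os.environ.get('DOCKER_VERSION'),
#             'timeout': os.environ.get('DOCKER_TIMEOUT'),
#             'private_ssh_port': int(os.environ.get('DOCKER_PRIVATE_SSH_PORT', 22)),
#             'default_ip': os.environ.get('DOCKER_DEFAULT_IP'),
#         }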
# #!/usr/bin/env python3 # # NOT USED FOR NOW (WE USE FTP INSTEAD OF SFTP) # # """Define a class that deals with the low-level SFTP manager.""" # # from socket import timeout # import paramiko # # # class SFTPError(Exception): # def __init__(self, message): # self.message = message # def __str__(self): # return repr('SFTP error: ' + self.message) # # # class SFTPManager(object): # """Class to connect to an SFTP server with SSH using paramiko. # # :param host: server hostname # :type host: str # :param user: username to use for connection # :type user: str # :param password: password to use for connection # :type password: str # # """ # def __init__(self, host, user='', password='', port=2222): # self.host = host # self.user = user # self.password = password # self.port = port # # self.transport = None # self.sftp = None # # def connection(self): # """Connect to server.""" # try: # self.transport = paramiko.Transport((self.host, self.port)) # self.transport.connect(username=self.user, password=self.password) # self.sftp = paramiko.SFTPClient.from_transport(self.transport) # except timeout: # raise SFTPError('Timeout error') # except Exception as error: # raise SFTPError('connect; ' + str(error)) # # def disconnect(self): # """Close connection to sftp server.""" # self.sftp.close() # self.transport.close() # self.sftp = None # self.transport = None # # def cd(self, path): # """Set the current directory on the server. # # :param path: path to set # :type path: str # # """ # try: # self.sftp.chdir(path) # except Exception as error: # raise SFTPError('cd; {} {}'.format(path, str(error))) # # def mkdir(self, dirname): # """Create directory.""" # try: # self.sftp.mkdir(dirname) # except Exception as error: # raise SFTPError('mkdir; ' + str(error)) # # def listdir(self, path='.'): # """Return a list containing the names of the entries in the given path.""" # try: # result = self.sftp.listdir(path) # except Exception as error: # raise SFTPError('listdir; ' + str(error)) # else: # return result # # def listdir_attr(self, path='.'): # """List the given path. # # Return the names and other information about entries. # # """ # try: # result = self.sftp.listdir_attr(path) # except Exception as error: # raise SFTPError('listdir_attr; ' + str(error)) # else: # return result # # def put(self, local_filename, server_filename): # """Upload a file to the server. # # :param local_filename: local filename to upload # :type local_filename: str # :param server_filename: server filename to upload # :type server_filename: str # # """ # try: # self.sftp.put(local_filename, server_filename) # except Exception as error: # raise SFTPError('upload; ' + str(error)) # # def get(self, local_filename, server_filename): # """Download a file from the server. # # :param local_filename: local filename to create # :type local_filename: str # :param server_filename: server filename to download # :type server_filename: str # # """ # try: # self.sftp.get(server_filename, local_filename) # except Exception as error: # raise SFTPError('download; ' + str(error)) # # def countfiles(self, path='.'): # """Count the files in the given path. # # :param path: path to count # :type path: str # :return: number of files # # """ # nb_files = 0 # infos = self.listdir_attr(path) # for info in infos: # if '.' in info.filename: # nb_files += 1 # else: # nb_files += self.countfiles(path + '/' + info.filename) # return nb_files
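# # A hedged usage sketch for the manager above, were it re-enabled
# # (host, credentials and file names are placeholders):
# #
# # manager = SFTPManager('sftp.example.com', user='me', password='secret')
# # manager.connection()
# # try:
# #     manager.cd('/upload')
# #     manager.put('report.csv', 'report.csv')
# #     print(manager.countfiles('.'))
# # finally:
# #     manager.disconnect()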
"""This module tests SyntaxErrors. Here's an example of the sort of thing that is tested. >>> def f(x): ... global x Traceback (most recent call last): SyntaxError: name 'x' is local and global (<doctest test.test_syntax[0]>, line 1) The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and symtable.c. The parser itself outlaws a lot of invalid syntax. None of these errors are tested here at the moment. We should add some tests; since there are infinitely many programs with invalid syntax, we would need to be judicious in selecting some. The compiler generates a synthetic module name for code executed by doctest. Since all the code comes from the same module, a suffix like [1] is appended to the module name, As a consequence, changing the order of tests in this module means renumbering all the errors after it. (Maybe we should enable the ellipsis option for these tests.) In ast.c, syntax errors are raised by calling ast_error(). Errors from set_context(): TODO(jhylton): "assignment to None" is inconsistent with other messages >>> obj.None = 1 Traceback (most recent call last): SyntaxError: assignment to None (<doctest test.test_syntax[1]>, line 1) >>> None = 1 Traceback (most recent call last): SyntaxError: assignment to None (<doctest test.test_syntax[2]>, line 1) It's a syntax error to assign to the empty tuple. Why isn't it an error to assign to the empty list? It will always raise some error at runtime. >>> () = 1 Traceback (most recent call last): SyntaxError: can't assign to () (<doctest test.test_syntax[3]>, line 1) >>> f() = 1 Traceback (most recent call last): SyntaxError: can't assign to function call (<doctest test.test_syntax[4]>, line 1) >>> del f() Traceback (most recent call last): SyntaxError: can't delete function call (<doctest test.test_syntax[5]>, line 1) >>> a + 1 = 2 Traceback (most recent call last): SyntaxError: can't assign to operator (<doctest test.test_syntax[6]>, line 1) >>> (x for x in x) = 1 Traceback (most recent call last): SyntaxError: can't assign to generator expression (<doctest test.test_syntax[7]>, line 1) >>> 1 = 1 Traceback (most recent call last): SyntaxError: can't assign to literal (<doctest test.test_syntax[8]>, line 1) >>> "abc" = 1 Traceback (most recent call last): SyntaxError: can't assign to literal (<doctest test.test_syntax[9]>, line 1) >>> `1` = 1 Traceback (most recent call last): SyntaxError: can't assign to repr (<doctest test.test_syntax[10]>, line 1) If the left-hand side of an assignment is a list or tuple, an illegal expression inside that contain should still cause a syntax error. This test just checks a couple of cases rather than enumerating all of them. >>> (a, "b", c) = (1, 2, 3) Traceback (most recent call last): SyntaxError: can't assign to literal (<doctest test.test_syntax[11]>, line 1) >>> [a, b, c + 1] = [1, 2, 3] Traceback (most recent call last): SyntaxError: can't assign to operator (<doctest test.test_syntax[12]>, line 1) >>> a if 1 else b = 1 Traceback (most recent call last): SyntaxError: can't assign to conditional expression (<doctest test.test_syntax[13]>, line 1) From compiler_complex_args(): >>> def f(None=1): ... pass Traceback (most recent call last): SyntaxError: assignment to None (<doctest test.test_syntax[14]>, line 1) From ast_for_arguments(): >>> def f(x, y=1, z): ... 
pass Traceback (most recent call last): SyntaxError: non-default argument follows default argument (<doctest test.test_syntax[15]>, line 1) >>> def f(x, None): ... pass Traceback (most recent call last): SyntaxError: assignment to None (<doctest test.test_syntax[16]>, line 1) >>> def f(*None): ... pass Traceback (most recent call last): SyntaxError: assignment to None (<doctest test.test_syntax[17]>, line 1) >>> def f(**None): ... pass Traceback (most recent call last): SyntaxError: assignment to None (<doctest test.test_syntax[18]>, line 1) From ast_for_funcdef(): >>> def None(x): ... pass Traceback (most recent call last): SyntaxError: assignment to None (<doctest test.test_syntax[19]>, line 1) From ast_for_call(): >>> def f(it, *varargs): ... return list(it) >>> L = range(10) >>> f(x for x in L) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(x for x in L, 1) Traceback (most recent call last): SyntaxError: Generator expression must be parenthesized if not sole argument (<doctest test.test_syntax[23]>, line 1) >>> f((x for x in L), 1) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... i244, i245, i246, i247, i248, i249, i250, i251, i252, ... i253, i254, i255) Traceback (most recent call last): SyntaxError: more than 255 arguments (<doctest test.test_syntax[25]>, line 1) The actual error case counts positional arguments, keyword arguments, and generator expression arguments separately. This test combines the three. >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ...
i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... (x for x in i244), i245, i246, i247, i248, i249, i250, i251, ... i252=1, i253=1, i254=1, i255=1) Traceback (most recent call last): SyntaxError: more than 255 arguments (<doctest test.test_syntax[26]>, line 1) >>> f(lambda x: x[0] = 3) Traceback (most recent call last): SyntaxError: lambda cannot contain assignment (<doctest test.test_syntax[27]>, line 1) The grammar accepts any test (basically, any expression) in the keyword slot of a call site. Test a few different options. >>> f(x()=2) Traceback (most recent call last): SyntaxError: keyword can't be an expression (<doctest test.test_syntax[28]>, line 1) >>> f(a or b=1) Traceback (most recent call last): SyntaxError: keyword can't be an expression (<doctest test.test_syntax[29]>, line 1) >>> f(x.y=1) Traceback (most recent call last): SyntaxError: keyword can't be an expression (<doctest test.test_syntax[30]>, line 1) From ast_for_expr_stmt(): >>> (x for x in x) += 1 Traceback (most recent call last): SyntaxError: augmented assignment to generator expression not possible (<doctest test.test_syntax[31]>, line 1) >>> None += 1 Traceback (most recent call last): SyntaxError: assignment to None (<doctest test.test_syntax[32]>, line 1) >>> f() += 1 Traceback (most recent call last): SyntaxError: illegal expression for augmented assignment (<doctest test.test_syntax[33]>, line 1) Test continue in finally in weird combinations. continue in for loop under finally should be ok. >>> def test(): ... try: ... pass ... finally: ... for abc in range(10): ... continue ... print abc >>> test() 9 Start simple, a continue in a finally should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[36]>, line 6) This is essentially a continue in a finally which should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... try: ... continue ... except: ... pass Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[37]>, line 7) >>> def foo(): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[38]>, line 5) >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[39]>, line 6) >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... try: ... continue ... finally: ...
pass Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[40]>, line 7) >>> def foo(): ... for a in (): ... try: pass ... finally: ... try: ... pass ... except: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[41]>, line 8) There is one test for a break that is not in a loop. The compiler uses a single data structure to keep track of try-finally and loops, so we need to be sure that a break is actually inside a loop. If it isn't, there should be a syntax error. >>> try: ... print 1 ... break ... print 2 ... finally: ... print 3 Traceback (most recent call last): ... SyntaxError: 'break' outside loop (<doctest test.test_syntax[42]>, line 3) This should probably raise a better error than a SystemError (or none at all). In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 >>> while 1: ... while 2: ... while 3: ... while 4: ... while 5: ... while 6: ... while 8: ... while 9: ... while 10: ... while 11: ... while 12: ... while 13: ... while 14: ... while 15: ... while 16: ... while 17: ... while 18: ... while 19: ... while 20: ... while 21: ... while 22: ... break Traceback (most recent call last): ... SystemError: too many statically nested blocks This tests assignment-context; there was a bug in Python 2.5 where compiling a complex 'if' (one with 'elif') would fail to notice an invalid suite, leading to spurious errors. >>> if 1: ... x() = 1 ... elif 1: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call (<doctest test.test_syntax[44]>, line 2) >>> if 1: ... pass ... elif 1: ... x() = 1 Traceback (most recent call last): ... SyntaxError: can't assign to function call (<doctest test.test_syntax[45]>, line 4) >>> if 1: ... x() = 1 ... elif 1: ... pass ... else: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call (<doctest test.test_syntax[46]>, line 2) >>> if 1: ... pass ... elif 1: ... x() = 1 ... else: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call (<doctest test.test_syntax[47]>, line 4) >>> if 1: ... pass ... elif 1: ... pass ... else: ... x() = 1 Traceback (most recent call last): ... SyntaxError: can't assign to function call (<doctest test.test_syntax[48]>, line 6) >>> f(a=23, a=234) Traceback (most recent call last): ... SyntaxError: keyword argument repeated (<doctest test.test_syntax[49]>, line 1) """
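A compact, non-doctest way to exercise the same checks is to call ``compile()`` directly; a minimal sketch (ours, not part of the test suite; only the exception type is asserted because the message text varies across versions)::

    def assert_syntax_error(source):
        # compiling, not executing, is enough to trigger a SyntaxError
        try:
            compile(source, '<test>', 'exec')
        except SyntaxError:
            return
        raise AssertionError('no SyntaxError for %r' % source)

    assert_syntax_error('f() = 1')
    assert_syntax_error('del f()')
    assert_syntax_error('a + 1 = 2')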
""" Simple config ============= Although CherryPy uses the :mod:`Python logging module <logging>`, it does so behind the scenes so that simple logging is simple, but complicated logging is still possible. "Simple" logging means that you can log to the screen (i.e. console/stdout) or to a file, and that you can easily have separate error and access log files. Here are the simplified logging settings. You use these by adding lines to your config file or dict. You should set these at either the global level or per application (see next), but generally not both. * ``log.screen``: Set this to True to have both "error" and "access" messages printed to stdout. * ``log.access_file``: Set this to an absolute filename where you want "access" messages written. * ``log.error_file``: Set this to an absolute filename where you want "error" messages written. Many events are automatically logged; to log your own application events, call :func:`cherrypy.log`. Architecture ============ Separate scopes --------------- CherryPy provides log managers at both the global and application layers. This means you can have one set of logging rules for your entire site, and another set of rules specific to each application. The global log manager is found at :func:`cherrypy.log`, and the log manager for each application is found at :attr:`app.log<cherrypy._cptree.Application.log>`. If you're inside a request, the latter is reachable from ``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain a reference to the ``app``: either the return value of :func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used :func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``. By default, the global logs are named "cherrypy.error" and "cherrypy.access", and the application logs are named "cherrypy.error.2378745" and "cherrypy.access.2378745" (the number is the id of the Application object). This means that the application logs "bubble up" to the site logs, so if your application has no log handlers, the site-level handlers will still log the messages. Errors vs. Access ----------------- Each log manager handles both "access" messages (one per HTTP request) and "error" messages (everything else). Note that the "error" log is not just for errors! The format of access messages is highly formalized, but the error log isn't--it receives messages from a variety of sources (including full error tracebacks, if enabled). If you are logging the access log and error log to the same source, then there is a possibility that a specially crafted error message may replicate an access log message as described in CWE-117. In this case it is the application developer's responsibility to manually escape data before using CherryPy's log() functionality, or they may create an application that is vulnerable to CWE-117. This would be achieved by using a custom handler escape any special characters, and attached as described below. Custom Handlers =============== The simple settings above work by manipulating Python's standard :mod:`logging` module. So when you need something more complex, the full power of the standard module is yours to exploit. You can borrow or create custom handlers, formats, filters, and much more. Here's an example that skips the standard FileHandler and uses a RotatingFileHandler instead: :: #python log = app.log # Remove the default FileHandlers if present. 
log.error_file = "" log.access_file = "" maxBytes = getattr(log, "rot_maxBytes", 10000000) backupCount = getattr(log, "rot_backupCount", 1000) # Make a new RotatingFileHandler for the error log. fname = getattr(log, "rot_error_file", "error.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.error_log.addHandler(h) # Make a new RotatingFileHandler for the access log. fname = getattr(log, "rot_access_file", "access.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.access_log.addHandler(h) The ``rot_*`` attributes are pulled straight from the application log object. Since "log.*" config entries simply set attributes on the log object, you can add custom attributes to your heart's content. Note that these handlers are used ''instead'' of the default, simple handlers outlined above (so don't set the "log.error_file" config entry, for example). """
# # Written for Theano 0.6 and 0.7, needs some changes for more recent # # versions of Theano. # # # #### Libraries # # Standard library # import cPickle # import gzip # # # Third-party libraries # import numpy as np # import theano # import theano.tensor as T # from theano.tensor.nnet import conv # from theano.tensor.nnet import softmax # from theano.tensor import shared_randomstreams # from theano.tensor.signal import downsample # # # Activation functions for neurons # def linear(z): return z # def ReLU(z): return T.maximum(0.0, z) # from theano.tensor.nnet import sigmoid # from theano.tensor import tanh # # # #### Constants # GPU = True # if GPU: # print "Trying to run under a GPU. If this is not desired, then modify "+\ # "network3.py\nto set the GPU flag to False." # try: theano.config.device = 'gpu' # except: pass # it's already set # theano.config.floatX = 'float32' # else: # print "Running with a CPU. If this is not desired, then modify "+\ # "network3.py to set\nthe GPU flag to True." # # #### Load the MNIST data # def load_data_shared(filename="../data/mnist.pkl.gz"): # f = gzip.open(filename, 'rb') # training_data, validation_data, test_data = cPickle.load(f) # f.close() # def shared(data): # """Place the data into shared variables. This allows Theano to copy # the data to the GPU, if one is available. # """ # shared_x = theano.shared( # np.asarray(data[0], dtype=theano.config.floatX), borrow=True) # shared_y = theano.shared( # np.asarray(data[1], dtype=theano.config.floatX), borrow=True) # return shared_x, T.cast(shared_y, "int32") # return [shared(training_data), shared(validation_data), shared(test_data)] # # #### Main class used to construct and train networks # class Network(object): # # def __init__(self, layers, mini_batch_size): # """Takes a list of `layers`, describing the network architecture, and # a value for the `mini_batch_size` to be used during training # by stochastic gradient descent. # """ # self.layers = layers # self.mini_batch_size = mini_batch_size # self.params = [param for layer in self.layers for param in layer.params] # self.x = T.matrix("x") # self.y = T.ivector("y") # init_layer = self.layers[0] # init_layer.set_inpt(self.x, self.x, self.mini_batch_size) # for j in xrange(1, len(self.layers)): # prev_layer, layer = self.layers[j-1], self.layers[j] # layer.set_inpt( # prev_layer.output, prev_layer.output_dropout, self.mini_batch_size) # self.output = self.layers[-1].output # self.output_dropout = self.layers[-1].output_dropout # # def SGD(self, training_data, epochs, mini_batch_size, eta, # validation_data, test_data, lmbda=0.0): # """Train the network using mini-batch stochastic gradient descent.""" # training_x, training_y = training_data # validation_x, validation_y = validation_data # test_x, test_y = test_data # # # compute number of minibatches for training, validation and testing # num_training_batches = size(training_data)/mini_batch_size # num_validation_batches = size(validation_data)/mini_batch_size # num_test_batches = size(test_data)/mini_batch_size # # # define the (regularized) cost function, symbolic gradients, and updates # l2_norm_squared = sum([(layer.w**2).sum() for layer in self.layers]) # cost = self.layers[-1].cost(self)+\ # 0.5*lmbda*l2_norm_squared/num_training_batches # grads = T.grad(cost, self.params) # updates = [(param, param-eta*grad) # for param, grad in zip(self.params, grads)] # # # define functions to train a mini-batch, and to compute the # # accuracy in validation and test mini-batches.
# i = T.lscalar() # mini-batch index # train_mb = theano.function( # [i], cost, updates=updates, # givens={ # self.x: # training_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], # self.y: # training_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] # }) # validate_mb_accuracy = theano.function( # [i], self.layers[-1].accuracy(self.y), # givens={ # self.x: # validation_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], # self.y: # validation_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] # }) # test_mb_accuracy = theano.function( # [i], self.layers[-1].accuracy(self.y), # givens={ # self.x: # test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], # self.y: # test_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] # }) # self.test_mb_predictions = theano.function( # [i], self.layers[-1].y_out, # givens={ # self.x: # test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size] # }) # # Do the actual training # best_validation_accuracy = 0.0 # for epoch in xrange(epochs): # for minibatch_index in xrange(num_training_batches): # iteration = num_training_batches*epoch+minibatch_index # if iteration % 1000 == 0: # print("Training mini-batch number {0}".format(iteration)) # cost_ij = train_mb(minibatch_index) # if (iteration+1) % num_training_batches == 0: # validation_accuracy = np.mean( # [validate_mb_accuracy(j) for j in xrange(num_validation_batches)]) # print("Epoch {0}: validation accuracy {1:.2%}".format( # epoch, validation_accuracy)) # if validation_accuracy >= best_validation_accuracy: # print("This is the best validation accuracy to date.") # best_validation_accuracy = validation_accuracy # best_iteration = iteration # if test_data: # test_accuracy = np.mean( # [test_mb_accuracy(j) for j in xrange(num_test_batches)]) # print('The corresponding test accuracy is {0:.2%}'.format( # test_accuracy)) # print("Finished training network.") # print("Best validation accuracy of {0:.2%} obtained at iteration {1}".format( # best_validation_accuracy, best_iteration)) # print("Corresponding test accuracy of {0:.2%}".format(test_accuracy)) # # #### Define layer types # # class ConvPoolLayer(object): # Used to create a combination of a convolutional and a max-pooling # layer. A more sophisticated implementation would separate the # two, but for our purposes we'll always use them together, and it # simplifies the code, so it makes sense to combine them. # # # def __init__(self, filter_shape, image_shape, poolsize=(2, 2), # activation_fn=sigmoid): # `filter_shape` is a tuple of length 4, whose entries are the number # of filters, the number of input feature maps, the filter height, and the # filter width. # `image_shape` is a tuple of length 4, whose entries are the # mini-batch size, the number of input feature maps, the image # height, and the image width. # `poolsize` is a tuple of length 2, whose entries are the y and # x pooling sizes. 
# # self.filter_shape = filter_shape # self.image_shape = image_shape # self.poolsize = poolsize # self.activation_fn=activation_fn # # initialize weights and biases # n_out = (filter_shape[0]*np.prod(filter_shape[2:])/np.prod(poolsize)) # self.w = theano.shared( # np.asarray( # np.random.normal(loc=0, scale=np.sqrt(1.0/n_out), size=filter_shape), # dtype=theano.config.floatX), # borrow=True) # self.b = theano.shared( # np.asarray( # np.random.normal(loc=0, scale=1.0, size=(filter_shape[0],)), # dtype=theano.config.floatX), # borrow=True) # self.params = [self.w, self.b] # # def set_inpt(self, inpt, inpt_dropout, mini_batch_size): # self.inpt = inpt.reshape(self.image_shape) # conv_out = conv.conv2d( # input=self.inpt, filters=self.w, filter_shape=self.filter_shape, # image_shape=self.image_shape) # pooled_out = downsample.max_pool_2d( # input=conv_out, ds=self.poolsize, ignore_border=True) # self.output = self.activation_fn( # pooled_out + self.b.dimshuffle('x', 0, 'x', 'x')) # self.output_dropout = self.output # no dropout in the convolutional layers # # class FullyConnectedLayer(object): # # def __init__(self, n_in, n_out, activation_fn=sigmoid, p_dropout=0.0): # self.n_in = n_in # self.n_out = n_out # self.activation_fn = activation_fn # self.p_dropout = p_dropout # # Initialize weights and biases # self.w = theano.shared( # np.asarray( # np.random.normal( # loc=0.0, scale=np.sqrt(1.0/n_out), size=(n_in, n_out)), # dtype=theano.config.floatX), # name='w', borrow=True) # self.b = theano.shared( # np.asarray(np.random.normal(loc=0.0, scale=1.0, size=(n_out,)), # dtype=theano.config.floatX), # name='b', borrow=True) # self.params = [self.w, self.b] # # def set_inpt(self, inpt, inpt_dropout, mini_batch_size): # self.inpt = inpt.reshape((mini_batch_size, self.n_in)) # self.output = self.activation_fn( # (1-self.p_dropout)*T.dot(self.inpt, self.w) + self.b) # self.y_out = T.argmax(self.output, axis=1) # self.inpt_dropout = dropout_layer( # inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout) # self.output_dropout = self.activation_fn( # T.dot(self.inpt_dropout, self.w) + self.b) # # def accuracy(self, y): # "Return the accuracy for the mini-batch." # return T.mean(T.eq(y, self.y_out)) # # class SoftmaxLayer(object): # # def __init__(self, n_in, n_out, p_dropout=0.0): # self.n_in = n_in # self.n_out = n_out # self.p_dropout = p_dropout # # Initialize weights and biases # self.w = theano.shared( # np.zeros((n_in, n_out), dtype=theano.config.floatX), # name='w', borrow=True) # self.b = theano.shared( # np.zeros((n_out,), dtype=theano.config.floatX), # name='b', borrow=True) # self.params = [self.w, self.b] # # def set_inpt(self, inpt, inpt_dropout, mini_batch_size): # self.inpt = inpt.reshape((mini_batch_size, self.n_in)) # self.output = softmax((1-self.p_dropout)*T.dot(self.inpt, self.w) + self.b) # self.y_out = T.argmax(self.output, axis=1) # self.inpt_dropout = dropout_layer( # inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout) # self.output_dropout = softmax(T.dot(self.inpt_dropout, self.w) + self.b) # # def cost(self, net): # "Return the log-likelihood cost." # return -T.mean(T.log(self.output_dropout)[T.arange(net.y.shape[0]), net.y]) # # def accuracy(self, y): # "Return the accuracy for the mini-batch." # return T.mean(T.eq(y, self.y_out)) # # # #### Miscellanea # def size(data): # "Return the size of the dataset `data`." 
# return data[0].get_value(borrow=True).shape[0] # # def dropout_layer(layer, p_dropout): # srng = shared_randomstreams.RandomStreams( # np.random.RandomState(0).randint(999999)) # mask = srng.binomial(n=1, p=1-p_dropout, size=layer.shape)
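# # For intuition, the masking idea used by dropout_layer above in plain
# # NumPy (a sketch, not part of the original script):
# #
# # import numpy as np
# # rng = np.random.RandomState(0)
# # layer = np.ones((4, 3))
# # p_dropout = 0.5
# # mask = rng.binomial(n=1, p=1-p_dropout, size=layer.shape)
# # dropped = layer*mask              # training-time activations
# # scaled = (1-p_dropout)*layer      # test-time scaling, as in set_inpt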
""" Components/Banner ================= .. seealso:: `Material Design spec, Banner <https://material.io/components/banners>`_ .. rubric:: A banner displays a prominent message and related optional actions. .. image:: https://github.com/HeaTTheatR/KivyMD-data/raw/master/gallery/kivymddoc/banner.png :align: center Usage ===== .. code-block:: python from kivy.lang import Builder from kivy.factory import Factory from kivymd.app import MDApp Builder.load_string(''' <ExampleBanner@Screen> MDBanner: id: banner text: ["One line string text example without actions."] # The widget that is under the banner. # It will be shifted down to the height of the banner. over_widget: screen vertical_pad: toolbar.height MDToolbar: id: toolbar title: "Example Banners" elevation: 10 pos_hint: {'top': 1} BoxLayout: id: screen orientation: "vertical" size_hint_y: None height: Window.height - toolbar.height OneLineListItem: text: "Banner without actions" on_release: banner.show() Widget: ''') class Test(MDApp): def build(self): return Factory.ExampleBanner() Test().run() .. image:: https://github.com/HeaTTheatR/KivyMD-data/raw/master/gallery/kivymddoc/banner-example-1.gif :align: center .. rubric:: Banner type. By default, the banner is of the type ``'one-line'``: .. code-block:: kv MDBanner: text: ["One line string text example without actions."] .. image:: https://github.com/HeaTTheatR/KivyMD-data/raw/master/gallery/kivymddoc/banner-one-line.png :align: center To use a two-line banner, specify the ``'two-line'`` :attr:`MDBanner.type` for the banner and pass the list of two lines to the :attr:`MDBanner.text` parameter: .. code-block:: kv MDBanner: type: "two-line" text: ["One line string text example without actions.", "This is the second line of the banner message."] .. image:: https://github.com/HeaTTheatR/KivyMD-data/raw/master/gallery/kivymddoc/banner-two-line.png :align: center Similarly, create a three-line banner: .. code-block:: kv MDBanner: type: "three-line" text: ["One line string text example without actions.", "This is the second line of the banner message.", "and this is the third line of the banner message."] .. image:: https://github.com/HeaTTheatR/KivyMD-data/raw/master/gallery/kivymddoc/banner-three-line.png :align: center To add buttons to any type of banner, use the :attr:`MDBanner.left_action` and :attr:`MDBanner.right_action` parameters, which should take a list ['Button name', function]: .. code-block:: kv MDBanner: text: ["One line string text example without actions."] left_action: ["CANCEL", lambda x: None] Or two buttons: .. code-block:: kv MDBanner: text: ["One line string text example without actions."] left_action: ["CANCEL", lambda x: None] right_action: ["CLOSE", lambda x: None] .. image:: https://github.com/HeaTTheatR/KivyMD-data/raw/master/gallery/kivymddoc/banner-actions.png :align: center If you want to use the icon on the left in the banner, add the prefix `'-icon'` to the banner type: .. code-block:: kv MDBanner: type: "one-line-icon" icon: f"{images_path}/kivymd.png" text: ["One line string text example without actions."] .. image:: https://github.com/HeaTTheatR/KivyMD-data/raw/master/gallery/kivymddoc/banner-icon.png :align: center .. Note:: `See full example <https://github.com/kivymd/KivyMD/wiki/Components-Banner>`_ """
""" Low-level LAPACK functions ========================== This module contains low-level functions from the LAPACK library. .. versionadded:: 0.12.0 .. warning:: These functions do little to no error checking. It is possible to cause crashes by mis-using them, so prefer using the higher-level routines in `scipy.linalg`. Finding functions ================= .. autosummary:: get_lapack_funcs All functions ============= .. autosummary:: :toctree: generated/ sgbsv dgbsv cgbsv zgbsv sgbtrf dgbtrf cgbtrf zgbtrf sgbtrs dgbtrs cgbtrs zgbtrs sgebal dgebal cgebal zgebal sgees dgees cgees zgees sgeev dgeev cgeev zgeev sgeev_lwork dgeev_lwork cgeev_lwork zgeev_lwork sgegv dgegv cgegv zgegv sgehrd dgehrd cgehrd zgehrd sgehrd_lwork dgehrd_lwork cgehrd_lwork zgehrd_lwork sgelss dgelss cgelss zgelss sgelss_lwork dgelss_lwork cgelss_lwork zgelss_lwork sgelsd dgelsd cgelsd zgelsd sgelsd_lwork dgelsd_lwork cgelsd_lwork zgelsd_lwork sgelsy dgelsy cgelsy zgelsy sgelsy_lwork dgelsy_lwork cgelsy_lwork zgelsy_lwork sgeqp3 dgeqp3 cgeqp3 zgeqp3 sgeqrf dgeqrf cgeqrf zgeqrf sgerqf dgerqf cgerqf zgerqf sgesdd dgesdd cgesdd zgesdd sgesdd_lwork dgesdd_lwork cgesdd_lwork zgesdd_lwork sgesv dgesv cgesv zgesv sgetrf dgetrf cgetrf zgetrf sgetri dgetri cgetri zgetri sgetri_lwork dgetri_lwork cgetri_lwork zgetri_lwork sgetrs dgetrs cgetrs zgetrs sgges dgges cgges zgges sggev dggev cggev zggev chbevd zhbevd chbevx zhbevx cheev zheev cheevd zheevd cheevr zheevr chegv zhegv chegvd zhegvd chegvx zhegvx slarf dlarf clarf zlarf slarfg dlarfg clarfg zlarfg slartg dlartg clartg zlartg dlasd4 slasd4 slaswp dlaswp claswp zlaswp slauum dlauum clauum zlauum spbsv dpbsv cpbsv zpbsv spbtrf dpbtrf cpbtrf zpbtrf spbtrs dpbtrs cpbtrs zpbtrs sposv dposv cposv zposv spotrf dpotrf cpotrf zpotrf spotri dpotri cpotri zpotri spotrs dpotrs cpotrs zpotrs crot zrot strsyl dtrsyl ctrsyl ztrsyl strtri dtrtri ctrtri ztrtri strtrs dtrtrs ctrtrs ztrtrs cunghr zunghr cungqr zungqr cungrq zungrq cunmqr zunmqr sgtsv dgtsv cgtsv zgtsv sptsv dptsv cptsv zptsv slamch dlamch sorghr dorghr sorgqr dorgqr sorgrq dorgrq sormqr dormqr ssbev dsbev ssbevd dsbevd ssbevx dsbevx ssyev dsyev ssyevd dsyevd ssyevr dsyevr ssygv dsygv ssygvd dsygvd ssygvx dsygvx slange dlange clange zlange """
""" [2016-03-23] Challenge #259 [Intermediate] Mahjong Hands https://www.reddit.com/r/dailyprogrammer/comments/4bmdwz/20160323_challenge_259_intermediate_mahjong_hands/ # Description You are the biggest, baddest mahjong player around. Your enemies tremble at your presence on the battlefield, and you can barely walk ten steps before a fan begs you for an autograph. However, you have a dark secret that would ruin you if it ever came to light. You're terrible at determining whether a hand is a winning hand. For now, you've been able to bluff and bluster your way, but you know that one day you won't be able to get away with it. As such, you've decided to write a program to assist you! ## Further Details Mahjong (not to be confused with [mahjong solitaire](http://en.wikipedia.org/wiki/Mahjong_solitaire)) is a game where hands are composed from combinations of tiles. There are a number of variants of mahjong, but for this challenge, we will consider a simplified variant of Japanese Mahjong which is also known as NAME Basic Version There are three suits in this variant, "Bamboo", "Circle" and "Character". Every tile that belongs to these suits has a value that ranges from 1 - 9. To complete a hand, tiles are organised into groups. If every tile in a hand belongs to a single group (and each tile can only be used once), the hand is a winning hand. For now, we shall consider the groups "Pair", "Set" and "Sequence". They are composed as follows: Pair - Two tiles with the same suit and value Set - Three tiles with the same suit and value Sequence - Three tiles with the same suit, and which increment in value, such as "Circle 2, Circle 3, Circle 4". There is no value wrapping so "Circle 9, Circle 1, Circle 2" would not be considered valid. A hand is composed of 14 tiles. ## Bonus 1 - Adding Quads There is actually a fourth group called a "Quad". It is just like a pair and a set, except it is composed of four tiles. What makes this group special is that a hand containing quads will actually have a hand larger than 14, 1 for every quad. This is fine, as long as there is *1, and only 1 pair*. ## Bonus 2 - Adding Honour Tiles In addition to the tiles belonging to the three suits, there are 7 additional tiles. These tiles have no value, and are collectively known as "honour" tiles. As they have no value, they cannot be members of a sequence. Furthermore, they can only be part of a set or pair with tiles that are exactly the same. For example, "Red Dragon, Red Dragon, Red Dragon" would be a valid set, but "Red Dragon, Green Dragon, Red Dragon" would not. These additional tiles are: * Green Dragon * Red Dragon * White Dragon * North Wind * East Wind * South Wind * West Wind ## Bonus 3 - Seven Pairs There are a number of special hands that are an exception to the above rules. One such hand is "Seven Pairs". As the name suggests, it is a hand composed of seven pairs. # Formal Inputs & Outputs ## Input description ### Basic You will be provided with N on a single line, followed by N lines of the following format: <tile suit>,<value> ### Bonus 2 In addition, the lines may be of the format: <honour tile> ## Output description You should output whether the hand is a winning hand or not. 
# Sample Inputs and Outputs ## Sample Input (Standard) 14 Circle,4 Circle,5 Circle,6 Bamboo,1 Bamboo,2 Bamboo,3 Character,2 Character,2 Character,2 Circle,1 Circle,1 Bamboo,7 Bamboo,8 Bamboo,9 ## Sample Output (Standard) Winning hand ## Sample Input (Standard) 14 Circle,4 Bamboo,1 Circle,5 Bamboo,2 Character,2 Bamboo,3 Character,2 Circle,6 Character,2 Circle,1 Bamboo,8 Circle,1 Bamboo,7 Bamboo,9 ## Sample Output (Standard) Winning hand ## Sample Input (Standard) 14 Circle,4 Circle,5 Circle,6 Circle,4 Circle,5 Circle,6 Circle,1 Circle,1 Bamboo,7 Bamboo,8 Bamboo,9 Circle,4 Circle,5 Circle,6 ## Sample Output (Standard) Winning hand ## Sample Input (Bonus 1) 15 Circle,4 Circle,5 Circle,6 Bamboo,1 Bamboo,2 Bamboo,3 Character,2 Character,2 Character,2 Character,2 Circle,1 Circle,1 Bamboo,7 Bamboo,8 Bamboo,9 ## Sample Output (Bonus 1) Winning hand ## Sample Input (Bonus 1) 16 Circle,4 Circle,5 Circle,6 Bamboo,1 Bamboo,2 Bamboo,3 Character,2 Character,2 Character,2 Character,2 Circle,1 Circle,1 Circle,1 Bamboo,7 Bamboo,8 Bamboo,9 ## Sample Output (Bonus 1) Not a winning hand ## Sample Input (Bonus 2) 14 Circle,4 Circle,5 Circle,6 Bamboo,1 Bamboo,2 Bamboo,3 Red Dragon Red Dragon Red Dragon Circle,1 Circle,1 Bamboo,7 Bamboo,8 Bamboo,9 ## Sample Output (Bonus 2) Winning hand ## Sample Input (Bonus 2) 14 Circle,4 Circle,5 Circle,6 Bamboo,1 Bamboo,2 Bamboo,3 Red Dragon Green Dragon White Dragon Circle,1 Circle,1 Bamboo,7 Bamboo,8 Bamboo,9 ## Sample Output (Bonus 2) Not a winning hand ## Sample Input (Bonus 3) 14 Circle,4 Circle,4 Character,5 Character,5 Bamboo,5 Bamboo,5 Circle,5 Circle,5 Circle,7 Circle,7 Circle,9 Circle,9 Circle,9 Circle,9 ## Sample Output (Bonus 3) Winning hand # Notes None of the bonus components depend on each other, and can be implemented in any order. The test cases do not presume completion of earlier bonus components. The order is just the recommended implementation order. Many thanks to Redditor /u/oketa for this submission to /r/dailyprogrammer_ideas. If you have any ideas, please submit them there! """
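A sketch of one way to attack the basic version (ours, ignoring the bonuses): fix the pair first, then anchor each remaining group on the smallest tile left, backtracking over the set-versus-sequence choice.

    from collections import Counter

    def is_winning(tiles):
        # tiles: list of (suit, value) pairs, basic 14-tile version
        def consume(counts):
            if not counts:
                return True
            tile = min(counts)            # smallest remaining tile
            suit, value = tile
            if counts[tile] >= 3:         # try a set anchored here
                c = counts.copy()
                c[tile] -= 3
                if not c[tile]:
                    del c[tile]
                if consume(c):
                    return True
            nxt, nxt2 = (suit, value + 1), (suit, value + 2)
            if counts.get(nxt) and counts.get(nxt2):   # try a sequence
                c = counts.copy()
                for t in (tile, nxt, nxt2):
                    c[t] -= 1
                    if not c[t]:
                        del c[t]
                if consume(c):
                    return True
            return False
        counts = Counter(tiles)
        for tile in set(counts):          # choose the pair, then recurse
            if counts[tile] >= 2:
                c = counts.copy()
                c[tile] -= 2
                if not c[tile]:
                    del c[tile]
                if consume(c):
                    return True
        return False

For example, the first standard sample decomposes as the pair Circle 1 plus four sets/sequences, so ``is_winning`` returns True for it.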
""" ============= Miscellaneous ============= IEEE 754 Floating Point Special Values -------------------------------------- Special values defined in numpy: nan, inf, NaNs can be used as a poor-man's mask (if you don't care what the original value was) Note: cannot use equality to test NaNs. E.g.: :: >>> myarr = np.array([1., 0., np.nan, 3.]) >>> np.where(myarr == np.nan) >>> np.nan == np.nan # is always False! Use special numpy functions instead. False >>> myarr[myarr == np.nan] = 0. # doesn't work >>> myarr array([ 1., 0., NaN, 3.]) >>> myarr[np.isnan(myarr)] = 0. # use this instead find >>> myarr array([ 1., 0., 0., 3.]) Other related special value functions: :: isinf(): True if value is inf isfinite(): True if not nan or inf nan_to_num(): Map nan to 0, inf to max float, -inf to min float The following corresponds to the usual functions except that nans are excluded from the results: :: nansum() nanmax() nanmin() nanargmax() nanargmin() >>> x = np.arange(10.) >>> x[3] = np.nan >>> x.sum() nan >>> np.nansum(x) 42.0 How numpy handles numerical exceptions -------------------------------------- The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow`` and ``'ignore'`` for ``underflow``. But this can be changed, and it can be set individually for different kinds of exceptions. The different behaviors are: - 'ignore' : Take no action when the exception occurs. - 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module). - 'raise' : Raise a `FloatingPointError`. - 'call' : Call a function specified using the `seterrcall` function. - 'print' : Print a warning directly to ``stdout``. - 'log' : Record error in a Log object specified by `seterrcall`. These behaviors can be set for all kinds of errors or specific ones: - all : apply to all numeric exceptions - invalid : when NaNs are generated - divide : divide by zero (for integers as well!) - overflow : floating point overflows - underflow : floating point underflows Note that integer divide-by-zero is handled by the same machinery. These behaviors are set on a per-thread basis. Examples -------- :: >>> oldsettings = np.seterr(all='warn') >>> np.zeros(5,dtype=np.float32)/0. invalid value encountered in divide >>> j = np.seterr(under='ignore') >>> np.array([1.e-100])**10 >>> j = np.seterr(invalid='raise') >>> np.sqrt(np.array([-1.])) FloatingPointError: invalid value encountered in sqrt >>> def errorhandler(errstr, errflag): ... print "saw stupid error!" >>> np.seterrcall(errorhandler) <function err_handler at 0x...> >>> j = np.seterr(all='call') >>> np.zeros(5, dtype=np.int32)/0 FloatingPointError: invalid value encountered in divide saw stupid error! >>> j = np.seterr(**oldsettings) # restore previous ... # error-handling settings Interfacing to C ---------------- Only a survey of the choices. Little detail on how each works. 1) Bare metal, wrap your own C-code manually. - Plusses: - Efficient - No dependencies on other tools - Minuses: - Lots of learning overhead: - need to learn basics of Python C API - need to learn basics of numpy C API - need to learn how to handle reference counting and love it. - Reference counting often difficult to get right. - getting it wrong leads to memory leaks, and worse, segfaults - API will change for Python 3.0! 
2) Cython - Plusses: - avoid learning C APIs - no dealing with reference counting - can code in pseudo python and generate C code - can also interface to existing C code - should shield you from changes to Python C api - has become the de-facto standard within the scientific Python community - fast indexing support for arrays - Minuses: - Can write code in non-standard form which may become obsolete - Not as flexible as manual wrapping 3) ctypes - Plusses: - part of Python standard library - good for interfacing to existing sharable libraries, particularly Windows DLLs - avoids API/reference counting issues - good numpy support: arrays have all these in their ctypes attribute: :: a.ctypes.data a.ctypes.get_strides a.ctypes.data_as a.ctypes.shape a.ctypes.get_as_parameter a.ctypes.shape_as a.ctypes.get_data a.ctypes.strides a.ctypes.get_shape a.ctypes.strides_as - Minuses: - can't use for writing code to be turned into C extensions, only a wrapper tool. 4) SWIG (automatic wrapper generator) - Plusses: - around a long time - multiple scripting language support - C++ support - Good for wrapping large (many functions) existing C libraries - Minuses: - generates lots of code between Python and the C code - can cause performance problems that are nearly impossible to optimize out - interface files can be hard to write - doesn't necessarily avoid reference counting issues or needing to know APIs 5) scipy.weave - Plusses: - can turn many numpy expressions into C code - dynamic compiling and loading of generated C code - can embed pure C code in Python module and have weave extract, generate interfaces and compile, etc. - Minuses: - Future very uncertain: it's the only part of Scipy not ported to Python 3 and is effectively deprecated in favor of Cython. 6) Psyco - Plusses: - Turns pure python into efficient machine code through jit-like optimizations - very fast when it optimizes well - Minuses: - Only on intel (windows?) - Doesn't do much for numpy? Interfacing to Fortran: ----------------------- The clear choice to wrap Fortran code is `f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_. Pyfort is an older alternative, but not supported any longer. Fwrap is a newer project that looked promising but isn't being developed any longer. Interfacing to C++: ------------------- 1) Cython 2) CXX 3) Boost.python 4) SWIG 5) SIP (used mainly in PyQT) """
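As a quick illustration of the error-handling knobs described earlier, the same settings can be scoped with the ``np.errstate`` context manager instead of paired ``seterr`` calls (a sketch)::

    import numpy as np

    with np.errstate(divide='ignore'):
        x = np.array([1.0, 2.0]) / 0.0      # inf, silently
    try:
        with np.errstate(invalid='raise'):
            np.sqrt(np.array([-1.0]))       # now a FloatingPointError
    except FloatingPointError as err:
        print('caught:', err)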
""" This is a procedural interface to the matplotlib object-oriented plotting library. The following plotting commands are provided; the majority have Matlab(TM) analogs and similar argument. _Plotting commands acorr - plot the autocorrelation function annotate - annotate something in the figure arrow - add an arrow to the axes axes - Create a new axes axhline - draw a horizontal line across axes axvline - draw a vertical line across axes axhspan - draw a horizontal bar across axes axvspan - draw a vertical bar across axes axis - Set or return the current axis limits bar - make a bar chart barh - a horizontal bar chart broken_barh - a set of horizontal bars with gaps box - set the axes frame on/off state boxplot - make a box and whisker plot cla - clear current axes clabel - label a contour plot clf - clear a figure window clim - adjust the color limits of the current image close - close a figure window colorbar - add a colorbar to the current figure cohere - make a plot of coherence contour - make a contour plot contourf - make a filled contour plot csd - make a plot of cross spectral density delaxes - delete an axes from the current figure draw - Force a redraw of the current figure errorbar - make an errorbar graph figlegend - make legend on the figure rather than the axes figimage - make a figure image figtext - add text in figure coords figure - create or change active figure fill - make filled polygons findobj - recursively find all objects matching some criteria gca - return the current axes gcf - return the current figure gci - get the current image, or None getp - get a handle graphics property grid - set whether gridding is on hist - make a histogram hold - set the axes hold state ioff - turn interaction mode off ion - turn interaction mode on isinteractive - return True if interaction mode is on imread - load image file into array imshow - plot image data ishold - return the hold state of the current axes legend - make an axes legend loglog - a log log plot matshow - display a matrix in a new figure preserving aspect pcolor - make a pseudocolor plot pcolormesh - make a pseudocolor plot using a quadrilateral mesh pie - make a pie chart plot - make a line plot plot_date - plot dates plotfile - plot column data from an ASCII tab/space/comma delimited file pie - pie charts polar - make a polar plot on a PolarAxes psd - make a plot of power spectral density quiver - make a direction field (arrows) plot rc - control the default params rgrids - customize the radial grids and labels for polar savefig - save the current figure scatter - make a scatter plot setp - set a handle graphics property semilogx - log x axis semilogy - log y axis show - show the figures specgram - a spectrogram plot spy - plot sparsity pattern using markers or image stem - make a stem plot subplot - make a subplot (numrows, numcols, axesnum) subplots_adjust - change the params controlling the subplot positions of current figure subplot_tool - launch the subplot configuration tool suptitle - add a figure title table - add a table to the plot text - add some text at location x,y to the current axes thetagrids - customize the radial theta grids and labels for polar title - add a title to the current axes xcorr - plot the autocorrelation function of x and y xlim - set/get the xlimits ylim - set/get the ylimits xticks - set/get the xticks yticks - set/get the yticks xlabel - add an xlabel to the current axes ylabel - add a ylabel to the current axes autumn - set the default colormap to autumn bone - set the default 
colormap to bone cool - set the default colormap to cool copper - set the default colormap to copper flag - set the default colormap to flag gray - set the default colormap to gray hot - set the default colormap to hot hsv - set the default colormap to hsv jet - set the default colormap to jet pink - set the default colormap to pink prism - set the default colormap to prism spring - set the default colormap to spring summer - set the default colormap to summer winter - set the default colormap to winter spectral - set the default colormap to spectral _Event handling connect - register an event handler disconnect - remove a connected event handler _Matrix commands cumprod - the cumulative product along a dimension cumsum - the cumulative sum along a dimension detrend - remove the mean or best fit line from an array diag - the k-th diagonal of matrix diff - the n-th difference of an array eig - the eigenvalues and eigen vectors of v eye - a matrix where the k-th diagonal is ones, else zero find - return the indices where a condition is nonzero fliplr - flip the columns of a matrix left/right flipud - flip the rows of a matrix up/down linspace - a linear spaced vector of N values from min to max inclusive logspace - a log spaced vector of N values from min to max inclusive meshgrid - repeat x and y to make regular matrices ones - an array of ones rand - an array from the uniform distribution [0,1] randn - an array from the normal distribution rot90 - rotate matrix k*90 degrees counterclockwise squeeze - squeeze an array removing any dimensions of length 1 tri - a triangular matrix tril - a lower triangular matrix triu - an upper triangular matrix vander - the Vandermonde matrix of vector x svd - singular value decomposition zeros - a matrix of zeros _Probability levypdf - The levy probability density function from the char. func. normpdf - The Gaussian probability density function rand - random numbers from the uniform distribution randn - random numbers from the normal distribution _Statistics corrcoef - correlation coefficient cov - covariance matrix amax - the maximum along dimension m mean - the mean along dimension m median - the median along dimension m amin - the minimum along dimension m norm - the norm of vector x prod - the product along dimension m ptp - the max-min along dimension m std - the standard deviation along dimension m asum - the sum along dimension m _Time series analysis bartlett - M-point Bartlett window blackman - M-point Blackman window cohere - the coherence using average periodogram csd - the cross spectral density using average periodogram fft - the fast Fourier transform of vector x hamming - M-point Hamming window hanning - M-point Hanning window hist - compute the histogram of x kaiser - M length Kaiser window psd - the power spectral density using average periodogram sinc - the sinc function of array x _Dates date2num - convert python datetimes to numeric representation drange - create an array of numbers for date plots num2date - convert numeric type (float days since 0001) to datetime _Other angle - the angle of a complex array griddata - interpolate irregularly distributed data to a regular grid load - load ASCII data into array polyfit - fit x, y to an n-th order polynomial polyval - evaluate an n-th order polynomial roots - the roots of the polynomial coefficients in p save - save an array to an ASCII file trapz - trapezoidal integration __end """
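A minimal pylab-style session exercising a few of the commands above (the file name is illustrative)::

    from pylab import *   # the procedural interface described above

    x = linspace(0, 2, 50)
    plot(x, x**2, label='quadratic')
    xlabel('x')
    ylabel('y')
    title('simple plot')
    legend()
    savefig('simple_plot.png')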
""" ============================= Byteswapping and byte order ============================= Introduction to byte ordering and ndarrays ========================================== The ``ndarray`` is an object that provide a python array interface to data in memory. It often happens that the memory that you want to view with an array is not of the same byte ordering as the computer on which you are running Python. For example, I might be working on a computer with a little-endian CPU - such as an Intel Pentium, but I have loaded some data from a file written by a computer that is big-endian. Let's say I have loaded 4 bytes from a file written by a Sun (big-endian) computer. I know that these 4 bytes represent two 16-bit integers. On a big-endian machine, a two-byte integer is stored with the Most Significant Byte (MSB) first, and then the Least Significant Byte (LSB). Thus the bytes are, in memory order: #. MSB integer 1 #. LSB integer 1 #. MSB integer 2 #. LSB integer 2 Let's say the two integers were in fact 1 and 770. Because 770 = 256 * 3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2. The bytes I have loaded from the file would have these contents: >>> big_end_str = chr(0) + chr(1) + chr(3) + chr(2) >>> big_end_str '\\x00\\x01\\x03\\x02' We might want to use an ``ndarray`` to access these integers. In that case, we can create an array around this memory, and tell numpy that there are two integers, and that they are 16 bit and big-endian: >>> import numpy as np >>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_str) >>> big_end_arr[0] 1 >>> big_end_arr[1] 770 Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian' (``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For example, if our data represented a single unsigned 4-byte little-endian integer, the dtype string would be ``<u4``. In fact, why don't we try that? >>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_str) >>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3 True Returning to our ``big_end_arr`` - in this case our underlying data is big-endian (data endianness) and we've set the dtype to match (the dtype is also big-endian). However, sometimes you need to flip these around. .. warning:: Scalars currently do not include byte order information, so extracting a scalar from an array will return an integer in native byte order. Hence: >>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder True Changing byte ordering ====================== As you can imagine from the introduction, there are two ways you can affect the relationship between the byte ordering of the array and the underlying memory it is looking at: * Change the byte-ordering information in the array dtype so that it interprets the undelying data as being in a different byte order. This is the role of ``arr.newbyteorder()`` * Change the byte-ordering of the underlying data, leaving the dtype interpretation as it was. This is what ``arr.byteswap()`` does. The common situations in which you need to change byte ordering are: #. Your data and dtype endianess don't match, and you want to change the dtype so that it matches the data. #. Your data and dtype endianess don't match, and you want to swap the data so that they match the dtype #. 
Your data and dtype endianness match, but you want the data swapped and the dtype to reflect this Data and dtype endianness don't match, change dtype to match data ----------------------------------------------------------------- We make something where they don't match: >>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_str) >>> wrong_end_dtype_arr[0] 256 The obvious fix for this situation is to change the dtype so it gives the correct endianness: >>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder() >>> fixed_end_dtype_arr[0] 1 Note that the array has not changed in memory: >>> fixed_end_dtype_arr.tobytes() == big_end_str True Data and dtype endianness don't match, change data to match dtype ---------------------------------------------------------------- You might want to do this if you need the data in memory to be a certain ordering. For example you might be writing the memory out to a file that needs a certain byte ordering. >>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap() >>> fixed_end_mem_arr[0] 1 Now the array *has* changed in memory: >>> fixed_end_mem_arr.tobytes() == big_end_str False Data and dtype endianness match, swap data and dtype ---------------------------------------------------- You may have a correctly specified array dtype, but you need the array to have the opposite byte order in memory, and you want the dtype to match so the array values make sense. In this case you just do both of the previous operations: >>> swapped_end_arr = big_end_arr.byteswap().newbyteorder() >>> swapped_end_arr[0] 1 >>> swapped_end_arr.tobytes() == big_end_str False An easier way of casting the data to a specific dtype and byte ordering can be achieved with the ndarray astype method: >>> swapped_end_arr = big_end_arr.astype('<i2') >>> swapped_end_arr[0] 1 >>> swapped_end_arr.tobytes() == big_end_str False """
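One related pattern worth sketching, building on the examples above: converting loaded data to whatever the native byte order of the running machine happens to be. This is a minimal sketch, assuming the ``big_end_arr`` defined earlier:

>>> import sys
>>> native_code = '<' if sys.byteorder == 'little' else '>'
>>> native_arr = big_end_arr.astype(native_code + 'i2')   # data and dtype both native now
>>> native_arr[0], native_arr[1]
(1, 770)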
""" Discrete Fourier Transform (:mod:`numpy.fft`) ============================================= .. currentmodule:: numpy.fft Standard FFTs ------------- .. autosummary:: :toctree: generated/ fft Discrete Fourier transform. ifft Inverse discrete Fourier transform. fft2 Discrete Fourier transform in two dimensions. ifft2 Inverse discrete Fourier transform in two dimensions. fftn Discrete Fourier transform in N-dimensions. ifftn Inverse discrete Fourier transform in N dimensions. Real FFTs --------- .. autosummary:: :toctree: generated/ rfft Real discrete Fourier transform. irfft Inverse real discrete Fourier transform. rfft2 Real discrete Fourier transform in two dimensions. irfft2 Inverse real discrete Fourier transform in two dimensions. rfftn Real discrete Fourier transform in N dimensions. irfftn Inverse real discrete Fourier transform in N dimensions. Hermitian FFTs -------------- .. autosummary:: :toctree: generated/ hfft Hermitian discrete Fourier transform. ihfft Inverse Hermitian discrete Fourier transform. Helper routines --------------- .. autosummary:: :toctree: generated/ fftfreq Discrete Fourier Transform sample frequencies. rfftfreq DFT sample frequencies (for usage with rfft, irfft). fftshift Shift zero-frequency component to center of spectrum. ifftshift Inverse of fftshift. Background information ---------------------- Fourier analysis is fundamentally a method for expressing a function as a sum of periodic components, and for recovering the function from those components. When both the function and its Fourier transform are replaced with discretized counterparts, it is called the discrete Fourier transform (DFT). The DFT has become a mainstay of numerical computing in part because of a very fast algorithm for computing it, called the Fast Fourier Transform (FFT), which was known to Gauss (1805) and was brought to light in its current form by NAME and NAME [CT]_. Press et al. [NR]_ provide an accessible introduction to Fourier analysis and its applications. Because the discrete Fourier transform separates its input into components that contribute at discrete frequencies, it has a great number of applications in digital signal processing, e.g., for filtering, and in this context the discretized input to the transform is customarily referred to as a *signal*, which exists in the *time domain*. The output is called a *spectrum* or *transform* and exists in the *frequency domain*. Implementation details ---------------------- There are many ways to define the DFT, varying in the sign of the exponent, normalization, etc. In this implementation, the DFT is defined as .. math:: A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\} \\qquad k = 0,\\ldots,n-1. The DFT is in general defined for complex inputs and outputs, and a single-frequency component at linear frequency :math:`f` is represented by a complex exponential :math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t` is the sampling interval. The values in the result follow so-called "standard" order: If ``A = fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the mean of the signal), which is always purely real for real inputs. Then ``A[1:n/2]`` contains the positive-frequency terms, and ``A[n/2+1:]`` contains the negative-frequency terms, in order of decreasingly negative frequency. For an even number of input points, ``A[n/2]`` represents both positive and negative Nyquist frequency, and is also purely real for real input. 
For an odd number of input points, ``A[(n-1)/2]`` contains the largest positive frequency, while ``A[(n+1)/2]`` contains the largest negative frequency. The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies of corresponding elements in the output. The routine ``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes that shift. When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)`` is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum. The phase spectrum is obtained by ``np.angle(A)``. The inverse DFT is defined as .. math:: a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\} \\qquad m = 0,\\ldots,n-1. It differs from the forward transform by the sign of the exponential argument and the normalization by :math:`1/n`. Real and Hermitian transforms ----------------------------- When the input is purely real, its transform is Hermitian, i.e., the component at frequency :math:`f_k` is the complex conjugate of the component at frequency :math:`-f_k`, which means that for real inputs there is no information in the negative frequency components that is not already available from the positive frequency components. The family of `rfft` functions is designed to operate on real inputs, and exploits this symmetry by computing only the positive frequency components, up to and including the Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex output points. The inverses of this family assume the same symmetry of their input, and for an output of ``n`` points use ``n/2+1`` input points. Correspondingly, when the spectrum is purely real, the signal is Hermitian. The `hfft` family of functions exploits this symmetry by using ``n/2+1`` complex points in the input (time) domain for ``n`` real points in the frequency domain. In higher dimensions, FFTs are used, e.g., for image analysis and filtering. The computational efficiency of the FFT means that it can also be a faster way to compute large convolutions, using the property that a convolution in the time domain is equivalent to a point-by-point multiplication in the frequency domain. Higher dimensions ----------------- In two dimensions, the DFT is defined as .. math:: A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1} a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\} \\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1, which extends in the obvious way to higher dimensions, and the inverses in higher dimensions also extend in the same way. References ---------- .. [CT] NAME, NAME and John W. NAME, 1965, "An algorithm for the machine calculation of complex Fourier series," *Math. Comput.* 19: 297-301. .. [NR] NAME NAME NAME and NAME 2007, *Numerical Recipes: The Art of Scientific Computing*, ch. 12-13. Cambridge Univ. Press, Cambridge, UK. Examples -------- For examples, see the various functions. """
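To make the "standard order" and the helper routines above concrete, a small sketch (the sample count and sampling interval are arbitrary choices for the example): a pure tone at frequency 2 sampled 8 times over one second puts all of its energy into the bin that ``fftfreq`` labels 2.0:

>>> import numpy as np
>>> t = np.arange(8) / 8.0                 # sampling interval 1/8
>>> a = np.exp(2j * np.pi * 2 * t)         # single component at f = 2
>>> A = np.fft.fft(a)
>>> freqs = np.fft.fftfreq(8, d=1/8.0)
>>> float(freqs[np.argmax(np.abs(A))])
2.0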
""" ======================== Broadcasting over arrays ======================== The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is "broadcast" across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([ 2., 4., 6.]) NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([ 2., 4., 6.]) The result is equivalent to the previous example where ``b`` was an array. We can think of the scalar ``b`` being *stretched* during the arithmetic operation into an array with the same shape as ``a``. The new elements in ``b`` are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory and computationally efficient as possible. The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (``b`` is a scalar rather than an array). General Broadcasting Rules ========================== When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when 1) they are equal, or 2) one of them is 1 If these conditions are not met, a ``ValueError: frames are not aligned`` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays. Arrays do not need to have the same *number* of dimensions. For example, if you have a ``256x256x3`` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:: Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 When either of the dimensions compared is one, the larger of the two is used. In other words, the smaller of two axes is stretched or "copied" to match the other. 
In the following example, both the ``A`` and ``B`` arrays have axes with length one that are expanded to a larger size during the broadcast operation:: A (4d array): 8 x 1 x 6 x 1 B (3d array): 7 x 1 x 5 Result (4d array): 8 x 7 x 6 x 5 Here are some more examples:: A (2d array): 5 x 4 B (1d array): 1 Result (2d array): 5 x 4 A (2d array): 5 x 4 B (1d array): 4 Result (2d array): 5 x 4 A (3d array): 15 x 3 x 5 B (3d array): 15 x 1 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 1 Result (3d array): 15 x 3 x 5 Here are examples of shapes that do not broadcast:: A (1d array): 3 B (1d array): 4 # trailing dimensions do not match A (2d array): 2 x 1 B (3d array): 8 x 4 x 3 # second from last dimensions mismatched An example of broadcasting in practice:: >>> x = np.arange(4) >>> xx = x.reshape(4,1) >>> y = np.ones(5) >>> z = np.ones((3,4)) >>> x.shape (4,) >>> y.shape (5,) >>> x + y <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape >>> xx.shape (4, 1) >>> y.shape (5,) >>> (xx + y).shape (4, 5) >>> xx + y array([[ 1., 1., 1., 1., 1.], [ 2., 2., 2., 2., 2.], [ 3., 3., 3., 3., 3.], [ 4., 4., 4., 4., 4.]]) >>> x.shape (4,) >>> z.shape (3, 4) >>> (x + z).shape (3, 4) >>> x + z array([[ 1., 2., 3., 4.], [ 1., 2., 3., 4.], [ 1., 2., 3., 4.]]) Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays:: >>> a = np.array([0.0, 10.0, 20.0, 30.0]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a[:, np.newaxis] + b array([[ 1., 2., 3.], [ 11., 12., 13.], [ 21., 22., 23.], [ 31., 32., 33.]]) Here the ``newaxis`` index operator inserts a new axis into ``a``, making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array with ``b``, which has shape ``(3,)``, yields a ``4x3`` array. See `this article <http://www.scipy.org/EricsBroadcastingDoc>`_ for illustrations of broadcasting concepts. """
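The image-scaling case described above can be checked directly; ``np.broadcast`` reports the broadcast shape without performing any arithmetic:

>>> image = np.ones((256, 256, 3))
>>> scale = np.array([0.5, 1.0, 2.0])    # one factor per color channel
>>> (image * scale).shape
(256, 256, 3)
>>> np.broadcast(image, scale).shape     # the rule, evaluated directly
(256, 256, 3)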
""" ================= Structured Arrays ================= Introduction ============ NumPy provides powerful capabilities to create arrays of structured datatype. These arrays permit one to manipulate the data by named fields. A simple example will show what is meant.: :: >>> x = np.array([(1,2.,'Hello'), (2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> x array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less. If we index this array at the second position we get the second structure: :: >>> x[1] (2,3.,"World") Conveniently, one can access any field of the array by indexing using the string that names that field. :: >>> y = x['bar'] >>> y array([ 2., 3.], dtype=float32) >>> y[:] = 2*y >>> y array([ 4., 6.], dtype=float32) >>> x array([(1, 4.0, 'Hello'), (2, 6.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) In these examples, y is a simple float array consisting of the 2nd field in the structured type. But, rather than being a copy of the data in the structured array, it is a view, i.e., it shares exactly the same memory locations. Thus, when we updated this array by doubling its values, the structured array shows the corresponding values as doubled as well. Likewise, if one changes the structured array, the field view also changes: :: >>> x[1] = (-1,-1.,"Master") >>> x array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) >>> y array([ 4., -1.], dtype=float32) Defining Structured Arrays ========================== One defines a structured array through the dtype object. There are **several** alternative ways to define the fields of a record. Some of these variants provide backward compatibility with Numeric, numarray, or another module, and should not be used except for such purposes. These will be so noted. One specifies record structure in one of four alternative ways, using an argument (as supplied to a dtype function keyword or a dtype object constructor itself). This argument must be one of the following: 1) string, 2) tuple, 3) list, or 4) dictionary. Each of these is briefly described below. 1) String argument. In this case, the constructor expects a comma-separated list of type specifiers, optionally with extra shape information. The fields are given the default names 'f0', 'f1', 'f2' and so on. The type specifiers can take 4 different forms: :: a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n> (representing bytes, ints, unsigned ints, floats, complex and fixed length strings of specified byte lengths) b) int8,...,uint8,...,float16, float32, float64, complex64, complex128 (this time with bit sizes) c) older Numeric/numarray type specifications (e.g. Float32). Don't use these in new code! d) Single character type specifiers (e.g H for unsigned short ints). Avoid using these unless you must. Details can be found in the NumPy book These different styles can be mixed within the same string (but why would you want to do that?). Furthermore, each type specifier can be prefixed with a repetition number, or a shape. In these cases an array element is created, i.e., an array within a record. That array is still referred to as a single field. 
An example: :: >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64') >>> x array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])], dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))]) Using strings to define the record structure precludes naming the fields in the original definition. The names can be changed as shown later, however. 2) Tuple argument: The only relevant tuple case that applies to record structures is when a structure is mapped to an existing data type. This is done by pairing, in a tuple, the existing data type with a matching dtype definition (using any of the variants being described here). As an example (the inner definition here uses a list; see 3) for further details): :: >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')])) >>> x array([0, 0, 0]) >>> x['r'] array([0, 0, 0], dtype=uint8) In this case, an array is produced that looks and acts like a simple int32 array, but also has definitions for fields that use only one byte of the int32 (a bit like Fortran equivalencing). 3) List argument: In this case the record structure is defined with a list of tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field ('' is permitted), 2) the type of the field, and 3) the shape (optional). For example:: >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) >>> x array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])], dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))]) 4) Dictionary argument: two different forms are permitted. The first consists of a dictionary with two required keys ('names' and 'formats'), each having an equal-sized list of values. The format list contains any type/shape specifier allowed in other contexts. The names must be strings. There are two optional keys: 'offsets' and 'titles'. Each must be a list matching the two required keys in length; 'offsets' contains integer offsets for each field, and 'titles' contains objects holding metadata for each field (these do not have to be strings), where the value None is permitted. As an example: :: >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[('col1', '>i4'), ('col2', '>f4')]) The other dictionary form permitted is a dictionary of name keys with tuple values specifying type, offset, and an optional title. :: >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')]) Accessing and modifying field names =================================== The field names are an attribute of the dtype object defining the structure. For the last example: :: >>> x.dtype.names ('col1', 'col2') >>> x.dtype.names = ('x', 'y') >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')]) >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names <type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2 Accessing field titles ==================================== The field titles provide a standard place to put associated info for fields. They do not have to be strings.
:: >>> x.dtype.fields['x'][2] 'title 1' Accessing multiple fields at once ==================================== You can access multiple fields at once using a list of field names: :: >>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))], dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) Notice that `x` is created with a list of tuples. :: >>> x[['x','y']] array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)], dtype=[('x', '<f4'), ('y', '<f4')]) >>> x[['x','value']] array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]), (1.0, [[2.0, 6.0], [2.0, 6.0]])], dtype=[('x', '<f4'), ('value', '<f4', (2, 2))]) The fields are returned in the order they are asked for. :: >>> x[['y','x']] array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)], dtype=[('y', '<f4'), ('x', '<f4')]) Filling structured arrays ========================= Structured arrays can be filled by field or row by row. :: >>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')]) >>> arr['var1'] = np.arange(5) If you fill it in row by row, it takes a tuple (but not a list or array!):: >>> arr[0] = (10,20) >>> arr array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)], dtype=[('var1', '<f8'), ('var2', '<f8')]) Record Arrays ============= For convenience, numpy provides "record arrays" which allow one to access fields of structured arrays by attribute rather than by index. Record arrays are structured arrays wrapped using a subclass of ndarray, :class:`numpy.recarray`, which allows field access by attribute on the array object, and record arrays also use a special datatype, :class:`numpy.record`, which allows field access by attribute on the individual elements of the array. The simplest way to create a record array is with :func:`numpy.rec.array`: :: >>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> recordarr.bar array([ 2., 3.], dtype=float32) >>> recordarr[1:2] rec.array([(2, 3.0, 'World')], dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]) >>> recordarr[1:2].foo array([2], dtype=int32) >>> recordarr.foo[1:2] array([2], dtype=int32) >>> recordarr[1].baz 'World' numpy.rec.array can convert a wide variety of arguments into record arrays, including normal structured arrays: :: >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')]) >>> recordarr = np.rec.array(arr) The numpy.rec module provides a number of other convenience functions for creating record arrays, see :ref:`record array creation routines <routines.array-creation.rec>`. A record array representation of a structured array can be obtained using the appropriate :ref:`view`: :: >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')]) >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)), ... type=np.recarray) For convenience, viewing an ndarray as type `np.recarray` will automatically convert to `np.record` datatype, so the dtype can be left out of the view: :: >>> recordarr = arr.view(np.recarray) >>> recordarr.dtype dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])) To get back to a plain ndarray both the dtype and type must be reset.
The following view does so, taking into account the unusual case that the recordarr was not a structured type: :: >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but as a plain ndarray otherwise. :: >>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))], ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])]) >>> type(recordarr.foo) <type 'numpy.ndarray'> >>> type(recordarr.bar) <class 'numpy.core.records.recarray'> Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will be inaccessible by attribute but may still be accessed by index. """
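The precedence note above is easy to trip over, so here is a minimal sketch of it; the field name 'shape' is deliberately chosen to collide with an ndarray attribute:

>>> r = np.rec.array([(1.0,), (2.0,)], dtype=[('shape', 'f4')])
>>> r.shape                 # the ndarray attribute wins
(2,)
>>> r['shape']              # the field is still reachable by index
array([ 1.,  2.], dtype=float32)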
""" ==================================== Linear algebra (:mod:`scipy.linalg`) ==================================== .. currentmodule:: scipy.linalg Linear algebra functions. .. seealso:: `numpy.linalg` for more linear algebra functions. Note that although `scipy.linalg` imports most of them, identically named functions from `scipy.linalg` may offer more or slightly differing functionality. Basics ====== .. autosummary:: :toctree: generated/ inv - Find the inverse of a square matrix solve - Solve a linear system of equations solve_banded - Solve a banded linear system solveh_banded - Solve a Hermitian or symmetric banded system solve_circulant - Solve a circulant system solve_triangular - Solve a triangular matrix solve_toeplitz - Solve a toeplitz matrix det - Find the determinant of a square matrix norm - Matrix and vector norm lstsq - Solve a linear least-squares problem pinv - Pseudo-inverse (Moore-Penrose) using lstsq pinv2 - Pseudo-inverse using svd pinvh - Pseudo-inverse of hermitian matrix kron - Kronecker product of two arrays tril - Construct a lower-triangular matrix from a given matrix triu - Construct an upper-triangular matrix from a given matrix orthogonal_procrustes - Solve an orthogonal Procrustes problem matrix_balance - Balance matrix entries with a similarity transformation LinAlgError Eigenvalue Problems =================== .. autosummary:: :toctree: generated/ eig - Find the eigenvalues and eigenvectors of a square matrix eigvals - Find just the eigenvalues of a square matrix eigh - Find the e-vals and e-vectors of a Hermitian or symmetric matrix eigvalsh - Find just the eigenvalues of a Hermitian or symmetric matrix eig_banded - Find the eigenvalues and eigenvectors of a banded matrix eigvals_banded - Find just the eigenvalues of a banded matrix Decompositions ============== .. autosummary:: :toctree: generated/ lu - LU decomposition of a matrix lu_factor - LU decomposition returning unordered matrix and pivots lu_solve - Solve Ax=b using back substitution with output of lu_factor svd - Singular value decomposition of a matrix svdvals - Singular values of a matrix diagsvd - Construct matrix of singular values from output of svd orth - Construct orthonormal basis for the range of A using svd cholesky - Cholesky decomposition of a matrix cholesky_banded - Cholesky decomp. of a sym. or Hermitian banded matrix cho_factor - Cholesky decomposition for use in solving a linear system cho_solve - Solve previously factored linear system cho_solve_banded - Solve previously factored banded linear system polar - Compute the polar decomposition. qr - QR decomposition of a matrix qr_multiply - QR decomposition and multiplication by Q qr_update - Rank k QR update qr_delete - QR downdate on row or column deletion qr_insert - QR update on row or column insertion rq - RQ decomposition of a matrix qz - QZ decomposition of a pair of matrices ordqz - QZ decomposition of a pair of matrices with reordering schur - Schur decomposition of a matrix rsf2csf - Real to complex Schur form hessenberg - Hessenberg form of a matrix .. seealso:: `scipy.linalg.interpolative` -- Interpolative matrix decompositions Matrix Functions ================ .. 
autosummary:: :toctree: generated/ expm - Matrix exponential logm - Matrix logarithm cosm - Matrix cosine sinm - Matrix sine tanm - Matrix tangent coshm - Matrix hyperbolic cosine sinhm - Matrix hyperbolic sine tanhm - Matrix hyperbolic tangent signm - Matrix sign sqrtm - Matrix square root funm - Evaluating an arbitrary matrix function expm_frechet - Frechet derivative of the matrix exponential expm_cond - Relative condition number of expm in the Frobenius norm fractional_matrix_power - Fractional matrix power Matrix Equation Solvers ======================= .. autosummary:: :toctree: generated/ solve_sylvester - Solve the Sylvester matrix equation solve_continuous_are - Solve the continuous-time algebraic Riccati equation solve_discrete_are - Solve the discrete-time algebraic Riccati equation solve_discrete_lyapunov - Solve the discrete-time Lyapunov equation solve_lyapunov - Solve the (continuous-time) Lyapunov equation Special Matrices ================ .. autosummary:: :toctree: generated/ block_diag - Construct a block diagonal matrix from submatrices circulant - Circulant matrix companion - Companion matrix dft - Discrete Fourier transform matrix hadamard - Hadamard matrix of order 2**n hankel - Hankel matrix helmert - Helmert matrix hilbert - Hilbert matrix invhilbert - Inverse Hilbert matrix leslie - Leslie matrix pascal - Pascal matrix invpascal - Inverse Pascal matrix toeplitz - Toeplitz matrix tri - Construct a matrix filled with ones at and below a given diagonal Low-level routines ================== .. autosummary:: :toctree: generated/ get_blas_funcs get_lapack_funcs find_best_blas_type .. seealso:: `scipy.linalg.blas` -- Low-level BLAS functions `scipy.linalg.lapack` -- Low-level LAPACK functions `scipy.linalg.cython_blas` -- Low-level BLAS functions for Cython `scipy.linalg.cython_lapack` -- Low-level LAPACK functions for Cython """
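As a minimal sketch of the basic solvers listed above (the matrix and right-hand side are arbitrary example data):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[3., 1.], [1., 2.]])
>>> b = np.array([9., 8.])
>>> x = linalg.solve(A, b)          # solve A x = b
>>> np.allclose(A.dot(x), b)
True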
#!/usr/bin/env python # (c) 2013, NAME <paul.durivage@gmail.com> # # This file is part of Ansible. # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # # # Author: NAME <paul.durivage@gmail.com> # # Description: # This module queries local or remote Docker daemons and generates # inventory information. # # This plugin does not support targeting of specific hosts using the --host # flag. Instead, it queries the Docker API for each container, running # or not, and returns this data all at once. # # The plugin returns the following custom attributes on Docker containers: # docker_args # docker_config # docker_created # docker_driver # docker_exec_driver # docker_host_config # docker_hostname_path # docker_hosts_path # docker_id # docker_image # docker_name # docker_network_settings # docker_path # docker_resolv_conf_path # docker_state # docker_volumes # docker_volumes_rw # # Requirements: # The docker-py module: https://github.com/dotcloud/docker-py # # Notes: # A config file can be used to configure this inventory module, and there # are several environment variables that can be set to modify the behavior # of the plugin at runtime: # DOCKER_CONFIG_FILE # DOCKER_HOST # DOCKER_VERSION # DOCKER_TIMEOUT # DOCKER_PRIVATE_SSH_PORT # DOCKER_DEFAULT_IP # # Environment Variables: # environment variable: DOCKER_CONFIG_FILE # description: # - A path to a Docker inventory hosts/defaults file in YAML format # - A sample file has been provided, colocated with the inventory # file called 'docker.yml' # required: false # default: Uses docker.docker.Client constructor defaults # environment variable: DOCKER_HOST # description: # - The socket on which to connect to a Docker daemon API # required: false # default: Uses docker.docker.Client constructor defaults # environment variable: DOCKER_VERSION # description: # - Version of the Docker API to use # default: Uses docker.docker.Client constructor defaults # required: false # environment variable: DOCKER_TIMEOUT # description: # - Timeout in seconds for connections to Docker daemon API # default: Uses docker.docker.Client constructor defaults # required: false # environment variable: DOCKER_PRIVATE_SSH_PORT # description: # - The private port (container port) on which SSH is listening # for connections # default: 22 # required: false # environment variable: DOCKER_DEFAULT_IP # description: # - This environment variable overrides the container SSH connection # IP address (aka, 'ansible_ssh_host') # # This option allows one to override the ansible_ssh_host whenever # Docker has exercised its default behavior of binding private ports # to all interfaces of the Docker host. This behavior, when dealing # with remote Docker hosts, does not allow Ansible to determine # a proper host IP address on which to connect via SSH to containers. # By default, this inventory module assumes all IP_ADDRESS-exposed # ports to be bound to localhost:<port>.
To override this # behavior, for example, to bind a container's SSH port to the public # interface of its host, one must manually set this IP. # # It is preferable to launch Docker containers with # ports exposed on publicly accessible IP addresses, particularly # if the containers are to be targeted by Ansible for remote # configuration, not accessible via localhost SSH connections. # # Docker containers can be explicitly exposed on IP addresses by # a) starting the daemon with the --ip argument # b) running containers with the -P/--publish ip::containerPort # argument # default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker # required: false # # Examples: # Use the config file: # DOCKER_CONFIG_FILE=./docker.yml docker.py --list # # Connect to docker instance on localhost port 4243 # DOCKER_HOST=tcp://localhost:4243 docker.py --list # # Any container's ssh port exposed on IP_ADDRESS will be mapped to # another IP address (where Ansible will attempt to connect via SSH) # DOCKER_DEFAULT_IP=1.2.3.4 docker.py --list
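# For orientation, this is roughly how the environment variables above map
# onto the docker-py client. This is a hedged sketch against the old
# docker.Client API referenced in the requirements above, not a copy of this
# plugin's own code:
#
#     import os
#     import docker
#
#     client = docker.Client(
#         base_url=os.environ.get('DOCKER_HOST'),    # e.g. tcp://localhost:4243
#         version=os.environ.get('DOCKER_VERSION'),
#         timeout=int(os.environ.get('DOCKER_TIMEOUT', 60)),
#     )
#     containers = client.containers(all=True)       # running or not, as noted above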
""" =================== Universal Functions =================== Ufuncs are, generally speaking, mathematical functions or operations that are applied element-by-element to the contents of an array. That is, the result in each output array element only depends on the value in the corresponding input array (or arrays) and on no other array elements. Numpy comes with a large suite of ufuncs, and scipy extends that suite substantially. The simplest example is the addition operator: :: >>> np.array([0,2,3,4]) + np.array([1,1,-1,2]) array([1, 3, 2, 6]) The unfunc module lists all the available ufuncs in numpy. Additional ufuncts available in xxx in scipy. Documentation on the specific ufuncs may be found in those modules. This documentation is intended to address the more general aspects of unfuncs common to most of them. All of the ufuncs that make use of Python operators (e.g., +, -, etc.) have equivalent functions defined (e.g. add() for +) Type coercion ============= What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of two different types? What is the type of the result? Typically, the result is the higher of the two types. For example: :: float32 + float64 -> float64 int8 + int32 -> int32 int16 + float32 -> float32 float32 + complex64 -> complex64 There are some less obvious cases generally involving mixes of types (e.g. uints, ints and floats) where equal bit sizes for each are not capable of saving all the information in a different type of equivalent bit size. Some examples are int32 vs float32 or uint32 vs int32. Generally, the result is the higher type of larger size than both (if available). So: :: int32 + float32 -> float64 uint32 + int32 -> int64 Finally, the type coercion behavior when expressions involve Python scalars is different than that seen for arrays. Since Python has a limited number of types, combining a Python int with a dtype=np.int8 array does not coerce to the higher type but instead, the type of the array prevails. So the rules for Python scalars combined with arrays is that the result will be that of the array equivalent the Python scalar if the Python scalar is of a higher 'kind' than the array (e.g., float vs. int), otherwise the resultant type will be that of the array. For example: :: Python int + int8 -> int8 Python float + int8 -> float64 ufunc methods ============= Binary ufuncs support 4 methods. These methods are explained in detail in xxx (or are they, I don't see anything in the ufunc docstring that is useful?). **.reduce(arr)** applies the binary operator to elements of the array in sequence. For example: :: >>> np.add.reduce(np.arange(10)) # adds all elements of array 45 For multidimensional arrays, the first dimension is reduced by default: :: >>> np.add.reduce(np.arange(10).reshape(2,5)) array([ 5, 7, 9, 11, 13]) The axis keyword can be used to specify different axes to reduce: :: >>> np.add.reduce(np.arange(10).reshape(2,5),axis=1) array([10, 35]) **.accumulate(arr)** applies the binary operator and generates an an equivalently shaped array that includes the accumulated amount for each element of the array. A couple examples: :: >>> np.add.accumulate(np.arange(10)) array([ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45]) >>> np.multiply.accumulate(np.arange(1,9)) array([ 1, 2, 6, 24, 120, 720, 5040, 40320]) The behavior for multidimensional arrays is the same as for .reduce(), as is the use of the axis keyword). **.reduceat(arr,indices)** allows one to apply reduce to selected parts of an array. It is a difficult method to understand. 
See the documentation at: **.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and arr2. It will work on multidimensional arrays (the shape of the result is the concatenation of the two input shapes): :: >>> np.multiply.outer(np.arange(3),np.arange(4)) array([[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6]]) Output arguments ================ All ufuncs accept an optional output array. The array must be of the expected output shape. Beware that if the type of the output array is of a different (and lower) type than the output result, the results may be silently truncated or otherwise corrupted in the downcast to the lower type. This usage is useful when one wants to avoid creating large temporary arrays and instead allows one to reuse the same array memory repeatedly (at the expense of not being able to use more convenient operator notation in expressions). Note that when the output argument is used, the ufunc still returns a reference to the result. >>> x = np.arange(2) >>> np.add(np.arange(2),np.arange(2.),x) array([0, 2]) >>> x array([0, 2]) and & or as ufuncs ================== Invariably people try to use the Python 'and' and 'or' as logical operators (and quite understandably). But these operators do not behave as normal operators since Python treats these quite differently. They cannot be overloaded with array equivalents. Thus using 'and' or 'or' with an array results in an error. There are two alternatives: 1) use the ufunc functions logical_and() and logical_or(). 2) use the bitwise operators & and \\|. The drawback of these is that if the arguments to these operators are not boolean arrays, the result is likely incorrect. On the other hand, most usages of logical_and and logical_or are with boolean arrays. As long as one is careful, this is a convenient way to apply these operators. """
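Two short sketches to pin down the trickier points above: ``reduceat`` reduces each slice between consecutive indices, and ``logical_and`` is the functional stand-in for Python's 'and':

>>> a = np.arange(8)
>>> np.add.reduceat(a, [0, 4, 6])     # sums a[0:4], a[4:6], a[6:]
array([ 6,  9, 13])
>>> np.logical_and(a > 1, a < 5)      # element-wise, unlike 'and'
array([False, False,  True,  True,  True, False, False, False], dtype=bool)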
# -*- encoding: utf-8 -*- ############################################################################## # # OpenERP, Open Source Management Solution # Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>). # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Affero General Public License as # published by the Free Software Foundation, either version 3 of the # License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Affero General Public License for more details. # # You should have received a copy of the GNU Affero General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # ############################################################################## # SKR03 # ===== # This module provides a German chart of accounts based on the SKR03. # Under the current settings the company is not subject to VAT. # This default is very easy to change and as a rule simply requires an initial assignment of tax accounts to products and/or G/L accounts or to partners. # The output taxes (full rate, reduced rate and tax-exempt) should be stored on the product master data, depending on the applicable tax rules. The assignment is made on the Accounting tab (category: Umsatzsteuer / output tax). # The input taxes (full rate, reduced rate and tax-exempt) should likewise be stored on the product master data, depending on the applicable tax rules. The assignment is made on the Accounting tab (category: Vorsteuer / input tax). # The taxes for imports and exports involving EU countries, as well as for purchases from and sales to third countries, should be stored on the partner (supplier/customer), depending on the supplier's or customer's country of origin. The assignment on the customer ranks above the assignment on products and overrides it in the individual case. # # To simplify tax reporting and posting for foreign business, OpenERP allows a general mapping of tax codes and tax accounts (e.g. mapping 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU') so that this mapping can be assigned to the foreign partner (customer/supplier). # Posting a purchase invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the input-tax base amount (e.g. input-tax base amount, full rate 19%). # The tax amount appears under the category 'Vorsteuern' (input taxes, e.g. Vorsteuer 19%). Multidimensional hierarchies allow different positions to be aggregated and then output in the form of a report. # # Posting a sales invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the output-tax base amount (e.g. output-tax base amount, full rate 19%). # The tax amount appears under the category 'Umsatzsteuer' (output tax, e.g. Umsatzsteuer 19%). Multidimensional hierarchies allow different positions to be aggregated.
# The assigned tax codes can be reviewed on each individual invoice (incoming and outgoing) and adjusted there if necessary. # Credit notes lead to a correction (offsetting entry) of the tax posting, in the form of a mirror-image posting. # SKR04 # ===== # This module provides a German chart of accounts based on the SKR04. # Under the current settings the company is not subject to VAT, i.e. by default there is no assignment of products and G/L accounts to tax keys. # This default is very easy to change and as a rule simply requires an initial assignment of tax keys to products and/or G/L accounts or to partners. # The output taxes (full rate, reduced rate and tax-exempt) should be stored on the product master data, depending on the applicable tax rules. The assignment is made on the Accounting tab (category: Umsatzsteuer / output tax). # The input taxes (full rate, reduced rate and tax-exempt) should likewise be stored on the product master data, depending on the applicable tax rules. The assignment is made on the Accounting tab (category: Vorsteuer / input tax). # The taxes for imports and exports involving EU countries, as well as for purchases from and sales to third countries, should be stored on the partner (supplier/customer), depending on the supplier's or customer's country of origin. The assignment on the customer ranks above the assignment on products and overrides it in the individual case. # # To simplify tax reporting and posting for foreign business, OpenERP allows a general mapping of tax codes and tax accounts (e.g. mapping 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU') so that this mapping can be assigned to the foreign partner (customer/supplier). # Posting a purchase invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the input-tax base amount (e.g. input-tax base amount, full rate 19%). # The tax amount appears under the category 'Vorsteuern' (input taxes, e.g. Vorsteuer 19%). Multidimensional hierarchies allow different positions to be aggregated and then output in the form of a report. # # Posting a sales invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the output-tax base amount (e.g. output-tax base amount, full rate 19%). # The tax amount appears under the category 'Umsatzsteuer' (output tax, e.g. Umsatzsteuer 19%). Multidimensional hierarchies allow different positions to be aggregated. # The assigned tax codes can be reviewed on each individual invoice (incoming and outgoing) and adjusted there if necessary. # Credit notes lead to a correction (offsetting entry) of the tax posting, in the form of a mirror-image posting.
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I an trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over. A drag-and-drop process can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event). If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence. Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than doing it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave(). At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit(). """
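To make the protocol described above concrete, here is a compressed, hedged sketch of a source and a target; the class names are illustrative, and only the hook methods named in this description are implemented:

import Tkinter, Tkdnd

class Source:
    # any application object can be dragged; it only needs dnd_end()
    def dnd_end(self, target, event):
        print "drag ended on", target

class Target:
    def dnd_accept(self, source, event):
        return self                        # we are the target object
    def dnd_enter(self, source, event): pass
    def dnd_motion(self, source, event): pass
    def dnd_leave(self, source, event): pass
    def dnd_commit(self, source, event):
        print "dropped", source

root = Tkinter.Tk()
canvas = Tkinter.Canvas(root)
canvas.pack()
canvas.dnd_accept = Target().dnd_accept    # widgets expose targets via dnd_accept
canvas.bind("<ButtonPress>", lambda event: Tkdnd.dnd_start(Source(), event))
root.mainloop()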
# This code is part of Ansible, but is an independent component. # This particular file snippet, and this file snippet only, is BSD licensed. # Modules you write using this snippet, which is embedded dynamically by Ansible # still belong to the author of the module, and may assign their own license # to the complete work. # # Copyright (c), NAME <michael.dehaan@gmail.com>, 2012-2013 # Copyright (c), NAME <tkuratomi@ansible.com>, 2015 # All rights reserved. # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The match_hostname function and supporting code is under the terms and # conditions of the Python Software Foundation License. They were taken from # the Python3 standard library and adapted for use in Python2. See comments in the # source for which code precisely is under this License. PSF License text # follows: # # PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 # -------------------------------------------- # # 1. This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are # retained in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement.
""" Language and whisper obfuscation system Evennia contrib - Griatch 2015 This module is intented to be used with an emoting system (such as contrib/rpsystem.py). It offers the ability to obfuscate spoken words in the game in various ways: - Language: The language functionality defines a pseudo-language map to any number of languages. The string will be obfuscated depending on a scaling that (most likely) will be input as a weighted average of the language skill of the speaker and listener. - Whisper: The whisper functionality will gradually "fade out" a whisper along as scale 0-1, where the fading is based on gradually removing sections of the whisper that is (supposedly) easier to overhear (for example "s" sounds tend to be audible even when no other meaning can be determined). Usage: ```python from evennia.contrib import rplanguages # need to be done once, here we create the "default" lang rplanguages.add_language() say = "This is me talking." whisper = "This is me whispering. print rplanguages.obfuscate_language(say, level=0.0) <<< "This is me talking." print rplanguages.obfuscate_language(say, level=0.5) <<< "This is me byngyry." print rplanguages.obfuscate_language(say, level=1.0) <<< "Daly ly sy byngyry." result = rplanguages.obfuscate_whisper(whisper, level=0.0) <<< "This is me whispering" result = rplanguages.obfuscate_whisper(whisper, level=0.2) <<< "This is m- whisp-ring" result = rplanguages.obfuscate_whisper(whisper, level=0.5) <<< "---s -s -- ---s------" result = rplanguages.obfuscate_whisper(whisper, level=0.7) <<< "---- -- -- ----------" result = rplanguages.obfuscate_whisper(whisper, level=1.0) <<< "..." ``` To set up new languages, import and use the `add_language()` helper method in this module. This allows you to customize the "feel" of the semi-random language you are creating. Especially the `word_length_variance` helps vary the length of translated words compared to the original and can help change the "feel" for the language you are creating. You can also add your own dictionary and "fix" random words for a list of input words. Below is an example of "elvish", using "rounder" vowels and sounds: ```python phonemes = "oi oh ee ae aa eh ah ao aw ay er ey ow ia ih iy " \ "oy ua uh uw y p b t d f v t dh s z sh zh ch jh k " \ "ng g m n l r w", vowels = "eaoiuy" grammar = "v vv vvc vcc vvcc cvvc vccv vvccv vcvccv vcvcvcc vvccvvcc " \ "vcvvccvvc cvcvvcvvcc vcvcvvccvcvv", word_length_variance = 1 noun_postfix = "'la" manual_translations = {"the":"y'e", "we":"uyi", "she":"semi", "he":"emi", "you": "do", 'me':'mi','i':'me', 'be':"hy'e", 'and':'y'} rplanguages.add_language(key="elvish", phonemes=phonemes, grammar=grammar, word_length_variance=word_length_variance, noun_postfix=noun_postfix, vowels=vowels, manual_translations=manual_translations auto_translations="my_word_file.txt") ``` This will produce a decicively more "rounded" and "soft" language than the default one. The few manual_translations also make sure to make it at least look superficially "reasonable". The `auto_translations` keyword is useful, this accepts either a list or a path to a file of words (one per line) to automatically create fixed translations for according to the grammatical rules. This allows to quickly build a large corpus of translated words that never change (if this is desired). """
"""Yahoo Search Web Services This module implements a set of classes and functions to work with the Yahoo Search Web Services. All results from these services are properly formatted XML, and this package facilitates for proper parsing of these result sets. Some of the features include: * Extendandable API, with replaceable backend XML parsers, and I/O interface. * Type and value checking on search parameters, including automatic type conversion (when appropriate and possible) * Flexible return format, including DOM objects, or fully parsed result objects You can either instantiate a search object directly, or use the factory function create_search() in this module (see below). The supported classes of searches are: VideoSearch - Video Search ImageSearch - Image Search WebSearch - Web Search NewsSearch - News Search LocalSearch - Local Search RelatedSuggestion - Web Search Related Suggestion SpellingSuggestion - Web Search Spelling Suggestion TermExtraction - Term Extraction service ContextSearch - Web Search with a context The different sub-classes of Search supports different sets of query parameters. They all require an application ID parameter (app_id). The following tables describes all other allowed parameters for each of the supported services: Web Related Spelling Context Term ----- ------- -------- ------- ------ query [X] [X] [X] [X] [X] type [X] . . [X] results [X] [X] . [X] start [X] . . [X] format [X] . . [X] adult_ok [X] . . [X] similar_ok [X] . . [X] language [X] . . [X] country [X] . . [X] context . . . [X] [X] Image Video News Local ----- ----- ----- ----- query [X] [X] [X] [X] type [X] [X] [X] . results [X] [X] [X] [X] start [X] [X] [X] [X] format [X] [X] . . adult_ok [X] [X] . . language . . . [X] country . . . . sort . . [X] [X] coloration [X] . . . radius . . . [X] street . . . [X] city . . . [X] state . . . [X] zip . . . [X] location . . . [X] longitude . . . [X] latitude . . . [X] List Folders List URLs ------------ --------- folder . [X] yahooid [X] [X] results [X] [X] start [X] [X] Each of these parameter is implemented as an attribute of each respective class. For example, you can set parameters like: from yahoo.search.webservices import WebSearch app_id = "something" srch = WebSearch(app_id) srch.query = "Leif NAME srch.results = 40 or, if you are using the factory function: from yahoo.search.webservices import create_search app_id = "something" srch = create_search("Web", app_id, query="Leif NAME, results=40) or, the last alternative, a combination of the previous two: from yahoo.search.webservices import WebSearch app_id = "something" srch = WebSearch(app_id, query="Leif NAME, results=40) To retrieve a certain parameter value, simply access it as any normal attribute: print "Searched for ", srch.query For more information on these parameters, and their allowed values, please see the official Yahoo Search Services documentation (XXX missing URL?) Once the webservice object has been created, you can retrieve a parsed object (typically a DOM object) using the get_results() method: dom = srch.get_results() This DOM object contains all results, and can be used as is. For easier use of the results, you can use the built-in results factory, which will traverse the entire DOM object, and create a list of results objects. results = srch.parse_results(dom) or, by using the implicit call to get_results(): results = srch.parse_results() The default XML parser and results factories should be adequate for most users, so use the parse_results() when possible. 
However, both the XML parser and the results parser can easily be overridden.

EXAMPLE:

    #!/usr/bin/python

    import sys
    from yahoo.search.webservices import create_search

    service = sys.argv[1]
    query = " ".join(sys.argv[2:])
    app_id = "something"

    srch = create_search(service, app_id, query=query, results=5)
    if srch is None:
        srch = create_search("Web", app_id, query=query, results=5)

    dom = srch.get_results()
    results = srch.parse_results(dom)
    for res in results:
        url = res.Url
        summary = res['Summary']
        print "%s -> %s" % (summary, url)
"""
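The parameter tables above can be exercised the same way. A minimal
LocalSearch sketch, assuming the location attributes behave like the
WebSearch parameters shown earlier (the result field names for Local Search
are not documented here, so only the query setup and fetch are shown):

    from yahoo.search.webservices import LocalSearch

    app_id = "something"
    srch = LocalSearch(app_id, query="pizza", results=10)
    srch.city = "Sunnyvale"   # location parameters from the table above
    srch.state = "CA"
    srch.radius = 5
    dom = srch.get_results()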
# In the 20×20 grid below, four numbers along a diagonal line have been marked in red. # # 08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08 # 49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00 # 81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65 # 52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91 # 22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80 # 24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50 # 32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70 # 67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21 # 24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72 # 21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95 # 78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92 # 16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57 # 86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58 # 19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40 # 04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66 # 88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69 # 04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36 # 20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16 # 20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54 # 01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48 # # The product of these numbers is 26 × 63 × 78 × 14 = 1788696. # # What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid? #
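# A minimal sketch of one brute-force approach (assumes the grid above has
# been parsed into `grid`, a 20x20 list of lists of ints; all names here are
# illustrative, not part of the original problem statement):

def greatest_product(grid, run=4):
    n = len(grid)
    best = 0
    # Scan right, down, down-right and down-left from every cell; up and
    # left runs are mirrors of these, so four directions suffice.
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        for r in range(n):
            for c in range(n):
                end_r, end_c = r + dr * (run - 1), c + dc * (run - 1)
                if 0 <= end_r < n and 0 <= end_c < n:
                    p = 1
                    for k in range(run):
                        p *= grid[r + dr * k][c + dc * k]
                    best = max(best, p)
    return best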
#import unittest # #from DIRAC.Core.Base import Script #Script.parseCommandLine() # #from DIRAC.ResourceStatusSystem.Utilities.mock import Mock #from DIRAC.ResourceStatusSystem.Client.JobsClient import JobsClient #from DIRAC.ResourceStatusSystem.Client.PilotsClient import PilotsClient #from DIRAC.ResourceStatusSystem.Client.ResourceStatusClient import ResourceStatusClient #from DIRAC.ResourceStatusSystem.Client.ResourceManagementClient import ResourceManagementClient # #from DIRAC.ResourceStatusSystem.Utilities import CS # #ValidRes = CS.getTypedDictRootedAt("GeneralConfig")['Resource'] #ValidStatus = CS.getTypedDictRootedAt("GeneralConfig")['Status'] # ############################################################################## # #class ClientsTestCase( unittest.TestCase ): # """ Base class for the clients test cases # """ # def setUp( self ): # # self.mockRSS = Mock() # # self.RSCli = ResourceStatusClient( serviceIn = self.mockRSS ) # self.RMCli = ResourceManagementClient( serviceIn = self.mockRSS ) # self.PilotsCli = PilotsClient() # self.JobsCli = JobsClient() # ############################################################################## # #class ResourceStatusClientSuccess( ClientsTestCase ): # # def test_getPeriods( self ): # self.mockRSS.getPeriods.return_value = {'OK':True, 'Value':[]} # for granularity in ValidRes: # for status in ValidStatus: # res = self.RSCli.getPeriods( granularity, 'XX', status, 20 ) # self.assertEqual(res['OK'], True) # self.assertEqual( res['Value'], [] ) # # def test_getServiceStats( self ): # self.mockRSS.getServiceStats.return_value = {'OK':True, 'Value':[]} # res = self.RSCli.getServiceStats( 'Site', '' ) # self.assertEqual( res['Value'], [] ) # # def test_getResourceStats( self ): # self.mockRSS.getResourceStats.return_value = {'OK':True, 'Value':[]} # res = self.RSCli.getResourceStats( 'Site', '' ) # self.assertEqual( res['Value'], [] ) # res = self.RSCli.getResourceStats( 'Service', '' ) # self.assertEqual( res['Value'], [] ) # # def test_getStorageElementsStats( self ): # self.mockRSS.getStorageElementsStats.return_value = {'OK':True, 'Value':[]} # res = self.RSCli.getStorageElementsStats( 'Site', '', "Read" ) # self.assertEqual( res['Value'], [] ) # res = self.RSCli.getStorageElementsStats( 'Resource', '', "Read") # self.assertEqual( res['Value'], [] ) # # def test_getMonitoredStatus( self ): # self.mockRSS.getSitesStatusWeb.return_value = {'OK':True, 'Value': {'Records': [['', '', '', '', 'Active', '']]}} # self.mockRSS.getServicesStatusWeb.return_value = {'OK':True, 'Value':{'Records': [['', '', '', '', 'Active', '']]}} # self.mockRSS.getResourcesStatusWeb.return_value = {'OK':True, 'Value':{'Records': [['', '', '', '', '', 'Active', '']]}} # self.mockRSS.getStorageElementsStatusWeb.return_value = {'OK':True, 'Value':{'Records': [['', '', '', '', 'Active', '']]}} # for g in ValidRes: # res = self.RSCli.getMonitoredStatus( g, 'a' ) # self.assertEqual( res['Value'], ['Active'] ) # res = self.RSCli.getMonitoredStatus( g, ['a'] ) # self.assertEqual( res['Value'], ['Active'] ) # res = self.RSCli.getMonitoredStatus( g, ['a', 'b'] ) # self.assertEqual( res['Value'], ['Active', 'Active'] ) # # def test_getCachedAccountingResult( self ): # self.mockRSS.getCachedAccountingResult.return_value = {'OK':True, 'Value':[]} # res = self.RMCli.getCachedAccountingResult( 'XX', 'pippo', 'ZZ' ) # self.assertEqual( res['Value'], [] ) # # def test_getCachedResult( self ): # self.mockRSS.getCachedResult.return_value = {'OK':True, 'Value':[]} # res = 
self.RMCli.getCachedResult( 'XX', 'pippo', 'ZZ', 1 ) # self.assertEqual( res['Value'], [] ) # # def test_getCachedIDs( self ): # self.mockRSS.getCachedIDs.return_value = {'OK':True, # 'Value':[78805473L, 78805473L, 78805473L, 78805473L]} # res = self.RMCli.getCachedIDs( 'XX', 'pippo' ) # self.assertEqual( res['Value'], [78805473L, 78805473L, 78805473L, 78805473L] ) # # # ############################################################################## # #class JobsClientSuccess( ClientsTestCase ): # # def test_getJobsSimpleEff( self ): # WMS_Mock = Mock() # WMS_Mock.getSiteSummaryWeb.return_value = {'OK': True, # 'rpcStub': ( ( 'WorkloadManagement/WMSAdministrator', # {'skipCACheck': True, # 'delegatedGroup': 'diracAdmin', # 'delegatedDN': '/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=fstagni/CN=693025/CN=Federico Stagni', 'timeout': 600} ), # 'getSiteSummaryWeb', ( {'Site': 'LCG.CERN.ch'}, [], 0, 500 ) ), # 'Value': {'TotalRecords': 1, # 'ParameterNames': ['Site', 'GridType', 'Country', 'Tier', 'MaskStatus', 'Received', 'Checking', 'Staging', 'Waiting', 'Matched', 'Running', 'Stalled', 'Done', 'Completed', 'Failed', 'Efficiency', 'Status'], # 'Extras': {'ru': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'fr': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 12L, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'ch': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 4L, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 1L}, 'nl': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'uk': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'Unknown': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'de': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 1L, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'it': {'Received': 0, 'Staging': 0, 'Checking': 1L, 'Completed': 0, 'Waiting': 2L, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'hu': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'cy': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'bg': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'au': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 10L, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'il': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'br': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'ie': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 'pl': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 0, 'Stalled': 0, 'Matched': 0}, 
'es': {'Received': 0, 'Staging': 0, 'Checking': 0, 'Completed': 0, 'Waiting': 0, 'Failed': 0, 'Running': 0, 'Done': 2L, 'Stalled': 0, 'Matched': 0}}, # 'Records': [['LCG.CERN.ch', 'LCG', 'ch', 'Tier-1', 'Active', 0, 0, 0, 4L, 1L, 0, 0, 0, 0, 0, '0.0', 'Idle']]}} # res = self.JobsCli.getJobsSimpleEff( 'XX', RPCWMSAdmin = WMS_Mock ) # self.assertEqual( res, {'LCG.CERN.ch': 'Idle'} ) # ############################################################################## # #class PilotsClientSuccess( ClientsTestCase ): # ## def test_getPilotsStats(self): ## self.mockRSS.getPeriods.return_value = {'OK':True, 'Value':[]} ## for granularity in ValidRes: ## for status in ValidStatus: ## res = self.RSCli.getPeriods(granularity, 'XX', status, 20) ## self.assertEqual(res['Periods'], []) # # def test_getPilotsSimpleEff( self ): # #self.mockRSS.getPilotsSimpleEff.return_value = {'OK':True, 'Value':{'Records': [['', '', 0, 3L, 0, 0, 0, 283L, 66L, 0, 0, 352L, '1.00', '81.25', 'Fair', 'Yes']]}} # # WMS_Mock = Mock() # WMS_Mock.getPilotSummaryWeb.return_value = {'OK': True, # 'rpcStub': ( ( 'WorkloadManagement/WMSAdministrator', # {'skipCACheck': True, # 'delegatedGroup': 'diracAdmin', # 'delegatedDN': '/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=fstagni/CN=693025/CN=Federico Stagni', 'timeout': 600} ), # 'getPilotSummaryWeb', ( {'GridSite': 'LCG.Ferrara.it'}, [], 0, 500 ) ), # 'Value': { # 'TotalRecords': 0, # 'ParameterNames': ['Site', 'CE', 'Submitted', 'Ready', 'Scheduled', 'Waiting', 'Running', 'Done', 'Aborted', 'Done_Empty', 'Aborted_Hour', 'Total', 'PilotsPerJob', 'PilotJobEff', 'Status', 'InMask'], # 'Extras': {'Scheduled': 0, 'Status': 'Poor', 'Aborted_Hour': 20L, 'Waiting': 59L, 'Submitted': 6L, 'PilotsPerJob': '1.03', 'Ready': 0, 'Running': 0, 'PilotJobEff': '39.34', 'Done': 328L, 'Aborted': 606L, 'Done_Empty': 9L, 'Total': 999L}, # 'Records': []}} # # res = self.PilotsCli.getPilotsSimpleEff( 'Site', 'LCG.Ferrara.it', RPCWMSAdmin = WMS_Mock ) # self.assertEqual( res, None ) # res = self.PilotsCli.getPilotsSimpleEff( 'Resource', 'grid0.fe.infn.it', 'LCG.Ferrara.it', RPCWMSAdmin = WMS_Mock ) # self.assertEqual( res, None ) # ############################################################################## # #if __name__ == '__main__': # suite = unittest.defaultTestLoader.loadTestsFromTestCase( ClientsTestCase ) # suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( ResourceStatusClientSuccess ) ) # suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( JobsClientSuccess ) ) # suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( PilotsClientSuccess ) ) # testResult = unittest.TextTestRunner( verbosity = 2 ).run( suite )
""" Artifactor Artifactor is used to collect artifacts from a number of different plugins and put them into one place. Artifactor works around a series of events and is geared towards unit testing, though it is extensible and customizable enough that it can be used for a variety of purposes. The main guts of Artifactor is around the plugins. Before Artifactor can do anything it must have a configured plugin. This plugin is then configured to bind certain functions inside itself to certain events. When Artifactor is triggered to handle a certain event, it will tell the plugin that that particular event has happened and the plugin will respond accordingly. In addition to the plugins, Artifactor can also run certain callback functions before and after the hook function itself. These are call pre and post hook callbacks. Artifactor allows multiple pre and post hook callbacks to be defined per event, but does not guarantee the order that they are executed in. To allow data to be passed to and from hooks, Artifactor has the idea of global and event local values. The global values persist in the Artifactor instance for its lifetime, but the event local values are destroyed at the end of each event. Let's take the example of using the unit testing suite py.test as an example for Artifactor. Suppose we have a number of tests that run as part of a test suite and we wish to store a text file that holds the time the test was run and its result. This information is required to reside in a folder that is relevant to the test itself. This type of job is what Artifactor was designed for. To begin with, we need to create a plugin for Artifactor. Consider the following piece of code:: from artifactor import ArtifactorBasePlugin import time class Test(ArtifactorBasePlugin): def plugin_initialize(self): self.register_plugin_hook('start_test', self.start_test) self.register_plugin_hook('finish_test', self.finish_test) def start_test(self, test_name, test_location, artifact_path): filename = artifact_path + "-" + self.ident + ".log" with open(filename, "w") as f: f.write(test_name + "\n") f.write(str(time.time()) + "\n") def finish_test(self, test_name, artifact_path, test_result): filename = artifact_path + "-" + self.ident + ".log" with open(filename, "w+") as f: f.write(test_result) This is a typical plugin in Artifactor, it consists of 2 things. The first item is the special function called ``plugin_initialize()``. This is important and is equivilent to the ``__init__()`` that would usually be found in a class definition. Artifactor calls ``plugin_initialize()`` for each plugin as it loads it. Inside this section we register the hook functions to their associated events. Each event can only have a single function associated with it. Event names are able to be freely assigned so you can customize plugins to work to specific events for your use case. The ``register_plugin_hook()`` takes an event name as a string and a function to callback when that event is experienced. Next we have the hook functions themselves, ``start_test()`` and ``finish_test()``. These have arguments in their prototypes and these arguments are supplied by Artifactor and are created either as arguments to the ``fire_hook()`` function, which is responsible for actually telling Artifactor that an even has occured, or they are created in the pre hook script. Artifactor uses the global and local values referenced earlier to store these argument values. 
When a pre, post or hook callback finishes, it has the opportunity to supply
updates to both the global and local values dictionaries. In doing this, a
pre-hook script can prepare data, which could be stored in the locals
dictionary and then passed to the actual plugin hook as a keyword argument.
Local values override global values.

We need to look at an example of this, but first we must configure artifactor
and the plugin::

    log_dir: /home/me/artiout
    per_run: run #test, run, None
    overwrite: True
    artifacts:
        test:
            enabled: True
            plugin: test

Here we have defined a ``log_dir`` which will be the root of all of our
artifacts. We have asked Artifactor to group the artifacts by run, which means
that it will try to create a directory under the ``log_dir`` which indicates
which test "run" this was. We can also specify a value of "test" here, which
will move the test run identifying folder up to the leaf in the tree.

The ``log_dir`` and contents of the config are stored in global values as
``log_dir`` and ``artifactor_config`` respectively. These are the only two
global values which are set up by Artifactor.

This data is then passed to artifactor as a dict; we will assume a variable
name of ``config`` here. Let's consider how we would run this test::

    art = artifactor.artifactor
    art.set_config(config)
    art.register_plugin(test.Test, "test")
    artifactor.initialize()

    art.fire_hook('start_session', run_id=2235)
    art.fire_hook('start_test', test_name="my_test", test_location="tests/mytest.py")
    art.fire_hook('finish_test', test_name="my_test", test_location="tests/mytest.py",
                  test_result="FAILED")
    art.fire_hook('finish_session')

The ``art.register_plugin()`` call is used to bind a plugin name to a class
definition. Notice in the config section earlier, we have a ``plugin: test``
field. This name ``test`` is what Artifactor will look for when trying to find
the appropriate plugin. When we register the plugin with the
``register_plugin`` function, we take the ``test.Test`` class and essentially
give it the name ``test`` so that the names will tie up and the plugin will be
used.

Notice that we have sent some information along with the request to fire the
hook. Ignoring the ``start_session`` event for a minute, the ``start_test``
event sends a ``test_name`` and a ``test_location``. However, the
``start_test`` hook also requires an argument called ``artifact_path``. This
is not supplied when the hook is fired, and isn't set up as a global value, so
how does it get there?

Inside Artifactor, by default, a pre-hook callback called ``start_test()`` is
bound to the ``start_test`` event. This callback returns a local values update
which includes ``artifact_path``. This is how the ``artifact_path`` value is
supplied. This callback can be removed by running ``unregister_hook_callback``
with the name of the hook callback.
"""
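To make the local-values mechanism concrete, here is a hypothetical pre-hook
callback in the spirit of the built-in ``start_test()`` described above. The
signature and the merge behaviour are illustrative assumptions, not
Artifactor's actual internals::

    import os.path

    def start_test_prehook(test_name, test_location, log_dir=None, **kwargs):
        # Build a per-test path from the configured log_dir (assumed to come
        # in via the global values) and hand it back as a local value.
        artifact_path = os.path.join(log_dir or ".", test_name)
        # The returned dict is merged into the event's local values, so the
        # plugin hook later receives artifact_path as a keyword argument.
        return {"artifact_path": artifact_path}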
""" Display date and time. This module allows one or more datetimes to be displayed. All datetimes share the same format_time but can set their own timezones. Timezones are defined in the `format` using the TZ name in squiggly brackets eg `{GMT}`, `{Portugal}`, `{Europe/Paris}`, `{America/Argentina/Buenos_Aires}`. ISO-3166 two letter country codes eg `{de}` can also be used but if more than one timezone exists for the country eg `{us}` the first one will be selected. `{Local}` can be used for the local settings of your computer. Note: Timezones are case sensitive A full list of timezones can be found at https://en.wikipedia.org/wiki/List_of_tz_database_time_zones Configuration parameters: block_hours: length of time period for all blocks in hours (default 12) blocks: a string, where each character represents time period from the start of a time period. (default '🕛🕧🕐🕜🕑🕝🕒🕞🕓🕟🕔🕠🕕🕡🕖🕢🕗🕣🕘🕤🕙🕥🕚🕦') button_change_format: button that switches format used setting to None disables (default 1) button_change_time_format: button that switches format_time used. Setting to None disables (default 2) button_reset: button that switches display to the first timezone. Setting to None disables (default 3) cycle: If more than one display then how many seconds between changing the display (default 0) format: defines the timezones displayed. This can be a single string or a list. If a list is supplied then the formats can be cycled through using `cycle` or by button click. (default '{Local}') format_time: format to use for the time, strftime directives such as `%H` can be used this can be either a string or to allow multiple formats as a list. The one used can be changed by button click. *(default ['[{name_unclear} ]%c', '[{name_unclear} ]%x %X', '[{name_unclear} ]%a %H:%M', '[{name_unclear} ]{icon}'])* locale: Override the system locale. Examples: when set to 'fr_FR' %a on Tuesday is 'mar.'. (default None) round_to_nearest_block: defines how a block icon is chosen. Examples: when set to True, '13:14' is '🕐', '13:16' is '🕜' and '13:31' is '🕜'; when set to False, '13:14' is '🕐', '13:16' is '🕐' and '13:31' is '🕜'. (default True) Format placeholders: {icon} a character representing the time from `blocks` {name} friendly timezone name eg `Buenos Aires` {name_unclear} friendly timezone name eg `Buenos Aires` but is empty if only one timezone is provided {timezone} full timezone name eg `America/Argentina/Buenos_Aires` {timezone_unclear} full timezone name eg `America/Argentina/Buenos_Aires` but is empty if only one timezone is provided Requires: pytz: cross platform time zone library for python tzlocal: tzinfo object for the local timezone Examples: ``` # cycling through London, Warsaw, Tokyo clock { cycle = 30 format = ["{Europe/London}", "{Europe/Warsaw}", "{Asia/Tokyo}"] format_time = "{name} %H:%M" } # Show the time and date in New York clock { format = "Big Apple {America/New_York}" format_time = "%Y-%m-%d %H:%M:%S" } # wall clocks clock { format = "{Asia/Calcutta} {Africa/Nairobi} {Asia/Bangkok}" format_time = "{name} {icon}" } ``` @author USERNAME BSD SAMPLE OUTPUT {'full_text': 'Sun 15 Jan 2017 23:27:17 GMT'} london {'full_text': 'Thursday Feb 23 1:42 AM London'} """
"""CPStats, a package for collecting and reporting on program statistics. Overview ======== Statistics about program operation are an invaluable monitoring and debugging tool. Unfortunately, the gathering and reporting of these critical values is usually ad-hoc. This package aims to add a centralized place for gathering statistical performance data, a structure for recording that data which provides for extrapolation of that data into more useful information, and a method of serving that data to both human investigators and monitoring software. Let's examine each of those in more detail. Data Gathering -------------- Just as Python's `logging` module provides a common importable for gathering and sending messages, performance statistics would benefit from a similar common mechanism, and one that does *not* require each package which wishes to collect stats to import a third-party module. Therefore, we choose to re-use the `logging` module by adding a `statistics` object to it. That `logging.statistics` object is a nested dict. It is not a custom class, because that would: 1. require libraries and applications to import a third-party module in order to participate 2. inhibit innovation in extrapolation approaches and in reporting tools, and 3. be slow. There are, however, some specifications regarding the structure of the dict.:: { +----"SQLAlchemy": { | "Inserts": 4389745, | "Inserts per Second": | lambda s: s["Inserts"] / (time() - s["Start"]), | C +---"Table Statistics": { | o | "widgets": {-----------+ N | l | "Rows": 1.3M, | Record a | l | "Inserts": 400, | m | e | },---------------------+ e | c | "froobles": { s | t | "Rows": 7845, p | i | "Inserts": 0, a | o | }, c | n +---}, e | "Slow Queries": | [{"Query": "SELECT * FROM widgets;", | "Processing Time": 47.840923343, | }, | ], +----}, } The `logging.statistics` dict has four levels. The topmost level is nothing more than a set of names to introduce modularity, usually along the lines of package names. If the SQLAlchemy project wanted to participate, for example, it might populate the item `logging.statistics['SQLAlchemy']`, whose value would be a second-layer dict we call a "namespace". Namespaces help multiple packages to avoid collisions over key names, and make reports easier to read, to boot. The maintainers of SQLAlchemy should feel free to use more than one namespace if needed (such as 'SQLAlchemy ORM'). Note that there are no case or other syntax constraints on the namespace names; they should be chosen to be maximally readable by humans (neither too short nor too long). Each namespace, then, is a dict of named statistical values, such as 'Requests/sec' or 'Uptime'. You should choose names which will look good on a report: spaces and capitalization are just fine. In addition to scalars, values in a namespace MAY be a (third-layer) dict, or a list, called a "collection". For example, the CherryPy :class:`StatsTool` keeps track of what each request is doing (or has most recently done) in a 'Requests' collection, where each key is a thread ID; each value in the subdict MUST be a fourth dict (whew!) of statistical data about each thread. We call each subdict in the collection a "record". Similarly, the :class:`StatsTool` also keeps a list of slow queries, where each record contains data about each slow query, in order. Values in a namespace or record may also be functions, which brings us to: Extrapolation ------------- The collection of statistical data needs to be fast, as close to unnoticeable as possible to the host program. 
That requires us to minimize I/O, for example, but in Python it also means we need to minimize function calls. So when you are designing your namespace and record values, try to insert the most basic scalar values you already have on hand. When it comes time to report on the gathered data, however, we usually have much more freedom in what we can calculate. Therefore, whenever reporting tools (like the provided :class:`StatsPage` CherryPy class) fetch the contents of `logging.statistics` for reporting, they first call `extrapolate_statistics` (passing the whole `statistics` dict as the only argument). This makes a deep copy of the statistics dict so that the reporting tool can both iterate over it and even change it without harming the original. But it also expands any functions in the dict by calling them. For example, you might have a 'Current Time' entry in the namespace with the value "lambda scope: time.time()". The "scope" parameter is the current namespace dict (or record, if we're currently expanding one of those instead), allowing you access to existing static entries. If you're truly evil, you can even modify more than one entry at a time. However, don't try to calculate an entry and then use its value in further extrapolations; the order in which the functions are called is not guaranteed. This can lead to a certain amount of duplicated work (or a redesign of your schema), but that's better than complicating the spec. After the whole thing has been extrapolated, it's time for: Reporting --------- The :class:`StatsPage` class grabs the `logging.statistics` dict, extrapolates it all, and then transforms it to HTML for easy viewing. Each namespace gets its own header and attribute table, plus an extra table for each collection. This is NOT part of the statistics specification; other tools can format how they like. You can control which columns are output and how they are formatted by updating StatsPage.formatting, which is a dict that mirrors the keys and nesting of `logging.statistics`. The difference is that, instead of data values, it has formatting values. Use None for a given key to indicate to the StatsPage that a given column should not be output. Use a string with formatting (such as '%.3f') to interpolate the value(s), or use a callable (such as lambda v: v.isoformat()) for more advanced formatting. Any entry which is not mentioned in the formatting dict is output unchanged. Monitoring ---------- Although the HTML output takes pains to assign unique id's to each <td> with statistical data, you're probably better off fetching /cpstats/data, which outputs the whole (extrapolated) `logging.statistics` dict in JSON format. That is probably easier to parse, and doesn't have any formatting controls, so you get the "original" data in a consistently-serialized format. Note: there's no treatment yet for datetime objects. Try time.time() instead for now if you can. Nagios will probably thank you. Turning Collection Off ---------------------- It is recommended each namespace have an "Enabled" item which, if False, stops collection (but not reporting) of statistical data. Applications SHOULD provide controls to pause and resume collection by setting these entries to False or True, if present. 
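As a rough illustration of what the extrapolation step does, here is a minimal
sketch (not CherryPy's actual `extrapolate_statistics` implementation; the
recursion and copy details are assumptions for illustration)::

    import copy

    def extrapolate(scope):
        # Work on a deep copy so reporting never mutates the live stats.
        snapshot = copy.deepcopy(scope)
        for key, value in list(snapshot.items()):
            if callable(value):
                # Callables receive the enclosing namespace/record dict.
                snapshot[key] = value(snapshot)
            elif isinstance(value, dict):
                snapshot[key] = extrapolate(value)
            elif isinstance(value, list):
                snapshot[key] = [extrapolate(v) if isinstance(v, dict) else v
                                 for v in value]
        return snapshot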
Usage
=====

To collect statistics on CherryPy applications::

    from cherrypy.lib import cpstats
    appconfig['/']['tools.cpstats.on'] = True

To collect statistics on your own code::

    import logging
    import time

    # Initialize the repository
    if not hasattr(logging, 'statistics'): logging.statistics = {}

    # Initialize my namespace
    mystats = logging.statistics.setdefault('My Stuff', {})

    # Initialize my namespace's scalars and collections
    mystats.update({
        'Enabled': True,
        'Start Time': time.time(),
        'Important Events': 0,
        'Events/Second': lambda s: (
            (s['Important Events'] / (time.time() - s['Start Time']))),
        })
    ...
    for event in events:
        ...
        # Collect stats
        if mystats.get('Enabled', False):
            mystats['Important Events'] += 1

To report statistics::

    root.cpstats = cpstats.StatsPage()

To format statistics reports::

    See 'Reporting', above.
"""
"""Doctest for method/function calls. We're going the use these types for extra testing >>> from UserList import UserList >>> from UserDict import UserDict We're defining four helper functions >>> def e(a,b): ... print a, b >>> def f(*a, **k): ... print a, test_support.sortdict(k) >>> def g(x, *y, **z): ... print x, y, test_support.sortdict(z) >>> def h(j=1, a=2, h=3): ... print j, a, h Argument list examples >>> f() () {} >>> f(1) (1,) {} >>> f(1, 2) (1, 2) {} >>> f(1, 2, 3) (1, 2, 3) {} >>> f(1, 2, 3, *(4, 5)) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *[4, 5]) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *UserList([4, 5])) (1, 2, 3, 4, 5) {} Here we add keyword arguments >>> f(1, 2, 3, **{'a':4, 'b':5}) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *[4, 5], **{'a':6, 'b':7}) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **{'a':8, 'b': 9}) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} >>> f(1, 2, 3, **UserDict(a=4, b=5)) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *(4, 5), **UserDict(a=6, b=7)) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **UserDict(a=8, b=9)) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} Examples with invalid arguments (TypeErrors). We're also testing the function names in the exception messages. Verify clearing of SF bug #733667 >>> e(c=4) Traceback (most recent call last): ... TypeError: e() got an unexpected keyword argument 'c' >>> g() Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(*()) Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(*(), **{}) Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(1) 1 () {} >>> g(1, 2) 1 (2,) {} >>> g(1, 2, 3) 1 (2, 3) {} >>> g(1, 2, 3, *(4, 5)) 1 (2, 3, 4, 5) {} >>> class Nothing: pass ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not instance >>> class Nothing: ... def __len__(self): return 5 ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not instance >>> class Nothing(): ... def __len__(self): return 5 ... def __getitem__(self, i): ... if i<3: return i ... else: raise IndexError(i) ... >>> g(*Nothing()) 0 (1, 2) {} >>> class Nothing: ... def __init__(self): self.c = 0 ... def __iter__(self): return self ... def next(self): ... if self.c == 4: ... raise StopIteration ... c = self.c ... self.c += 1 ... return c ... >>> g(*Nothing()) 0 (1, 2, 3) {} Make sure that the function doesn't stomp the dictionary >>> d = {'a': 1, 'b': 2, 'c': 3} >>> d2 = d.copy() >>> g(1, d=4, **d) 1 () {'a': 1, 'b': 2, 'c': 3, 'd': 4} >>> d == d2 True What about willful misconduct? >>> def saboteur(**kw): ... kw['x'] = 'm' ... return kw >>> d = {} >>> kw = saboteur(a=1, **d) >>> d {} >>> g(1, 2, 3, **{'x': 4, 'y': 5}) Traceback (most recent call last): ... TypeError: g() got multiple values for keyword argument 'x' >>> f(**{1:2}) Traceback (most recent call last): ... TypeError: f() keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' >>> h(*h) Traceback (most recent call last): ... TypeError: h() argument after * must be a sequence, not function >>> dir(*h) Traceback (most recent call last): ... TypeError: dir() argument after * must be a sequence, not function >>> None(*h) Traceback (most recent call last): ... 
    TypeError: NoneType object argument after * must be a sequence, \
    not function

    >>> h(**h)
    Traceback (most recent call last):
      ...
    TypeError: h() argument after ** must be a mapping, not function

    >>> dir(**h)
    Traceback (most recent call last):
      ...
    TypeError: dir() argument after ** must be a mapping, not function

    >>> None(**h)
    Traceback (most recent call last):
      ...
    TypeError: NoneType object argument after ** must be a mapping, \
    not function

    >>> dir(b=1, **{'b': 1})
    Traceback (most recent call last):
      ...
    TypeError: dir() got multiple values for keyword argument 'b'

Another helper function

    >>> def f2(*a, **b):
    ...     return a, b

    >>> d = {}
    >>> for i in xrange(512):
    ...     key = 'k%d' % i
    ...     d[key] = i
    >>> a, b = f2(1, *(2,3), **d)
    >>> len(a), len(b), b == d
    (3, 512, True)

    >>> class Foo:
    ...     def method(self, arg1, arg2):
    ...         return arg1+arg2

    >>> x = Foo()
    >>> Foo.method(*(x, 1, 2))
    3
    >>> Foo.method(x, *(1, 2))
    3
    >>> Foo.method(*(1, 2, 3))
    Traceback (most recent call last):
      ...
    TypeError: unbound method method() must be called with Foo instance as \
    first argument (got int instance instead)
    >>> Foo.method(1, *[2, 3])
    Traceback (most recent call last):
      ...
    TypeError: unbound method method() must be called with Foo instance as \
    first argument (got int instance instead)

A PyCFunction that takes only positional parameters should allow an empty
keyword dictionary to pass without a complaint, but raise a TypeError if
the dictionary is not empty

    >>> try:
    ...     silence = id(1, *{})
    ...     True
    ... except:
    ...     False
    True

    >>> id(1, **{'foo': 1})
    Traceback (most recent call last):
      ...
    TypeError: id() takes no keyword arguments
"""
# RUN: %{lit} %{inputs}/discovery | FileCheck --check-prefix=CHECK-BASIC %s # CHECK-BASIC: Testing: 5 tests # Check that we exit with an error if we do not discover any tests, even with --allow-empty-runs. # # RUN: not %{lit} %{inputs}/nonexistent 2>&1 | FileCheck --check-prefix=CHECK-BAD-PATH %s # RUN: not %{lit} %{inputs}/nonexistent --allow-empty-runs 2>&1 | FileCheck --check-prefix=CHECK-BAD-PATH %s # CHECK-BAD-PATH: error: did not discover any tests for provided path(s) # Check that we exit with an error if we filter out all tests, but allow it with --allow-empty-runs. # Check that we exit with an error if we skip all tests, but allow it with --allow-empty-runs. # # RUN: not %{lit} --filter 'nonexistent' %{inputs}/discovery 2>&1 | FileCheck --check-prefixes=CHECK-BAD-FILTER,CHECK-BAD-FILTER-ERROR %s # RUN: %{lit} --filter 'nonexistent' --allow-empty-runs %{inputs}/discovery 2>&1 | FileCheck --check-prefixes=CHECK-BAD-FILTER,CHECK-BAD-FILTER-ALLOW %s # RUN: not %{lit} --filter-out '.*' %{inputs}/discovery 2>&1 | FileCheck --check-prefixes=CHECK-BAD-FILTER,CHECK-BAD-FILTER-ERROR %s # RUN: %{lit} --filter-out '.*' --allow-empty-runs %{inputs}/discovery 2>&1 | FileCheck --check-prefixes=CHECK-BAD-FILTER,CHECK-BAD-FILTER-ALLOW %s # CHECK-BAD-FILTER: error: filter did not match any tests (of 5 discovered). # CHECK-BAD-FILTER-ERROR: Use '--allow-empty-runs' to suppress this error. # CHECK-BAD-FILTER-ALLOW: Suppressing error because '--allow-empty-runs' was specified. # Check that regex-filtering works, is case-insensitive, and can be configured via env var. # # RUN: %{lit} --filter 'o[a-z]e' %{inputs}/discovery | FileCheck --check-prefix=CHECK-FILTER %s # RUN: %{lit} --filter 'O[A-Z]E' %{inputs}/discovery | FileCheck --check-prefix=CHECK-FILTER %s # RUN: env LIT_FILTER='o[a-z]e' %{lit} %{inputs}/discovery | FileCheck --check-prefix=CHECK-FILTER %s # RUN: %{lit} --filter-out 'test-t[a-z]' %{inputs}/discovery | FileCheck --check-prefix=CHECK-FILTER %s # RUN: %{lit} --filter-out 'test-t[A-Z]' %{inputs}/discovery | FileCheck --check-prefix=CHECK-FILTER %s # RUN: env LIT_FILTER_OUT='test-t[a-z]' %{lit} %{inputs}/discovery | FileCheck --check-prefix=CHECK-FILTER %s # CHECK-FILTER: Testing: 2 of 5 tests # CHECK-FILTER: Excluded: 3 # Check that maximum counts work # # RUN: %{lit} --max-tests 3 %{inputs}/discovery | FileCheck --check-prefix=CHECK-MAX %s # CHECK-MAX: Testing: 3 of 5 tests # CHECK-MAX: Excluded: 2 # Check that sharding partitions the testsuite in a way that distributes the # rounding error nicely (i.e. 
5/3 => 2 2 1, not 1 1 3 or whatever) # # RUN: %{lit} --num-shards 3 --run-shard 1 %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD0-ERR < %t.err %s # RUN: FileCheck --check-prefix=CHECK-SHARD0-OUT < %t.out %s # CHECK-SHARD0-ERR: note: Selecting shard 1/3 = size 2/5 = tests #(3*k)+1 = [1, 4] # CHECK-SHARD0-OUT: Testing: 2 of 5 tests # CHECK-SHARD0-OUT: Excluded: 3 # # RUN: %{lit} --num-shards 3 --run-shard 2 %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD1-ERR < %t.err %s # RUN: FileCheck --check-prefix=CHECK-SHARD1-OUT < %t.out %s # CHECK-SHARD1-ERR: note: Selecting shard 2/3 = size 2/5 = tests #(3*k)+2 = [2, 5] # CHECK-SHARD1-OUT: Testing: 2 of 5 tests # # RUN: %{lit} --num-shards 3 --run-shard 3 %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD2-ERR < %t.err %s # RUN: FileCheck --check-prefix=CHECK-SHARD2-OUT < %t.out %s # CHECK-SHARD2-ERR: note: Selecting shard 3/3 = size 1/5 = tests #(3*k)+3 = [3] # CHECK-SHARD2-OUT: Testing: 1 of 5 tests # Check that sharding via env vars works. # # RUN: env LIT_NUM_SHARDS=3 LIT_RUN_SHARD=1 %{lit} %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD0-ENV-ERR < %t.err %s # RUN: FileCheck --check-prefix=CHECK-SHARD0-ENV-OUT < %t.out %s # CHECK-SHARD0-ENV-ERR: note: Selecting shard 1/3 = size 2/5 = tests #(3*k)+1 = [1, 4] # CHECK-SHARD0-ENV-OUT: Testing: 2 of 5 tests # # RUN: env LIT_NUM_SHARDS=3 LIT_RUN_SHARD=2 %{lit} %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD1-ENV-ERR < %t.err %s # RUN: FileCheck --check-prefix=CHECK-SHARD1-ENV-OUT < %t.out %s # CHECK-SHARD1-ENV-ERR: note: Selecting shard 2/3 = size 2/5 = tests #(3*k)+2 = [2, 5] # CHECK-SHARD1-ENV-OUT: Testing: 2 of 5 tests # # RUN: env LIT_NUM_SHARDS=3 LIT_RUN_SHARD=3 %{lit} %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD2-ENV-ERR < %t.err %s # RUN: FileCheck --check-prefix=CHECK-SHARD2-ENV-OUT < %t.out %s # CHECK-SHARD2-ENV-ERR: note: Selecting shard 3/3 = size 1/5 = tests #(3*k)+3 = [3] # CHECK-SHARD2-ENV-OUT: Testing: 1 of 5 tests # Check that providing more shards than tests results in 1 test per shard # until we run out, then 0. # # RUN: %{lit} --num-shards 100 --run-shard 2 %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD-BIG-ERR1 < %t.err %s # RUN: FileCheck --check-prefix=CHECK-SHARD-BIG-OUT1 < %t.out %s # CHECK-SHARD-BIG-ERR1: note: Selecting shard 2/100 = size 1/5 = tests #(100*k)+2 = [2] # CHECK-SHARD-BIG-OUT1: Testing: 1 of 5 tests # # RUN: %{lit} --num-shards 100 --run-shard 6 %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD-BIG-ERR2 < %t.err %s # CHECK-SHARD-BIG-ERR2: note: Selecting shard 6/100 = size 0/5 = tests #(100*k)+6 = [] # CHECK-SHARD-BIG-ERR2: warning: shard does not contain any tests. Consider decreasing the number of shards. # # RUN: %{lit} --num-shards 100 --run-shard 50 %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD-BIG-ERR3 < %t.err %s # CHECK-SHARD-BIG-ERR3: note: Selecting shard 50/100 = size 0/5 = tests #(100*k)+50 = [] # CHECK-SHARD-BIG-ERR3: warning: shard does not contain any tests. Consider decreasing the number of shards. 
# Check that range constraints are enforced # # RUN: not %{lit} --num-shards 0 --run-shard 2 %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD-ERR < %t.err %s # CHECK-SHARD-ERR: error: argument --num-shards: requires positive integer, but found '0' # # RUN: not %{lit} --num-shards 3 --run-shard 4 %{inputs}/discovery >%t.out 2>%t.err # RUN: FileCheck --check-prefix=CHECK-SHARD-ERR2 < %t.err %s # CHECK-SHARD-ERR2: error: --run-shard must be between 1 and --num-shards (inclusive)
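# The stride-based selection shown in the "Selecting shard i/n = tests
# #(n*k)+i" notes above can be sketched in a few lines of Python. This is an
# illustration of the formula only, not lit's actual implementation:
#
#   def select_shard(tests, run_shard, num_shards):
#       # 1-based shard index; shard i takes tests i, i+n, i+2n, ...
#       return tests[run_shard - 1::num_shards]
#
#   # e.g. 5 tests, 3 shards -> sizes 2, 2, 1, matching the notes above
#   assert select_shard([1, 2, 3, 4, 5], 1, 3) == [1, 4]
#   assert select_shard([1, 2, 3, 4, 5], 2, 3) == [2, 5]
#   assert select_shard([1, 2, 3, 4, 5], 3, 3) == [3]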
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True). Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters between
        keys and values are surrounded by spaces.
"""
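A small usage sketch of the API described above (the ini content and option
names are made up for illustration)::

    from configparser import ConfigParser

    cp = ConfigParser()
    cp.read_string("[server]\nhost = localhost\nport = 8080\n")

    print(cp.get("server", "host"))                        # -> localhost
    print(cp.getint("server", "port"))                     # -> 8080
    print(cp.getboolean("server", "tls", fallback=False))  # -> False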
# Test 64-bit COMPARE AND BRANCH in cases where the sheer number of # instructions causes some branches to be out of range. # RUN: python %s | llc -mtriple=s390x-linux-gnu | FileCheck %s # Construct: # # before0: # conditional branch to after0 # ... # beforeN: # conditional branch to after0 # main: # 0xffcc bytes, from MVIY instructions # conditional branch to main # after0: # ... # conditional branch to main # afterN: # # Each conditional branch sequence occupies 12 bytes if it uses a short # branch and 16 if it uses a long one. The ones before "main:" have to # take the branch length into account, which is 6 for short branches, # so the final (0x34 - 6) / 12 == 3 blocks can use short branches. # The ones after "main:" do not, so the first 0x34 / 12 == 4 blocks # can use short branches. The conservative algorithm we use makes # one of the forward branches unnecessarily long, as noted in the # check output below. # # CHECK: lgb [[REG:%r[0-5]]], 0(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 1(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 2(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 3(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 4(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # ...as mentioned above, the next one could be a CGRJE instead... # CHECK: lgb [[REG:%r[0-5]]], 5(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 6(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 7(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # ...main goes here... # CHECK: lgb [[REG:%r[0-5]]], 25(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 26(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 27(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 28(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 29(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 30(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 31(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 32(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]]
# Sketch - A Python-based interactive drawing program
# Copyright (C) 1996, 1997, 1998 by NAME
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Library General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Library General Public License for more details.
#
# You should have received a copy of the GNU Library General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

# Functions to manipulate selection info
#
# Representation
#
# The set of currently selected objects in Sketch is represented as a
# list of tuples. Each of the tuples has the form:
#
#       (PATH, OBJ)
#
# where OBJ is a selected object and PATH is a tuple of ints describing
# the path through the hierarchy of objects to OBJ, usually starting
# from the document at the top of the hierarchy. Each item in PATH is
# the index of the next object in the path. For example, the second
# object in the first layer has the PATH (0, 1) (indexes start from 0).
#
# This representation serves two purposes:
#
#   1. storing the path to the object allows fast access to the
#      parents of the selected object.
#
#   2. it allows sorting the list by path, which results in a list
#      with the objects lowest in the stack of objects at the front.
#
#      A sorted list is important when changing the stacking order of
#      objects, since the indices, i.e. the path elements, may change
#      during the operation.
#
#      Sorting the list also allows us to make sure that each selected
#      object is listed exactly once in the list.
#
# This representation, if the list is sorted, is called the _standard
# representation_.
#
# Alternative Representations:
#
# There are several alternative representations that are mainly useful
# in the methods of compound objects that rearrange the children. In
# those methods, the path is usually taken relative to self. Where
# selection info has to be passed to children, to rearrange their
# children, the first component of the path is stripped, so that the
# path is relative to the child.
#
# All of the alternative representations are lists of tuples sorted at
# least by the first item of the tuples.
#
# Tree:
#
# An alternative representation of the selection info is a list of
# tuples of the form:
#
#       (INDEX, LIST)
#
# where INDEX is just the first part of the PATH of the standard
# representation and LIST is a list of selection info in standard
# representation but with each PATH stripped of its first component
# which is INDEX. That is, LIST is selection info in standard form
# relative to the compound object given by INDEX.
#
# Tree2:
#
# Just like Tree, but if LIST would contain just one item with an empty
# PATH (an empty tuple), LIST is replaced by the object.
#
# Sliced Tree:
#
# A variant of Tree2, where consecutive items with an object (i.e.
# something that is not a list) are replaced by a tuple `(start, end)'
# where start is the lowest INDEX and end the highest. Consecutive items
# are items where the INDEX parts are consecutive integers.
#
#
# Creating Selection Info:
#
# Selecting objects is done for instance by the GraphicsObject method
# SelectSubobject.
# In a compound object, when it has determined that a
# certain non compound child obj is to be selected, this method
# constructs a selection info tuple by calling build_info:
#
#       info1 = build_info(idx1, obj)
#
# idx1 is the index of obj in the compound object's list of children.
# info1 will then be just a tuple: ((idx1,), obj). This info is returned
# to the caller, its parent, which is often another compound object.
# This parent then extends the selection info with
#
#       info2 = prepend_idx(idx2, info1)
#
# This results in a new tuple info2: ((idx2, idx1), obj). idx2 is, of
# course, the index of the compound object in its parent's list of
# children.
#
# Finally, the document object receives such a selection info tuple from
# one of its layers, prepends that layer's index to the info and puts it
# into the list of selected objects.
#
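# Minimal sketches consistent with the description above (illustrative only;
# the real module's implementations may differ in detail):

def build_info(idx, obj):
    # A path with a single component: obj is a direct child at index idx.
    return ((idx,), obj)

def prepend_idx(idx, info):
    # Extend the path: the child that produced `info` sits at index idx
    # in its parent's list of children.
    path, obj = info
    return ((idx,) + path, obj)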
#!/usr/bin/env python # -*- coding: utf-8 -*- # ***********************IMPORTANT NMAP LICENSE TERMS************************ # * * # * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is * # * also a registered trademark of Insecure.Com LLC. This program is free * # * software; you may redistribute and/or modify it under the terms of the * # * GNU General Public License as published by the Free Software * # * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS * # * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, * # * modify, and redistribute this software under certain conditions. If * # * you wish to embed Nmap technology into proprietary software, we sell * # * alternative licenses (contact EMAIL Dozens of software * # * vendors already license Nmap technology such as host discovery, port * # * scanning, OS detection, version detection, and the Nmap Scripting * # * Engine. * # * * # * Note that the GPL places important restrictions on "derivative works", * # * yet it does not provide a detailed definition of that term. To avoid * # * misunderstandings, we interpret that term as broadly as copyright law * # * allows. For example, we consider an application to constitute a * # * derivative work for the purpose of this license if it does any of the * # * following with any software or content covered by this license * # * ("Covered Software"): * # * * # * o Integrates source code from Covered Software. * # * * # * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db * # * or nmap-service-probes. * # * * # * o Is designed specifically to execute Covered Software and parse the * # * results (as opposed to typical shell or execution-menu apps, which will * # * execute anything you tell them to). * # * * # * o Includes Covered Software in a proprietary executable installer. The * # * installers produced by InstallShield are an example of this. Including * # * Nmap with other software in compressed or archival form does not * # * trigger this provision, provided appropriate open source decompression * # * or de-archiving software is widely available for no charge. For the * # * purposes of this license, an installer is considered to include Covered * # * Software even if it actually retrieves a copy of Covered Software from * # * another source during runtime (such as by downloading it from the * # * Internet). * # * * # * o Links (statically or dynamically) to a library which does any of the * # * above. * # * * # * o Executes a helper program, module, or script to do any of the above. * # * * # * This list is not exclusive, but is meant to clarify our interpretation * # * of derived works with some common examples. Other people may interpret * # * the plain GPL differently, so we consider this a special exception to * # * the GPL that we apply to Covered Software. Works which meet any of * # * these conditions must conform to all of the terms of this license, * # * particularly including the GPL Section 3 requirements of providing * # * source code and allowing free redistribution of the work as a whole. * # * * # * As another special exception to the GPL terms, Insecure.Com LLC grants * # * permission to link the code of this program with any version of the * # * OpenSSL library which is distributed under a license identical to that * # * listed in the included docs/licenses/OpenSSL.txt file, and distribute * # * linked combinations including the two. 
* # * * # * Any redistribution of Covered Software, including any derived works, * # * must obey and carry forward all of the terms of this license, including * # * obeying all GPL rules and restrictions. For example, source code of * # * the whole work must be provided and free redistribution must be * # * allowed. All GPL references to "this License", are to be treated as * # * including the terms and conditions of this license text as well. * # * * # * Because this license imposes special exceptions to the GPL, Covered * # * Work may not be combined (even as part of a larger work) with plain GPL * # * software. The terms, conditions, and exceptions of this license must * # * be included as well. This license is incompatible with some other open * # * source licenses as well. In some cases we can relicense portions of * # * Nmap or grant special permissions to use it in other open source * # * software. Please contact EMAIL with any such requests. * # * Similarly, we don't incorporate incompatible open source software into * # * Covered Software without special permission from the copyright holders. * # * * # * If you have any questions about the licensing restrictions on using * # * Nmap in other works, we are happy to help. As mentioned above, we also * # * offer alternative licenses to integrate Nmap into proprietary * # * applications and appliances. These contracts have been sold to dozens * # * of software vendors, and generally include a perpetual license as well * # * as providing for priority support and updates. They also fund the * # * continued development of Nmap. Please email EMAIL for further * # * information. * # * * # * If you have received a written license agreement or contract for * # * Covered Software stating terms other than these, you may choose to use * # * and redistribute Covered Software under those terms instead of these. * # * * # * Source is provided to this software because we believe users have a * # * right to know exactly what a program is going to do before they run it. * # * This also allows you to audit the software for security holes (none * # * have been found so far). * # * * # * Source code also allows you to port Nmap to new platforms, fix bugs, * # * and add new features. You are highly encouraged to send your changes * # * to the EMAIL mailing list for possible incorporation into the * # * main distribution. By sending these changes to Fyodor or one of the * # * Insecure.Org development mailing lists, or checking them into the Nmap * # * source code repository, it is understood (unless you specify otherwise) * # * that you are offering the Nmap Project (Insecure.Com LLC) the * # * unlimited, non-exclusive right to reuse, modify, and relicense the * # * code. Nmap will always be available Open Source, but this is important * # * because the inability to relicense code has caused devastating problems * # * for other Free Software projects (such as KDE and NASM). We also * # * occasionally relicense the code to third parties as discussed above. * # * If you wish to specify special license conditions of your * # * contributions, just say so when you send them. * # * * # * This program is distributed in the hope that it will be useful, but * # * WITHOUT ANY WARRANTY; without even the implied warranty of * # * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the Nmap * # * license file for more details (it's in a COPYING file included with * # * Nmap, and also available from https://svn.nmap.org/nmap/COPYING * # * * # ***************************************************************************/
""" Beta diversity measures (:mod:`skbio.diversity.beta`) ===================================================== .. currentmodule:: skbio.diversity.beta This package contains helper functions for working with scipy's pairwise distance (``pdist``) functions in scikit-bio, and will eventually be expanded to contain pairwise distance/dissimilarity methods that are not implemented (or planned to be implemented) in scipy. The functions in this package currently support applying ``pdist`` functions to all pairs of samples in a sample by observation count or abundance matrix and returning an ``skbio.DistanceMatrix`` object. This application is illustrated below for a few different forms of input. Functions --------- .. autosummary:: :toctree: generated/ pw_distances pw_distances_from_table Examples -------- Create a table containing 7 OTUs and 6 samples: .. plot:: :context: >>> from skbio.diversity.beta import pw_distances >>> import numpy as np >>> data = [[23, 64, 14, 0, 0, 3, 1], ... [0, 3, 35, 42, 0, 12, 1], ... [0, 5, 5, 0, 40, 40, 0], ... [44, 35, 9, 0, 1, 0, 0], ... [0, 2, 8, 0, 35, 45, 1], ... [0, 0, 25, 35, 0, 19, 0]] >>> ids = list('ABCDEF') Compute Bray-Curtis distances between all pairs of samples and return a ``DistanceMatrix`` object: >>> bc_dm = pw_distances(data, ids, "braycurtis") >>> print(bc_dm) 6x6 distance matrix IDs: 'A', 'B', 'C', 'D', 'E', 'F' Data: [[ 0. 0.78787879 0.86666667 0.30927835 0.85714286 0.81521739] [ 0.78787879 0. 0.78142077 0.86813187 0.75 0.1627907 ] [ 0.86666667 0.78142077 0. 0.87709497 0.09392265 0.71597633] [ 0.30927835 0.86813187 0.87709497 0. 0.87777778 0.89285714] [ 0.85714286 0.75 0.09392265 0.87777778 0. 0.68235294] [ 0.81521739 0.1627907 0.71597633 0.89285714 0.68235294 0. ]] Compute Jaccard distances between all pairs of samples and return a ``DistanceMatrix`` object: >>> j_dm = pw_distances(data, ids, "jaccard") >>> print(j_dm) 6x6 distance matrix IDs: 'A', 'B', 'C', 'D', 'E', 'F' Data: [[ 0. 0.83333333 1. 1. 0.83333333 1. ] [ 0.83333333 0. 1. 1. 0.83333333 1. ] [ 1. 1. 0. 1. 1. 1. ] [ 1. 1. 1. 0. 1. 1. ] [ 0.83333333 0.83333333 1. 1. 0. 1. ] [ 1. 1. 1. 1. 1. 0. ]] Determine if the resulting distance matrices are significantly correlated by computing the Mantel correlation between them. Then determine if the p-value is significant based on an alpha of 0.05: >>> from skbio.stats.distance import mantel >>> r, p_value, n = mantel(j_dm, bc_dm) >>> print(r) -0.209362157621 >>> print(p_value < 0.05) False Compute PCoA for both distance matrices, and then find the Procrustes M-squared value that results from comparing the coordinate matrices. >>> from skbio.stats.ordination import PCoA >>> bc_pc = PCoA(bc_dm).scores() >>> j_pc = PCoA(j_dm).scores() >>> from skbio.stats.spatial import procrustes >>> print(procrustes(bc_pc.site, j_pc.site)[2]) 0.466134984787 All of this only gets interesting in the context of sample metadata, so let's define some: >>> import pandas as pd >>> try: ... # not necessary for normal use ... pd.set_option('show_dimensions', True) ... except KeyError: ... pass >>> sample_md = { ... 'A': {'body_site': 'gut', 'subject': 's1'}, ... 'B': {'body_site': 'skin', 'subject': 's1'}, ... 'C': {'body_site': 'tongue', 'subject': 's1'}, ... 'D': {'body_site': 'gut', 'subject': 's2'}, ... 'E': {'body_site': 'tongue', 'subject': 's2'}, ... 
'F': {'body_site': 'skin', 'subject': 's2'}} >>> sample_md = pd.DataFrame.from_dict(sample_md, orient='index') >>> sample_md subject body_site A s1 gut B s1 skin C s1 tongue D s2 gut E s2 tongue F s2 skin <BLANKLINE> [6 rows x 2 columns] Now let's plot our PCoA results, coloring each sample by the subject it was taken from: >>> fig = bc_pc.plot(sample_md, 'subject', ... axis_labels=('PC 1', 'PC 2', 'PC 3'), ... title='Samples colored by subject', cmap='jet', s=50) .. plot:: :context: We don't see any clustering/grouping of samples. If we were to instead color the samples by the body site they were taken from, we see that the samples form three separate groups: >>> import matplotlib.pyplot as plt >>> plt.close('all') # not necessary for normal use >>> fig = bc_pc.plot(sample_md, 'body_site', ... axis_labels=('PC 1', 'PC 2', 'PC 3'), ... title='Samples colored by body site', cmap='jet', s=50) Ordination techniques, such as PCoA, are useful for exploratory analysis. The next step is to quantify the strength of the grouping/clustering that we see in ordination plots. There are many statistical methods available to accomplish this; many operate on distance matrices. Let's use ANOSIM to quantify the strength of the clustering we see in the ordination plots above, using our Bray-Curtis distance matrix and sample metadata. First test the grouping of samples by subject: >>> from skbio.stats.distance import anosim >>> results = anosim(bc_dm, sample_md, column='subject', permutations=999) >>> results['test statistic'] -0.4074074074074075 >>> results['p-value'] < 0.1 False The negative value of ANOSIM's R statistic indicates anti-clustering and the p-value is insignificant at an alpha of 0.1. Now let's test the grouping of samples by body site: >>> results = anosim(bc_dm, sample_md, column='body_site', permutations=999) >>> results['test statistic'] 1.0 >>> results['p-value'] < 0.1 True The R statistic of 1.0 indicates strong separation of samples based on body site. The p-value is significant at an alpha of 0.1. References ---------- .. [1] http://matplotlib.org/examples/mplot3d/scatter3d_demo.html """
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I an trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over. The drag-and-drop processes can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event). If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence. Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than to do it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave(). At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit(). """
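A rough sketch of this protocol follows (Python 2 module names; the Source and DropBox classes are illustrative and not part of Tkdnd)::

    import Tkinter
    import Tkdnd

    class Source:
        # Any application object can be dragged; it only needs dnd_end().
        def dnd_end(self, target, event):
            print "drag ended; target object:", target

    class DropBox(Tkinter.Frame):
        # A widget advertises itself as a potential target via dnd_accept.
        def dnd_accept(self, source, event):
            return self                  # serve as our own target object
        def dnd_enter(self, source, event):
            self.config(relief="ridge", borderwidth=2)
        def dnd_motion(self, source, event):
            pass
        def dnd_leave(self, source, event):
            self.config(relief="flat", borderwidth=2)
        def dnd_commit(self, source, event):
            print "dropped:", source
            self.dnd_leave(source, event)

    root = Tkinter.Tk()
    DropBox(root, width=120, height=120).pack()
    drag_me = Tkinter.Label(root, text="drag me")
    drag_me.bind("<ButtonPress>", lambda e: Tkdnd.dnd_start(Source(), e))
    root.mainloop()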
"""Request body processing for CherryPy. .. versionadded:: 3.2 Application authors have complete control over the parsing of HTTP request entities. In short, :attr:`cherrypy.request.body<cherrypy._cprequest.Request.body>` is now always set to an instance of :class:`RequestBody<cherrypy._cpreqbody.RequestBody>`, and *that* class is a subclass of :class:`Entity<cherrypy._cpreqbody.Entity>`. When an HTTP request includes an entity body, it is often desirable to provide that information to applications in a form other than the raw bytes. Different content types demand different approaches. Examples: * For a GIF file, we want the raw bytes in a stream. * An HTML form is better parsed into its component fields, and each text field decoded from bytes to unicode. * A JSON body should be deserialized into a Python dict or list. When the request contains a Content-Type header, the media type is used as a key to look up a value in the :attr:`request.body.processors<cherrypy._cpreqbody.Entity.processors>` dict. If the full media type is not found, then the major type is tried; for example, if no processor is found for the 'image/jpeg' type, then we look for a processor for the 'image' types altogether. If neither the full type nor the major type has a matching processor, then a default processor is used (:func:`default_proc<cherrypy._cpreqbody.Entity.default_proc>`). For most types, this means no processing is done, and the body is left unread as a raw byte stream. Processors are configurable in an 'on_start_resource' hook. Some processors, especially those for the 'text' types, attempt to decode bytes to unicode. If the Content-Type request header includes a 'charset' parameter, this is used to decode the entity. Otherwise, one or more default charsets may be attempted, although this decision is up to each processor. If a processor successfully decodes an Entity or Part, it should set the :attr:`charset<cherrypy._cpreqbody.Entity.charset>` attribute on the Entity or Part to the name of the successful charset, so that applications can easily re-encode or transcode the value if they wish. If the Content-Type of the request entity is of major type 'multipart', then the above parsing process, and possibly a decoding process, is performed for each part. For both the full entity and multipart parts, a Content-Disposition header may be used to fill :attr:`name<cherrypy._cpreqbody.Entity.name>` and :attr:`filename<cherrypy._cpreqbody.Entity.filename>` attributes on the request.body or the Part. .. _custombodyprocessors: Custom Processors ================= You can add your own processors for any specific or major MIME type. Simply add it to the :attr:`processors<cherrypy._cprequest.Entity.processors>` dict in a hook/tool that runs at ``on_start_resource`` or ``before_request_body``. Here's the built-in JSON tool for an example:: def json_in(force=True, debug=False): request = cherrypy.serving.request def json_processor(entity): '''Read application/json data into request.json.''' if not entity.headers.get("Content-Length", ""): raise cherrypy.HTTPError(411) body = entity.fp.read() try: request.json = json_decode(body) except ValueError: raise cherrypy.HTTPError(400, 'Invalid JSON document') if force: request.body.processors.clear() request.body.default_proc = cherrypy.HTTPError( 415, 'Expected an application/json content type') request.body.processors['application/json'] = json_processor We begin by defining a new ``json_processor`` function to stick in the ``processors`` dictionary. 
All processor functions take a single argument, the ``Entity`` instance they are to process. It will be called whenever a request is received (for those URIs where the tool is turned on) which has a ``Content-Type`` of "application/json". First, it checks for a valid ``Content-Length`` (raising 411 if not valid), then reads the remaining bytes on the socket. The ``fp`` object knows its own length, so it won't hang waiting for data that never arrives. It will return when all data has been read. Then, we decode those bytes using Python's built-in ``json`` module, and stick the decoded result onto ``request.json``. If it cannot be decoded, we raise 400. If the "force" argument is True (the default), the ``Tool`` clears the ``processors`` dict so that request entities of other ``Content-Types`` aren't parsed at all. Since there's no entry for those other MIME types, the ``default_proc`` method of ``cherrypy.request.body`` is called. This does nothing by default, which usually gives the page handler an opportunity to handle the body itself. But in our case, we want to raise 415, so we replace ``request.body.default_proc`` with the error (``HTTPError`` instances, when called, raise themselves). If we were defining a custom processor, we could do so without making a ``Tool``. Just add the config entry:: request.body.processors = {'application/json': json_processor} Note that you can only replace the ``processors`` dict wholesale this way, not update the existing one. """
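As a further hedged sketch, a processor for a 'text' subtype might decode the body and record the successful charset, as described earlier; the 'text/csv' type and the ``request.rows`` attribute below are invented for illustration, and the ``entity.content_type`` access is an assumption about the Entity API::

    import cherrypy

    def csv_processor(entity):
        '''Decode text/csv bodies and store the lines on request.rows.'''
        body = entity.fp.read()
        charset = entity.content_type.params.get('charset', 'utf-8')
        text = body.decode(charset)
        # Record the charset that worked, per the convention above.
        entity.charset = charset
        cherrypy.serving.request.rows = text.splitlines()

    def csv_in():
        cherrypy.serving.request.body.processors['text/csv'] = csv_processor

    cherrypy.tools.csv_in = cherrypy.Tool('before_request_body', csv_in)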
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
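A short usage sketch tying these methods together (the section and option names are arbitrary)::

    import configparser

    cfg = configparser.ConfigParser()
    cfg.read_dict({'server': {'host': 'example.org', 'port': '8080'}})

    host = cfg.get('server', 'host')             # 'example.org'
    port = cfg.getint('server', 'port')          # 8080
    debug = cfg.getboolean('server', 'debug', fallback=False)

    cfg.set('server', 'port', '9090')            # values are stored as strings
    with open('example.ini', 'w') as fp:
        cfg.write(fp)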
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I an trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over. The drag-and-drop processes can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event). If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence. Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than to do it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave(). At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit(). """
""" #Cyclic 4 R.<x1,x2,x3,x4> = QQ[] polys = [x1+x2+x3+x4,x1*x2+x2*x3+x3*x4+x4*x1,x1*x2*x3+x2*x3*x4+x3*x4*x1+x4*x1*x2] TropicalPrevariety(polys) #Should be equivalent (up to homogenization) to: R.ideal(polys).groebner_fan().tropical_intersection().rays() #Reduced cyclic 8 R.<y_1,y_2,y_3,y_4,y_5,y_6,y_7> = QQ[] polys = [1 + y_1 + y_2 + y_3 + y_4 + y_5 + y_6 + y_7,y_1 + y_1*y_2 + y_2*y_3 + y_3*y_4 + y_4*y_5 + y_5*y_6 + y_6*y_7 + y_7,y_1*y_2 + y_1*y_2*y_3 + y_2*y_3*y_4 + y_3*y_4*y_5 + y_4*y_5*y_6 + y_5*y_6*y_7 + y_6*y_7 + y_7*y_1,y_1*y_2*y_3 + y_1*y_2*y_3*y_4 + y_2*y_3*y_4*y_5 + y_3*y_4*y_5*y_6 + y_4*y_5*y_6*y_7 + y_5*y_6*y_7 + y_6*y_7*y_1 + y_7*y_1*y_2,y_1*y_2*y_3*y_4 + y_1*y_2*y_3*y_4*y_5 + y_2*y_3*y_4*y_5*y_6 + y_3*y_4*y_5*y_6*y_7 + y_4*y_5*y_6*y_7 + y_5*y_6*y_7*y_1 + y_6*y_7*y_1*y_2 + y_7*y_1*y_2*y_3,y_1*y_2*y_3*y_4*y_5 + y_1*y_2*y_3*y_4*y_5*y_6 + y_2*y_3*y_4*y_5*y_6*y_7 + y_3*y_4*y_5*y_6*y_7 + y_4*y_5*y_6*y_7*y_1 + y_5*y_6*y_7*y_1*y_2 + y_6*y_7*y_1*y_2*y_3 + y_7*y_1*y_2*y_3*y_4,y_1*y_2*y_3*y_4*y_5*y_6 + y_1*y_2*y_3*y_4*y_5*y_6*y_7 + y_2*y_3*y_4*y_5*y_6*y_7+ y_3*y_4*y_5*y_6*y_7*y_1 + y_4*y_5*y_6*y_7*y_1*y_2 + y_5*y_6*y_7*y_1*y_2*y_3+ y_6*y_7*y_1*y_2*y_3*y_4 + y_7*y_1*y_2*y_3*y_4*y_5] TropicalPrevariety(polys) """
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <lkcl@samba.org> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
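A minimal sketch of the pattern described above -- a mix-in combined with a server class and a stream handler (the module is spelled SocketServer on Python 2, socketserver on Python 3)::

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # rfile and wfile are created by StreamRequestHandler's setup().
            for line in self.rfile:
                self.wfile.write(line)

    class ThreadingEchoServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        # The mix-in must come first so its request processing wins.
        pass

    if __name__ == '__main__':
        server = ThreadingEchoServer(('localhost', 9999), EchoHandler)
        server.serve_forever()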
""" <Program> namespace.py <Started> September 2009 <Author> NAME This is the namespace layer that ensures separation of the namespaces of untrusted code and our code. It provides a single public function to be used to setup the context in which untrusted code is exec'd (that is, the context that is seen as the __builtins__ by the untrusted code). The general idea is that any function or object that is available between trusted and untrusted code gets wrapped in a function or object that does validation when the function or object is used. In general, if user code is not calling any functions improperly, neither the user code nor our trusted code should ever notice that the objects and functions they are dealing with have been wrapped by this namespace layer. All of our own api functions are wrapped in NamespaceAPIFunctionWrapper objects whose wrapped_function() method is mapped in to the untrusted code's context. When called, the wrapped_function() method performs argument, return value, and exception validation as well as additional wrapping and unwrapping, as needed, that is specific to the function that was ultimately being called. If the return value or raised exceptions are not considered acceptable, a NamespaceViolationError is raised. If the arguments are not acceptable, a TypeError is raised. Note that callback functions that are passed from untrusted user code to trusted code are also wrapped (these are arguments to wrapped API functions, so we get to wrap them before calling the underlying function). The reason we wrap these is so that we can intercept calls to the callback functions and wrap arguments passed to them, making sure that handles passed as arguments to the callbacks get wrapped before user code sees them. The function and object wrappers have been defined based on the API as documented at https://seattle.cs.washington.edu/wiki/RepyLibrary Example of using this module (this is really the only way to use the module): import namespace usercontext = {} namespace.wrap_and_insert_api_functions(usercontext) safe.safe_exec(usercode, usercontext) The above code will result in the dict usercontext being populated with keys that are the names of the functions available to the untrusted code (such as 'open') and the values are the wrapped versions of the actual functions to be called (such as 'emulfile.emulated_open'). Note that some functions wrapped by this module lose some python argument flexibility. Wrapped functions can generally only have keyword args in situations where the arguments are optional. Using keyword arguments for required args may not be supported, depending on the implementation of the specific argument check/wrapping/unwrapping helper functions for that particular wrapped function. If this becomes a problem, it can be dealt with by complicating some of the argument checking/wrapping/unwrapping code in this module to make the checking functions more flexible in how they take their arguments. Implementation details: The majority of the code in this module is made up of helper functions to do argument checking, etc. for specific wrapped functions. The most important parts to look at in this module for maintenance and auditing are the following: USERCONTEXT_WRAPPER_INFO The USERCONTEXT_WRAPPER_INFO is a dictionary that defines the API functions that are wrapped and inserted into the user context when wrap_and_insert_api_functions() is called. 
FILE_OBJECT_WRAPPER_INFO LOCK_OBJECT_WRAPPER_INFO TCP_SOCKET_OBJECT_WRAPPER_INFO TCP_SERVER_SOCKET_OBJECT_WRAPPER_INFO UDP_SERVER_SOCKET_OBJECT_WRAPPER_INFO VIRTUAL_NAMESPACE_OBJECT_WRAPPER_INFO The above six dictionaries define the methods available on the wrapped objects that are returned by wrapped functions. Additionally, timerhandle and commhandle objects are wrapped but instances of these do not have any public methods and so no *_WRAPPER_INFO dictionaries are defined for them. NamespaceObjectWrapper NamespaceAPIFunctionWrapper The above two classes are the only two types of objects that will be allowed in untrusted code. In fact, instances of NamespaceAPIFunctionWrapper are never actually allowed in untrusted code. Rather, each function that is wrapped has a single NamespaceAPIFunctionWrapper instance created when wrap_and_insert_api_functions() is called and what is actually made available to the untrusted code is the wrapped_function() method of each of the corresponding NamespaceAPIFunctionWrapper instances. NamespaceInternalError If this error is raised anywhere (along with any other unexpected exceptions), it should result in termination of the running program (see the except blocks in NamespaceAPIFunctionWrapper.wrapped_function). """
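The wrapping idea can be sketched as follows; the argument checker and the error handling here are simplified assumptions, not the module's real validation logic (which, as noted above, terminates the program on unexpected exceptions)::

    class NamespaceViolationError(Exception):
        pass

    class NamespaceAPIFunctionWrapper(object):
        def __init__(self, func, check_args):
            self._func = func
            self._check_args = check_args

        def wrapped_function(self, *args):
            # Unacceptable arguments raise TypeError, per the docstring.
            if not self._check_args(args):
                raise TypeError("bad arguments to " + self._func.__name__)
            try:
                return self._func(*args)
            except (ValueError, EnvironmentError):
                raise          # exception types considered acceptable here
            except Exception:
                # Anything unexpected is treated as a namespace violation.
                raise NamespaceViolationError("unexpected internal failure")

    def _check_open_args(args):
        return len(args) >= 1 and isinstance(args[0], str)

    usercontext = {}
    usercontext['open'] = NamespaceAPIFunctionWrapper(
        open, _check_open_args).wrapped_function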
""" ================= Structured Arrays ================= Introduction ============ Numpy provides powerful capabilities to create arrays of structured datatype. These arrays permit one to manipulate the data by named fields. A simple example will show what is meant.: :: >>> x = np.array([(1,2.,'Hello'), (2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> x array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less. If we index this array at the second position we get the second structure: :: >>> x[1] (2,3.,"World") Conveniently, one can access any field of the array by indexing using the string that names that field. :: >>> y = x['foo'] >>> y array([ 2., 3.], dtype=float32) >>> y[:] = 2*y >>> y array([ 4., 6.], dtype=float32) >>> x array([(1, 4.0, 'Hello'), (2, 6.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) In these examples, y is a simple float array consisting of the 2nd field in the structured type. But, rather than being a copy of the data in the structured array, it is a view, i.e., it shares exactly the same memory locations. Thus, when we updated this array by doubling its values, the structured array shows the corresponding values as doubled as well. Likewise, if one changes the structured array, the field view also changes: :: >>> x[1] = (-1,-1.,"Master") >>> x array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) >>> y array([ 4., -1.], dtype=float32) Defining Structured Arrays ========================== One defines a structured array through the dtype object. There are **several** alternative ways to define the fields of a record. Some of these variants provide backward compatibility with Numeric, numarray, or another module, and should not be used except for such purposes. These will be so noted. One specifies record structure in one of four alternative ways, using an argument (as supplied to a dtype function keyword or a dtype object constructor itself). This argument must be one of the following: 1) string, 2) tuple, 3) list, or 4) dictionary. Each of these is briefly described below. 1) String argument. In this case, the constructor expects a comma-separated list of type specifiers, optionally with extra shape information. The fields are given the default names 'f0', 'f1', 'f2' and so on. The type specifiers can take 4 different forms: :: a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n> (representing bytes, ints, unsigned ints, floats, complex and fixed length strings of specified byte lengths) b) int8,...,uint8,...,float16, float32, float64, complex64, complex128 (this time with bit sizes) c) older Numeric/numarray type specifications (e.g. Float32). Don't use these in new code! d) Single character type specifiers (e.g H for unsigned short ints). Avoid using these unless you must. Details can be found in the Numpy book These different styles can be mixed within the same string (but why would you want to do that?). Furthermore, each type specifier can be prefixed with a repetition number, or a shape. In these cases an array element is created, i.e., an array within a record. That array is still referred to as a single field. 
An example: :: >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64') >>> x array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])], dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))]) By using strings to define the record structure, it precludes being able to name the fields in the original definition. The names can be changed as shown later, however. 2) Tuple argument: The only relevant tuple case that applies to record structures is when a structure is mapped to an existing data type. This is done by pairing in a tuple, the existing data type with a matching dtype definition (using any of the variants being described here). As an example (using a definition using a list, so see 3) for further details): :: >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')])) >>> x array([0, 0, 0]) >>> x['r'] array([0, 0, 0], dtype=uint8) In this case, an array is produced that looks and acts like a simple int32 array, but also has definitions for fields that use only one byte of the int32 (a bit like Fortran equivalencing). 3) List argument: In this case the record structure is defined with a list of tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field ('' is permitted), 2) the type of the field, and 3) the shape (optional). For example:: >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) >>> x array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])], dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))]) 4) Dictionary argument: two different forms are permitted. The first consists of a dictionary with two required keys ('names' and 'formats'), each having an equal sized list of values. The format list contains any type/shape specifier allowed in other contexts. The names must be strings. There are two optional keys: 'offsets' and 'titles'. Each must be a correspondingly matching list to the required two where offsets contain integer offsets for each field, and titles are objects containing metadata for each field (these do not have to be strings), where the value of None is permitted. As an example: :: >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[('col1', '>i4'), ('col2', '>f4')]) The other dictionary form permitted is a dictionary of name keys with tuple values specifying type, offset, and an optional title. :: >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')]) Accessing and modifying field names =================================== The field names are an attribute of the dtype object defining the structure. For the last example: :: >>> x.dtype.names ('col1', 'col2') >>> x.dtype.names = ('x', 'y') >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')]) >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names <type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2 Accessing field titles ==================================== The field titles provide a standard place to put associated info for fields. They do not have to be strings. 
:: >>> x.dtype.fields['x'][2] 'title 1' Accessing multiple fields at once ==================================== You can access multiple fields at once using a list of field names: :: >>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))], dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) Notice that `x` is created with a list of tuples. :: >>> x[['x','y']] array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)], dtype=[('x', '<f4'), ('y', '<f4')]) >>> x[['x','value']] array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]), (1.0, [[2.0, 6.0], [2.0, 6.0]])], dtype=[('x', '<f4'), ('value', '<f4', (2, 2))]) The fields are returned in the order they are asked for.:: >>> x[['y','x']] array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)], dtype=[('y', '<f4'), ('x', '<f4')]) Filling structured arrays ========================= Structured arrays can be filled by field or row by row. :: >>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')]) >>> arr['var1'] = np.arange(5) If you fill it in row by row, it takes a tuple (but not a list or array!):: >>> arr[0] = (10,20) >>> arr array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)], dtype=[('var1', '<f8'), ('var2', '<f8')]) Record Arrays ============= For convenience, numpy provides "record arrays" which allow one to access fields of structured arrays by attribute rather than by index. Record arrays are structured arrays wrapped using a subclass of ndarray, :class:`numpy.recarray`, which allows field access by attribute on the array object, and record arrays also use a special datatype, :class:`numpy.record`, which allows field access by attribute on the individual elements of the array. The simplest way to create a record array is with :func:`numpy.rec.array`: :: >>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> recordarr.bar array([ 2., 3.], dtype=float32) >>> recordarr[1:2] rec.array([(2, 3.0, 'World')], dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]) >>> recordarr[1:2].foo array([2], dtype=int32) >>> recordarr.foo[1:2] array([2], dtype=int32) >>> recordarr[1].baz 'World' numpy.rec.array can convert a wide variety of arguments into record arrays, including normal structured arrays: :: >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')]) >>> recordarr = np.rec.array(arr) The numpy.rec module provides a number of other convenience functions for creating record arrays, see :ref:`record array creation routines <routines.array-creation.rec>`. A record array representation of a structured array can be obtained using the appropriate :ref:`view`: :: >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')]) >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)), ... type=np.recarray) For convenience, viewing an ndarray as type `np.recarray` will automatically convert to `np.record` datatype, so the dtype can be left out of the view: :: >>> recordarr = arr.view(np.recarray) >>> recordarr.dtype dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])) To get back to a plain ndarray both the dtype and type must be reset.
The following view does so, taking into account the unusual case that the recordarr was not a structured type: :: >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but as a plain ndarray otherwise. :: >>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))], ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])]) >>> type(recordarr.foo) <type 'numpy.ndarray'> >>> type(recordarr.bar) <class 'numpy.core.records.recarray'> Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will be inaccessible by attribute but may still be accessed by index. """
#import os #import mock #from uuid import uuid4 #from urllib2 import urlopen # #from django.test import TestCase #from django.core.files.base import ContentFile #from django.conf import settings #from django.core.files.storage import FileSystemStorage # #from boto.s3.key import Key # #from storages.backends import s3boto # #__all__ = ( # 'SafeJoinTest', # 'S3BotoStorageTests', # #'S3BotoStorageFileTests', #) # #class S3BotoTestCase(TestCase): # @mock.patch('storages.backends.s3boto.S3Connection') # def setUp(self, S3Connection): # self.storage = s3boto.S3BotoStorage() # # #class SafeJoinTest(TestCase): # def test_normal(self): # path = s3boto.safe_join("", "path/to/somewhere", "other", "path/to/somewhere") # self.assertEquals(path, "path/to/somewhere/other/path/to/somewhere") # # def test_with_dot(self): # path = s3boto.safe_join("", "path/./somewhere/../other", "..", # ".", "to/./somewhere") # self.assertEquals(path, "path/to/somewhere") # # def test_base_url(self): # path = s3boto.safe_join("base_url", "path/to/somewhere") # self.assertEquals(path, "base_url/path/to/somewhere") # # def test_base_url_with_slash(self): # path = s3boto.safe_join("base_url/", "path/to/somewhere") # self.assertEquals(path, "base_url/path/to/somewhere") # # def test_suspicious_operation(self): # self.assertRaises(ValueError, # s3boto.safe_join, "base", "../../../../../../../etc/passwd") # #class S3BotoStorageTests(S3BotoTestCase): # # def test_storage_save(self): # """ # Test saving a file # """ # name = 'test_storage_save.txt' # content = ContentFile('new content') # self.storage.save(name, content) # self.storage.bucket.get_key.assert_called_once_with(name) # # key = self.storage.bucket.get_key.return_value # key.set_metadata.assert_called_with('Content-Type', 'text/plain') # key.set_contents_from_file.assert_called_with( # content, # headers={}, # policy=self.storage.acl, # reduced_redundancy=self.storage.reduced_redundancy, # ) # # def test_storage_save_gzip(self): # """ # Test saving a file with gzip enabled. # """ # if not s3boto.IS_GZIPPED: # Gzip not available. # return # name = 'test_storage_save.css' # content = ContentFile("I should be gzip'd") # self.storage.save(name, content) # key = self.storage.bucket.get_key.return_value # key.set_metadata.assert_called_with('Content-Type', 'text/css') # key.set_contents_from_file.assert_called_with( # content, # headers={'Content-Encoding': 'gzip'}, # policy=self.storage.acl, # reduced_redundancy=self.storage.reduced_redundancy, # ) # # def test_compress_content_len(self): # """ # Test that file returned by _compress_content() is readable. # """ # if not s3boto.IS_GZIPPED: # Gzip not available. # return # content = ContentFile("I should be gzip'd") # content = self.storage._compress_content(content) # self.assertTrue(len(content.read()) > 0) # # def test_storage_open_write(self): # """ # Test opening a file in write mode # """ # name = 'test_open_for_writing.txt' # content = 'new content' # # # Set the ACL header used when creating/writing data. 
# self.storage.bucket.connection.provider.acl_header = 'x-amz-acl' # # Set the mocked key's bucket # self.storage.bucket.get_key.return_value.bucket = self.storage.bucket # # Set the name of the mock object # self.storage.bucket.get_key.return_value.name = name # # file = self.storage.open(name, 'w') # self.storage.bucket.get_key.assert_called_with(name) # # file.write(content) # self.storage.bucket.initiate_multipart_upload.assert_called_with( # name, # headers={'x-amz-acl': 'public-read'}, # reduced_redundancy=self.storage.reduced_redundancy, # ) # # # Save the internal file before closing # _file = file.file # file.close() # file._multipart.upload_part_from_file.assert_called_with( # _file, 1, headers=self.storage.headers, # ) # file._multipart.complete_upload.assert_called_once() # # #def test_storage_exists_and_delete(self): # # # show file does not exist # # name = self.prefix_path('test_exists.txt') # # self.assertFalse(self.storage.exists(name)) # # # # # create the file # # content = 'new content' # # file = self.storage.open(name, 'w') # # file.write(content) # # file.close() # # # # # show file exists # # self.assertTrue(self.storage.exists(name)) # # # # # delete the file # # self.storage.delete(name) # # # # # show file does not exist # # self.assertFalse(self.storage.exists(name)) # # def test_storage_listdir_base(self): # file_names = ["some/path/1.txt", "2.txt", "other/path/3.txt", "4.txt"] # # self.storage.bucket.list.return_value = [] # for p in file_names: # key = mock.MagicMock(spec=Key) # key.name = p # self.storage.bucket.list.return_value.append(key) # # dirs, files = self.storage.listdir("") # # self.assertEqual(len(dirs), 2) # for directory in ["some", "other"]: # self.assertTrue(directory in dirs, # """ "%s" not in directory list "%s".""" % ( # directory, dirs)) # # self.assertEqual(len(files), 2) # for filename in ["2.txt", "4.txt"]: # self.assertTrue(filename in files, # """ "%s" not in file list "%s".""" % ( # filename, files)) # # def test_storage_listdir_subdir(self): # file_names = ["some/path/1.txt", "some/2.txt"] # # self.storage.bucket.list.return_value = [] # for p in file_names: # key = mock.MagicMock(spec=Key) # key.name = p # self.storage.bucket.list.return_value.append(key) # # dirs, files = self.storage.listdir("some/") # self.assertEqual(len(dirs), 1) # self.assertTrue('path' in dirs, # """ "path" not in directory list "%s".""" % (dirs,)) # # self.assertEqual(len(files), 1) # self.assertTrue('2.txt' in files, # """ "2.txt" not in files list "%s".""" % (files,)) # # #def test_storage_size(self): # # name = self.prefix_path('test_storage_size.txt') # # content = 'new content' # # f = ContentFile(content) # # self.storage.save(name, f) # # self.assertEqual(self.storage.size(name), f.size) # # # #def test_storage_url(self): # # name = self.prefix_path('test_storage_size.txt') # # content = 'new content' # # f = ContentFile(content) # # self.storage.save(name, f) # # self.assertEqual(content, urlopen(self.storage.url(name)).read()) # ##class S3BotoStorageFileTests(S3BotoTestCase): ## def test_multipart_upload(self): ## nparts = 2 ## name = self.prefix_path("test_multipart_upload.txt") ## mode = 'w' ## f = s3boto.S3BotoStorageFile(name, mode, self.storage) ## content_length = 1024 * 1024# 1 MB ## content = 'a' * content_length ## ## bytes = 0 ## target = f._write_buffer_size * nparts ## while bytes < target: ## f.write(content) ## bytes += content_length ## ## # make the buffer roll over so f._write_counter ## # is incremented ## f.write("finished") ## 
## # verify upload was multipart and correctly partitioned ## self.assertEqual(f._write_counter, nparts) ## ## # complete the upload ## f.close() ## ## # verify that the remaining buffered bytes were ## # uploaded when the file was closed. ## self.assertEqual(f._write_counter, nparts+1)
""" .. index:: multidimensional scaling (mds) .. index:: single: projection; multidimensional scaling (mds) ********************************** Multidimensional scaling (``mds``) ********************************** The functionality to perform multidimensional scaling (http://en.wikipedia.org/wiki/Multidimensional_scaling). The main class to perform multidimensional scaling is :class:`Orange.projection.mds.MDS` .. autoclass:: Orange.projection.mds.MDS :members: :exclude-members: Torgerson, get_distance, get_stress, calc_stress, run .. automethod:: calc_stress(stress_func=SgnRelStress) .. automethod:: run(iter, stress_func=SgnRelStress, eps=1e-3, progress_callback=None) Stress functions ================ Stress functions that can be used for MDS have to be implemented as functions or callable classes: .. method:: \ __call__(correct, current, weight=1.0) Compute the stress using the correct and the current distance value (the :obj:`Orange.projection.mds.MDS.distances` and :obj:`Orange.projection.mds.MDS.projected_distances` elements). :param correct: correct (actual) distance between elements, represented by the two points. :type correct: float :param current: current distance between the points in the MDS space. :type current: float This module provides the following stress functions: * :obj:`SgnRelStress` * :obj:`KruskalStress` * :obj:`SammonStress` * :obj:`SgnSammonStress` Examples ======== MDS Scatterplot --------------- The following script computes the Euclidean distance between the data instances and runs MDS. Final coordinates are plotted with matplotlib (not included with orange, http://matplotlib.sourceforge.net/). Example (:download:`mds-scatterplot.py <code/mds-scatterplot.py>`) .. literalinclude:: code/mds-scatterplot.py :lines: 7- The script produces a file *mds-scatterplot.py.png*. Color denotes the class. Iris is a relatively simple data set with respect to classification; to no surprise we see that MDS finds such instance placement in 2D where instances of different classes are well separated. Note that MDS has no knowledge of points' classes. .. image:: files/mds-scatterplot.png A more advanced example ----------------------- The following script performs 10 steps of Smacof optimization before computing the stress. This is suitable if you have a large dataset and want to save some time. Example (:download:`mds-advanced.py <code/mds-advanced.py>`) .. literalinclude:: code/mds-advanced.py :lines: 7- A few representative lines of the output are:: <-0.633911848068, 0.112218663096> [5.1, 3.5, 1.4, 0.2, 'Iris-setosa'] <-0.624193906784, -0.111143872142> [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'] ... <0.265250980854, 0.237793982029> [7.0, 3.2, 4.7, 1.4, 'Iris-versicolor'] <0.208580598235, 0.116296850145> [6.4, 3.2, 4.5, 1.5, 'Iris-versicolor'] ... <0.635814905167, 0.238721415401> [6.3, 3.3, 6.0, 2.5, 'Iris-virginica'] <0.356859534979, -0.175976261497> [5.8, 2.7, 5.1, 1.9, 'Iris-virginica'] ... """
# Test 64-bit COMPARE AND BRANCH in cases where the sheer number of # instructions causes some branches to be out of range. # RUN: python %s | llc -mtriple=s390x-linux-gnu | FileCheck %s # Construct: # # before0: # conditional branch to after0 # ... # beforeN: # conditional branch to after0 # main: # 0xffcc bytes, from MVIY instructions # conditional branch to main # after0: # ... # conditional branch to main # afterN: # # Each conditional branch sequence occupies 12 bytes if it uses a short # branch and 16 if it uses a long one. The ones before "main:" have to # take the branch length into account, which is 6 for short branches, # so the final (0x34 - 6) / 12 == 3 blocks can use short branches. # The ones after "main:" do not, so the first 0x34 / 12 == 4 blocks # can use short branches. The conservative algorithm we use makes # one of the forward branches unnecessarily long, as noted in the # check output below. # # CHECK: lgb [[REG:%r[0-5]]], 0(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 1(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 2(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 3(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 4(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # ...as mentioned above, the next one could be a CGRJE instead... # CHECK: lgb [[REG:%r[0-5]]], 5(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 6(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 7(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # ...main goes here... # CHECK: lgb [[REG:%r[0-5]]], 25(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 26(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 27(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 28(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 29(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 30(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 31(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 32(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]]
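# Editorial aside: a quick check of the block counts quoted above, using only
# the figures stated in this comment (kept commented out so the RUN line still
# pipes pure generator output to llc):
#
#     slack = 0x34                  # branch-sequence bytes within short range
#     before = (slack - 6) // 12    # == 3 blocks before "main:" go short
#     after = slack // 12           # == 4 blocks after "main:" go short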
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True). Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters between
        keys and values are surrounded by spaces.
"""
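A minimal usage sketch against the API summarized above (Python 3 standard
library; the section and option names are illustrative)::

    import textwrap
    from configparser import ConfigParser

    parser = ConfigParser()
    parser.read_string(textwrap.dedent("""\
        [server]
        host = localhost
        port = 8080
        debug = yes
        """))

    print(parser.sections())                     # ['server']
    print(parser.get('server', 'host'))          # 'localhost'
    print(parser.getint('server', 'port'))       # 8080
    print(parser.getboolean('server', 'debug'))  # True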
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl Created
# 1999-01-15 fl Changed dateTime to use localtime
# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl Fixed dateTime constructor, etc.
# 1999-02-02 fl Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl Changed boolean to check the truth value of its argument
# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl Make sure response tuple is a singleton
# 2001-03-29 fl Don't require empty params element (from NAME)
# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl Allow Transport subclass to override getparser
# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl Remove containers from memo cache when done with them
# 2001-10-01 fl Use faster escape method (80% dumps speedup)
# 2001-10-02 fl More dumps microtuning
# 2001-10-04 fl Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl Avoid buffered read when possible (from NAME)
# 2002-04-07 fl Added pythondoc comments
# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl Added error constants (from NAME)
# 2002-06-27 fl Merged with Python CVS version
# 2002-10-22 fl Added basic authentication (based on code from NAME)
# 2003-01-22 sm Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm Use cStringIO if available
# 2003-04-25 ak Add support for nil
# 2003-06-15 gn Add support for time.struct_time
# 2003-07-12 gp Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
#
# things to look into some day:

# TODO: sort out True/False/boolean issues for Python 2.3
""" This is a procedural interface to the matplotlib object-oriented plotting library. The following plotting commands are provided; the majority have Matlab(TM) analogs and similar argument. _Plotting commands acorr - plot the autocorrelation function annotate - annotate something in the figure arrow - add an arrow to the axes axes - Create a new axes axhline - draw a horizontal line across axes axvline - draw a vertical line across axes axhspan - draw a horizontal bar across axes axvspan - draw a vertical bar across axes axis - Set or return the current axis limits bar - make a bar chart barh - a horizontal bar chart broken_barh - a set of horizontal bars with gaps box - set the axes frame on/off state boxplot - make a box and whisker plot cla - clear current axes clabel - label a contour plot clf - clear a figure window clim - adjust the color limits of the current image close - close a figure window colorbar - add a colorbar to the current figure cohere - make a plot of coherence contour - make a contour plot contourf - make a filled contour plot csd - make a plot of cross spectral density delaxes - delete an axes from the current figure draw - Force a redraw of the current figure errorbar - make an errorbar graph figlegend - make legend on the figure rather than the axes figimage - make a figure image figtext - add text in figure coords figure - create or change active figure fill - make filled polygons findobj - recursively find all objects matching some criteria gca - return the current axes gcf - return the current figure gci - get the current image, or None getp - get a handle graphics property grid - set whether gridding is on hist - make a histogram hold - set the axes hold state ioff - turn interaction mode off ion - turn interaction mode on isinteractive - return True if interaction mode is on imread - load image file into array imshow - plot image data ishold - return the hold state of the current axes legend - make an axes legend loglog - a log log plot matshow - display a matrix in a new figure preserving aspect pcolor - make a pseudocolor plot pcolormesh - make a pseudocolor plot using a quadrilateral mesh pie - make a pie chart plot - make a line plot plot_date - plot dates plotfile - plot column data from an ASCII tab/space/comma delimited file pie - pie charts polar - make a polar plot on a PolarAxes psd - make a plot of power spectral density quiver - make a direction field (arrows) plot rc - control the default params rgrids - customize the radial grids and labels for polar savefig - save the current figure scatter - make a scatter plot setp - set a handle graphics property semilogx - log x axis semilogy - log y axis show - show the figures specgram - a spectrogram plot spy - plot sparsity pattern using markers or image stem - make a stem plot subplot - make a subplot (numrows, numcols, axesnum) subplots_adjust - change the params controlling the subplot positions of current figure subplot_tool - launch the subplot configuration tool suptitle - add a figure title table - add a table to the plot text - add some text at location x,y to the current axes thetagrids - customize the radial theta grids and labels for polar title - add a title to the current axes xcorr - plot the autocorrelation function of x and y xlim - set/get the xlimits ylim - set/get the ylimits xticks - set/get the xticks yticks - set/get the yticks xlabel - add an xlabel to the current axes ylabel - add a ylabel to the current axes autumn - set the default colormap to autumn bone - set the default 
colormap to bone cool - set the default colormap to cool copper - set the default colormap to copper flag - set the default colormap to flag gray - set the default colormap to gray hot - set the default colormap to hot hsv - set the default colormap to hsv jet - set the default colormap to jet pink - set the default colormap to pink prism - set the default colormap to prism spring - set the default colormap to spring summer - set the default colormap to summer winter - set the default colormap to winter spectral - set the default colormap to spectral _Event handling connect - register an event handler disconnect - remove a connected event handler _Matrix commands cumprod - the cumulative product along a dimension cumsum - the cumulative sum along a dimension detrend - remove the mean or besdt fit line from an array diag - the k-th diagonal of matrix diff - the n-th differnce of an array eig - the eigenvalues and eigen vectors of v eye - a matrix where the k-th diagonal is ones, else zero find - return the indices where a condition is nonzero fliplr - flip the rows of a matrix up/down flipud - flip the columns of a matrix left/right linspace - a linear spaced vector of N values from min to max inclusive logspace - a log spaced vector of N values from min to max inclusive meshgrid - repeat x and y to make regular matrices ones - an array of ones rand - an array from the uniform distribution [0,1] randn - an array from the normal distribution rot90 - rotate matrix k*90 degress counterclockwise squeeze - squeeze an array removing any dimensions of length 1 tri - a triangular matrix tril - a lower triangular matrix triu - an upper triangular matrix vander - the Vandermonde matrix of vector x svd - singular value decomposition zeros - a matrix of zeros _Probability levypdf - The levy probability density function from the char. func. normpdf - The Gaussian probability density function rand - random numbers from the uniform distribution randn - random numbers from the normal distribution _Statistics corrcoef - correlation coefficient cov - covariance matrix amax - the maximum along dimension m mean - the mean along dimension m median - the median along dimension m amin - the minimum along dimension m norm - the norm of vector x prod - the product along dimension m ptp - the max-min along dimension m std - the standard deviation along dimension m asum - the sum along dimension m _Time series analysis bartlett - M-point Bartlett window blackman - M-point Blackman window cohere - the coherence using average periodiogram csd - the cross spectral density using average periodiogram fft - the fast Fourier transform of vector x hamming - M-point Hamming window hanning - M-point Hanning window hist - compute the histogram of x kaiser - M length Kaiser window psd - the power spectral density using average periodiogram sinc - the sinc function of array x _Dates date2num - convert python datetimes to numeric representation drange - create an array of numbers for date plots num2date - convert numeric type (float days since 0001) to datetime _Other angle - the angle of a complex array griddata - interpolate irregularly distributed data to a regular grid load - load ASCII data into array polyfit - fit x, y to an n-th order polynomial polyval - evaluate an n-th order polynomial roots - the roots of the polynomial coefficients in p save - save an array to an ASCII file trapz - trapezoidal integration __end """
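A minimal sketch using a handful of the plotting commands listed above
(editorial example; it assumes matplotlib is installed and that a file
``example.png`` may be written to the working directory)::

    from pylab import figure, plot, xlabel, ylabel, title, grid, savefig

    figure()
    plot([1, 2, 3, 4], [1, 4, 9, 16])
    xlabel('x')
    ylabel('x squared')
    title('a minimal pylab plot')
    grid(True)
    savefig('example.png')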
"""Exception classes for CherryPy. CherryPy provides (and uses) exceptions for declaring that the HTTP response should be a status other than the default "200 OK". You can ``raise`` them like normal Python exceptions. You can also call them and they will raise themselves; this means you can set an :class:`HTTPError<cherrypy._cperror.HTTPError>` or :class:`HTTPRedirect<cherrypy._cperror.HTTPRedirect>` as the :attr:`request.handler<cherrypy._cprequest.Request.handler>`. .. _redirectingpost: Redirecting POST ================ When you GET a resource and are redirected by the server to another Location, there's generally no problem since GET is both a "safe method" (there should be no side-effects) and an "idempotent method" (multiple calls are no different than a single call). POST, however, is neither safe nor idempotent--if you charge a credit card, you don't want to be charged twice by a redirect! For this reason, *none* of the 3xx responses permit a user-agent (browser) to resubmit a POST on redirection without first confirming the action with the user: ===== ================================= =========== 300 Multiple Choices Confirm with the user 301 Moved Permanently Confirm with the user 302 Found (Object moved temporarily) Confirm with the user 303 See Other GET the new URI--no confirmation 304 Not modified (for conditional GET only--POST should not raise this error) 305 Use Proxy Confirm with the user 307 Temporary Redirect Confirm with the user ===== ================================= =========== However, browsers have historically implemented these restrictions poorly; in particular, many browsers do not force the user to confirm 301, 302 or 307 when redirecting POST. For this reason, CherryPy defaults to 303, which most user-agents appear to have implemented correctly. Therefore, if you raise HTTPRedirect for a POST request, the user-agent will most likely attempt to GET the new URI (without asking for confirmation from the user). We realize this is confusing for developers, but it's the safest thing we could do. You are of course free to raise ``HTTPRedirect(uri, status=302)`` or any other 3xx status if you know what you're doing, but given the environment, we couldn't let any of those be the default. Custom Error Handling ===================== .. image:: /refman/cperrors.gif Anticipated HTTP responses -------------------------- The 'error_page' config namespace can be used to provide custom HTML output for expected responses (like 404 Not Found). Supply a filename from which the output will be read. The contents will be interpolated with the values %(status)s, %(message)s, %(traceback)s, and %(version)s using plain old Python `string formatting <http://www.python.org/doc/2.6.4/library/stdtypes.html#string-formatting-operations>`_. :: _cp_config = {'error_page.404': os.path.join(localDir, "static/index.html")} Beginning in version 3.1, you may also provide a function or other callable as an error_page entry. It will be passed the same status, message, traceback and version arguments that are interpolated into templates:: def error_page_402(status, message, traceback, version): return "Error %s - Well, I'm very sorry but you haven't paid!" % status cherrypy.config.update({'error_page.402': error_page_402}) Also in 3.1, in addition to the numbered error codes, you may also supply "error_page.default" to handle all codes which do not have their own error_page entry. 
Unanticipated errors
--------------------

CherryPy also has a generic error handling mechanism: whenever an
unanticipated error occurs in your code, it will call
:func:`Request.error_response<cherrypy._cprequest.Request.error_response>`
to set the response status, headers, and body. By default, this is the same
output as :class:`HTTPError(500) <cherrypy._cperror.HTTPError>`. If you want
to provide some other behavior, you generally replace
"request.error_response".

Here is some sample code that shows how to display a custom error message
and send an e-mail containing the error::

    from cherrypy import _cperror

    def handle_error():
        cherrypy.response.status = 500
        cherrypy.response.body = [
            "<html><body>Sorry, an error occurred</body></html>"
        ]
        sendMail('error@domain.com', 'Error in your web app',
                 _cperror.format_exc())

    class Root:
        _cp_config = {'request.error_response': handle_error}

Note that you have to explicitly set
:attr:`response.body <cherrypy._cprequest.Response.body>` and not simply
return an error message as a result.
"""
#
# ElementTree
# $Id: ElementTree.py 3224 2007-08-27 21:23:39Z USERNAME $
#
# light-weight XML support for Python 1.5.2 and later.
#
# history:
# 2001-10-20 fl created (from various sources)
# 2001-11-01 fl return root from parse method
# 2002-02-16 fl sort attributes in lexical order
# 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup
# 2002-05-01 fl finished TreeBuilder refactoring
# 2002-07-14 fl added basic namespace support to ElementTree.write
# 2002-07-25 fl added QName attribute support
# 2002-10-20 fl fixed encoding in write
# 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding
# 2002-11-27 fl accept file objects or file names for parse/write
# 2002-12-04 fl moved XMLTreeBuilder back to this module
# 2003-01-11 fl fixed entity encoding glitch for us-ascii
# 2003-02-13 fl added XML literal factory
# 2003-02-21 fl added ProcessingInstruction/PI factory
# 2003-05-11 fl added tostring/fromstring helpers
# 2003-05-26 fl added ElementPath support
# 2003-07-05 fl added makeelement factory method
# 2003-07-28 fl added more well-known namespace prefixes
# 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME)
# 2003-09-04 fl fall back on emulator if ElementPath is not installed
# 2003-10-31 fl markup updates
# 2003-11-15 fl fixed nested namespace bug
# 2004-03-28 fl added XMLID helper
# 2004-06-02 fl added default support to findtext
# 2004-06-08 fl fixed encoding of non-ascii element/attribute names
# 2004-08-23 fl take advantage of post-2.1 expat features
# 2005-02-01 fl added iterparse implementation
# 2005-03-02 fl fixed iterparse support for pre-2.2 versions
# 2006-11-18 fl added parser support for IronPython (ElementIron)
# 2007-08-27 fl fixed newlines in attributes
#
# Copyright (c) 1999-2007 by NAME All rights reserved.
#
# USERNAME@pythonware.com
# http://www.pythonware.com
#
# --------------------------------------------------------------------
# The ElementTree toolkit is
#
# Copyright (c) 1999-2007 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

###########################################################################################
# Implementation of the stochastic depth algorithm described in the paper
#
# NAME et al. "Deep networks with stochastic depth." arXiv preprint arXiv:1603.09382 (2016).
#
# Reference torch implementation can be found at https://github.com/yueatsprograms/Stochastic_Depth
#
# There are some differences in the implementation:
# - A BN->ReLU->Conv is used for the skip connection when input and output shapes are
#   different, as opposed to a padding layer.
# - The residual block is different: we use BN->ReLU->Conv->BN->ReLU->Conv, as opposed
#   to Conv->BN->ReLU->Conv->BN (with ReLU also applied to the skip connection).
# - We did not try to match the same initialization, learning rate scheduling, etc.
#
#--------------------------------------------------------------------------------
# A sample from the running log (we achieved ~9.4% error after 500 epochs; more
# careful tuning of the hyperparameters, and maybe also of the architecture, is
# needed to achieve the numbers reported in the paper):
#
# INFO:root:Epoch[80] Batch [50] Speed: 1020.95 samples/sec Train-accuracy=0.910080
# INFO:root:Epoch[80] Batch [100] Speed: 1013.41 samples/sec Train-accuracy=0.912031
# INFO:root:Epoch[80] Batch [150] Speed: 1035.48 samples/sec Train-accuracy=0.913438
# INFO:root:Epoch[80] Batch [200] Speed: 1045.00 samples/sec Train-accuracy=0.907344
# INFO:root:Epoch[80] Batch [250] Speed: 1055.32 samples/sec Train-accuracy=0.905937
# INFO:root:Epoch[80] Batch [300] Speed: 1071.71 samples/sec Train-accuracy=0.912500
# INFO:root:Epoch[80] Batch [350] Speed: 1033.73 samples/sec Train-accuracy=0.910937
# INFO:root:Epoch[80] Train-accuracy=0.919922
# INFO:root:Epoch[80] Time cost=48.348
# INFO:root:Saved checkpoint to "sd-110-0081.params"
# INFO:root:Epoch[80] Validation-accuracy=0.880142
# ...
# INFO:root:Epoch[115] Batch [50] Speed: 1037.04 samples/sec Train-accuracy=0.937040
# INFO:root:Epoch[115] Batch [100] Speed: 1041.12 samples/sec Train-accuracy=0.934219
# INFO:root:Epoch[115] Batch [150] Speed: 1036.02 samples/sec Train-accuracy=0.933125
# INFO:root:Epoch[115] Batch [200] Speed: 1057.49 samples/sec Train-accuracy=0.938125
# INFO:root:Epoch[115] Batch [250] Speed: 1060.56 samples/sec Train-accuracy=0.933438
# INFO:root:Epoch[115] Batch [300] Speed: 1046.25 samples/sec Train-accuracy=0.935625
# INFO:root:Epoch[115] Batch [350] Speed: 1043.83 samples/sec Train-accuracy=0.927188
# INFO:root:Epoch[115] Train-accuracy=0.938477
# INFO:root:Epoch[115] Time cost=47.815
# INFO:root:Saved checkpoint to "sd-110-0116.params"
# INFO:root:Epoch[115] Validation-accuracy=0.884415
# ...
# INFO:root:Saved checkpoint to "sd-110-0499.params"
# INFO:root:Epoch[498] Validation-accuracy=0.908554
# INFO:root:Epoch[499] Batch [50] Speed: 1068.28 samples/sec Train-accuracy=0.991422
# INFO:root:Epoch[499] Batch [100] Speed: 1053.10 samples/sec Train-accuracy=0.991094
# INFO:root:Epoch[499] Batch [150] Speed: 1042.89 samples/sec Train-accuracy=0.995156
# INFO:root:Epoch[499] Batch [200] Speed: 1066.22 samples/sec Train-accuracy=0.991406
# INFO:root:Epoch[499] Batch [250] Speed: 1050.56 samples/sec Train-accuracy=0.990781
# INFO:root:Epoch[499] Batch [300] Speed: 1032.02 samples/sec Train-accuracy=0.992500
# INFO:root:Epoch[499] Batch [350] Speed: 1062.16 samples/sec Train-accuracy=0.992969
# INFO:root:Epoch[499] Train-accuracy=0.994141
# INFO:root:Epoch[499] Time cost=47.401
# INFO:root:Saved checkpoint to "sd-110-0500.params"
# INFO:root:Epoch[499] Validation-accuracy=0.906050
#
###########################################################################################
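# For reference, the core rule of the algorithm is easy to state in plain
# Python. This is an editorial sketch of stochastic depth itself, not the
# MXNet symbol graph this example builds:

import random

def stochastic_depth_block(x, residual_fn, death_rate, training):
    """Apply one residual block with stochastic depth.

    During training the residual branch is dropped outright with probability
    `death_rate`; at test time the branch is kept but scaled by its survival
    probability (1 - death_rate), so expectations match between the two modes.
    """
    if training:
        if random.random() < death_rate:
            return x                      # branch dropped: identity only
        return x + residual_fn(x)
    return x + (1.0 - death_rate) * residual_fn(x)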
# This is a sample transform plugin script for bbcrack.
# All transform plugin scripts need to be named trans*.py, in the plugins
# folder. Each plugin script should add Transform objects.

# First define a new Transform class, inheriting either from Transform_char or
# Transform_string:
# - Transform_char: for transforms that apply to each character/byte
#   independently, not depending on the location of the character.
#   (example: simple XOR)
# - Transform_string: for all other transforms, that may apply to several
#   characters at once, or take into account the location of the character.
#   (example: XOR with increasing key)
# Transform_char is usually much faster because it uses a translation table.

# A class represents a generic transform (obfuscation algorithm), such as XOR
# or XOR+ROL. When the class is instantiated as an object, it includes the
# keys of the obfuscation algorithm, specified as parameters. (e.g. "XOR 4F"
# or "XOR 4F + ROL 3")

# For each transform class, you need to implement the following
# methods/variables:
# - a description and a short name for the transform
# - __init__() to store parameters
# - iter_params() to generate all the possible parameters for bruteforcing
# - transform_char() or transform_string() to apply the transform to a single
#   character or to the whole string at once.

# Then do not forget to add it to the proper level 1, 2 or 3. (see below,
# after the class samples)

# If you develop useful plugin scripts and you would like me to reference
# them, or if you think about additional transforms that bbcrack should
# include, please contact me using this form: http://www.decalage.info/contact

# See below for three different examples:
# 1) Transform_char with single parameter
# 2) Transform_char with multiple parameters
# 3) Transform_string

#------------------------------------------------------------------------------
##class Transform_SAMPLE_XOR (Transform_char):
##    """
##    sample XOR Transform, single parameter
##    """
##    # Provide a description for the transform, and an id (short name for
##    # command line options):
##    gen_name = 'SAMPLE XOR with 8 bits static key A. Parameters: A (1-FF).'
##    gen_id = 'samplexor'
##
##    # the __init__ method must store provided parameters and build the
##    # specific name and shortname of the transform with parameters
##    def __init__(self, params):
##        """
##        constructor for the Transform object.
##        This method needs to be overloaded for every specific Transform.
##        It should set name and shortname according to the provided parameters.
##        (for example shortname="xor_17" for a XOR transform with params=17)
##        params: single value or tuple of values, parameters for the
##        transformation
##        """
##        self.params = params
##        self.name = "Sample XOR %02X" % params
##        # this shortname will be used to save bbcrack and bbtrans results
##        # to files
##        self.shortname = "samplexor%02X" % params
##
##    def transform_char (self, char):
##        """
##        Method to be overloaded, only for a transform that acts on a
##        character. This method should apply the transform to the provided
##        char, using params as parameters, and return the transformed data
##        as a character. (here character = string of length 1)
##
##        NOTE: here the algorithm can be slow, because it will only be used
##        256 times to build a translation table.
##        """
##        # here params is an integer
##        return chr(ord(char) ^ self.params)
##
##    @staticmethod
##    def iter_params ():
##        """
##        Method to be overloaded.
##        This static method should iterate over all possible parameters for
##        the transform function, yielding each set of parameters as a single
##        value or a tuple of values.
##        (for example for a XOR transform, it should yield 1 to 255)
##        This method should be used on the Transform class in order to
##        instantiate a Transform object with each set of parameters.
##        """
##        # the XOR key can be 1 to 255 (0 would be identity)
##        for key in xrange(1,256):
##            yield key

#------------------------------------------------------------------------------
##class Transform_SAMPLE_XOR_ROL (Transform_char):
##    """
##    Sample XOR+ROL Transform - multiple parameters
##    """
##    # generic name for the class:
##    gen_name = 'XOR with static 8 bits key A, then rotate B bits left. Parameters: A (1-FF), B (1-7).'
##    gen_id = 'xor_rol'
##
##    def __init__(self, params):
##        # Here we assume that params is a tuple with two integers:
##        self.params = params
##        self.name = "XOR %02X then ROL %d" % params
##        self.shortname = "xor%02X_rol%d" % params
##
##    def transform_char (self, char):
##        # here params is a tuple
##        xor_key, rol_bits = self.params
##        return chr(rol(ord(char) ^ xor_key, rol_bits))
##
##    @staticmethod
##    def iter_params ():
##        "return (XOR key, ROL bits)"
##        # the XOR key can be 1 to 255 (0 would be like ROL)
##        for xor_key in xrange(1,256):
##            # the ROL bits can be 1 to 7:
##            for rol_bits in xrange(1,8):
##                # yield a tuple with XOR key and ROL bits:
##                yield (xor_key, rol_bits)

#------------------------------------------------------------------------------
##class Transform_SAMPLE_XOR_INC (Transform_string):
##    """
##    Sample XOR Transform, with incrementing key
##    (this kind of transform must be implemented as a Transform_string,
##    because it gives different results depending on the location of the
##    character)
##    """
##    # generic name for the class:
##    gen_name = 'XOR with 8 bits key A incrementing after each character. Parameters: A (0-FF).'
##    gen_id = 'xor_inc'
##
##    def __init__(self, params):
##        self.params = params
##        self.name = "XOR %02X INC" % params
##        self.shortname = "xor%02X_inc" % params
##
##    def transform_string (self, data):
##        """
##        Method to be overloaded, only for a transform that acts on a string
##        globally.
##        This method should apply the transform to the data string, using
##        params as parameters, and return the transformed data as a string.
##        (the resulting string does not need to have the same length as data)
##        """
##        # here params is an integer
##        out = ''
##        for i in xrange(len(data)):
##            xor_key = (self.params + i) & 0xFF
##            out += chr(ord(data[i]) ^ xor_key)
##        return out
##
##    @staticmethod
##    def iter_params ():
##        # the XOR key can be 0 to 255 (0 is not identity here)
##        for xor_key in xrange(0,256):
##            yield xor_key

#------------------------------------------------------------------------------
# Second, add it to the proper level:
# - level 1 for fast transforms with up to 2000 iterations (e.g. xor, xor+rol)
# - level 2 for slower transforms or more iterations (e.g. xor+add)
# - level 3 for slow or infrequent transforms

##add_transform(Transform_SAMPLE_XOR, level=1)
##add_transform(Transform_SAMPLE_XOR_ROL, level=1)
##add_transform(Transform_SAMPLE_XOR_INC, level=2)

# see bbcrack.py and the Transform classes for more options.
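#------------------------------------------------------------------------------
# As a further illustration, here is the same pattern filled in for an 8-bit
# ADD transform, kept commented out like the samples above. This is an
# editorial sketch, not part of bbcrack itself; Transform_char, xrange and
# add_transform are assumed to be provided by the bbcrack plugin environment,
# as in the samples.

##class Transform_SAMPLE_ADD (Transform_char):
##    """
##    sample ADD Transform, single parameter
##    """
##    gen_name = 'SAMPLE ADD with 8 bits static key A. Parameters: A (1-FF).'
##    gen_id = 'sampleadd'
##
##    def __init__(self, params):
##        self.params = params
##        self.name = "Sample ADD %02X" % params
##        self.shortname = "sampleadd%02X" % params
##
##    def transform_char (self, char):
##        # add the key modulo 256, so the result stays a single byte
##        return chr((ord(char) + self.params) & 0xFF)
##
##    @staticmethod
##    def iter_params ():
##        # the ADD key can be 1 to 255 (0 would be identity)
##        for key in xrange(1,256):
##            yield key
##
##add_transform(Transform_SAMPLE_ADD, level=2)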
#!/usr/bin/env python
# ***** BEGIN LICENSE BLOCK *****
# Version: MPL 1.1/GPL 2.0/LGPL 2.1
#
# The contents of this file are subject to the Mozilla Public License Version
# 1.1 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
# http://www.mozilla.org/MPL/
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
# for the specific language governing rights and limitations under the
# License.
#
# The Original Code is font utility code.
#
# The Initial Developer of the Original Code is Mozilla Corporation.
# Portions created by the Initial Developer are Copyright (C) 2009
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   NAME <jdaggett@mozilla.com>
#
# Alternatively, the contents of this file may be used under the terms of
# either the GNU General Public License Version 2 or later (the "GPL"), or
# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
# in which case the provisions of the GPL or the LGPL are applicable instead
# of those above. If you wish to allow use of your version of this file only
# under the terms of either the GPL or the LGPL, and not to allow others to
# use your version of this file under the terms of the MPL, indicate your
# decision by deleting the provisions above and replace them with the notice
# and other provisions required by the GPL or the LGPL. If you do not delete
# the provisions above, a recipient may use your version of this file under
# the terms of any one of the MPL, the GPL or the LGPL.
#
# ***** END LICENSE BLOCK *****

# eotlitetool.py - create EOT version of OpenType font for use with IE
#
# Usage: eotlitetool.py [-o output-filename] font1 [font2 ...]
#
# OpenType file structure
# http://www.microsoft.com/typography/otspec/otff.htm
#
# Types:
#
# BYTE          8-bit unsigned integer.
# CHAR          8-bit signed integer.
# USHORT        16-bit unsigned integer.
# SHORT         16-bit signed integer.
# ULONG         32-bit unsigned integer.
# Fixed         32-bit signed fixed-point number (16.16)
# LONGDATETIME  Date represented in number of seconds since 12:00 midnight,
#               January 1, 1904. The value is represented as a signed 64-bit
#               integer.
#
# SFNT Header
#
# Fixed   sfnt version   // 0x00010000 for version 1.0.
# USHORT  numTables      // Number of tables.
# USHORT  searchRange    // (Maximum power of 2 <= numTables) x 16.
# USHORT  entrySelector  // Log2(maximum power of 2 <= numTables).
# USHORT  rangeShift     // NumTables x 16 - searchRange.
#
# Table Directory
#
# ULONG  tag       // 4-byte identifier.
# ULONG  checkSum  // CheckSum for this table.
# ULONG  offset    // Offset from beginning of TrueType font file.
# ULONG  length    // Length of this table.
#
# OS/2 Table (Version 4)
#
# USHORT  version  // 0x0004
# SHORT   xAvgCharWidth
# USHORT  usWeightClass
# USHORT  usWidthClass
# USHORT  fsType
# SHORT   ySubscriptXSize
# SHORT   ySubscriptYSize
# SHORT   ySubscriptXOffset
# SHORT   ySubscriptYOffset
# SHORT   ySuperscriptXSize
# SHORT   ySuperscriptYSize
# SHORT   ySuperscriptXOffset
# SHORT   ySuperscriptYOffset
# SHORT   yStrikeoutSize
# SHORT   yStrikeoutPosition
# SHORT   sFamilyClass
# BYTE    panose[10]
# ULONG   ulUnicodeRange1  // Bits 0-31
# ULONG   ulUnicodeRange2  // Bits 32-63
# ULONG   ulUnicodeRange3  // Bits 64-95
# ULONG   ulUnicodeRange4  // Bits 96-127
# CHAR    achVendID[4]
# USHORT  fsSelection
# USHORT  usFirstCharIndex
# USHORT  usLastCharIndex
# SHORT   sTypoAscender
# SHORT   sTypoDescender
# SHORT   sTypoLineGap
# USHORT  usWinAscent
# USHORT  usWinDescent
# ULONG   ulCodePageRange1  // Bits 0-31
# ULONG   ulCodePageRange2  // Bits 32-63
# SHORT   sxHeight
# SHORT   sCapHeight
# USHORT  usDefaultChar
# USHORT  usBreakChar
# USHORT  usMaxContext
#
# The Naming Table is organized as follows:
#
# [name table header]
# [name records]
# [string data]
#
# Name Table Header
#
# USHORT  format        // Format selector (=0).
# USHORT  count         // Number of name records.
# USHORT  stringOffset  // Offset to start of string storage (from start of table).
#
# Name Record
#
# USHORT  platformID  // Platform ID.
# USHORT  encodingID  // Platform-specific encoding ID.
# USHORT  languageID  // Language ID.
# USHORT  nameID      // Name ID.
# USHORT  length      // String length (in bytes).
# USHORT  offset      // String offset from start of storage area (in bytes).
#
# head Table
#
# Fixed         tableVersion        // Table version number 0x00010000 for version 1.0.
# Fixed         fontRevision        // Set by font manufacturer.
# ULONG         checkSumAdjustment  // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum.
# ULONG         magicNumber         // Set to 0x5F0F3CF5.
# USHORT        flags
# USHORT        unitsPerEm          // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines.
# LONGDATETIME  created             // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# LONGDATETIME  modified            // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# SHORT         xMin                // For all glyph bounding boxes.
# SHORT         yMin
# SHORT         xMax
# SHORT         yMax
# USHORT        macStyle
# USHORT        lowestRecPPEM       // Smallest readable size in pixels.
# SHORT         fontDirectionHint
# SHORT         indexToLocFormat    // 0 for short offsets, 1 for long.
# SHORT         glyphDataFormat     // 0 for current format.
#
# Embedded OpenType (EOT) file format
# http://www.w3.org/Submission/EOT/
#
# EOT version 0x00020001
#
# An EOT font consists of a header with the original OpenType font
# appended at the end. Most of the data in the EOT header is simply a
# copy of data from specific tables within the font data. The exceptions
# are the 'Flags' field and the root string name field. The root string
# is a set of names indicating domains for which the font data can be
# used. A null root string implies the font data can be used anywhere.
# The EOT header is in little-endian byte order but the font data remains
# in big-endian order as specified by the OpenType spec.
#
# Overall structure:
#
# [EOT header]
# [EOT name records]
# [font data]
#
# EOT header
#
# ULONG   eotSize             // Total structure length in bytes (including string and font data)
# ULONG   fontDataSize        // Length of the OpenType font (FontData) in bytes
# ULONG   version             // Version number of this format - 0x00020001
# ULONG   flags               // Processing Flags (0 == no special processing)
# BYTE    fontPANOSE[10]      // OS/2 Table panose
# BYTE    charset             // DEFAULT_CHARSET (0x01)
# BYTE    italic              // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise
# ULONG   weight              // OS/2 Table usWeightClass
# USHORT  fsType              // OS/2 Table fsType (specifies embedding permission flags)
# USHORT  magicNumber         // Magic number for EOT file - 0x504C.
# ULONG   unicodeRange1       // OS/2 Table ulUnicodeRange1
# ULONG   unicodeRange2       // OS/2 Table ulUnicodeRange2
# ULONG   unicodeRange3       // OS/2 Table ulUnicodeRange3
# ULONG   unicodeRange4       // OS/2 Table ulUnicodeRange4
# ULONG   codePageRange1      // OS/2 Table ulCodePageRange1
# ULONG   codePageRange2      // OS/2 Table ulCodePageRange2
# ULONG   checkSumAdjustment  // head Table CheckSumAdjustment
# ULONG   reserved[4]         // Reserved - must be 0
# USHORT  padding1            // Padding - must be 0
#
# EOT name records
#
# USHORT  FamilyNameSize               // Font family name size in bytes
# BYTE    FamilyName[FamilyNameSize]   // Font family name (name ID = 1), little-endian UTF-16
# USHORT  Padding2                     // Padding - must be 0
#
# USHORT  StyleNameSize                // Style name size in bytes
# BYTE    StyleName[StyleNameSize]     // Style name (name ID = 2), little-endian UTF-16
# USHORT  Padding3                     // Padding - must be 0
#
# USHORT  VersionNameSize              // Version name size in bytes
# BYTE    VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16
# USHORT  Padding4                     // Padding - must be 0
#
# USHORT  FullNameSize                 // Full name size in bytes
# BYTE    FullName[FullNameSize]       // Full name (name ID = 4), little-endian UTF-16
# USHORT  Padding5                     // Padding - must be 0
#
# USHORT  RootStringSize               // Root string size in bytes
# BYTE    RootString[RootStringSize]   // Root string, little-endian UTF-16
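# For concreteness, a small standalone sketch of reading the fixed-size
# prefixes described above with the standard struct module (editorial
# example, not part of eotlitetool itself). Note the byte orders: OpenType
# data is big-endian ('>'), while the EOT header is little-endian ('<'):

import struct

def read_sfnt_header(f):
    # Fixed + 4 x USHORT = 12 bytes, big-endian
    version, num_tables, search_range, entry_selector, range_shift = \
        struct.unpack('>LHHHH', f.read(12))
    return version, num_tables

def read_eot_prefix(f):
    # first four ULONGs of the EOT header, little-endian
    eot_size, font_data_size, version, flags = struct.unpack('<4L', f.read(16))
    return eot_size, font_data_size, version, flags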