# falconry/falcon: falcon/request.py
class Request:
"""Represents a client's HTTP request.
Note:
`Request` is not meant to be instantiated directly by responders.
Args:
env (dict): A WSGI environment dict passed in from the server. See
also PEP-3333.
Keyword Arguments:
options (RequestOptions): Set of global options passed from the App handler.
"""
__slots__ = (
'__dict__',
'_bounded_stream',
'_cached_access_route',
'_cached_forwarded',
'_cached_forwarded_prefix',
'_cached_forwarded_uri',
'_cached_headers',
'_cached_headers_lower',
'_cached_prefix',
'_cached_relative_uri',
'_cached_uri',
'_params',
'_wsgierrors',
'content_type',
'context',
'env',
'method',
'options',
'path',
'query_string',
'stream',
'uri_template',
'_media',
'_media_error',
'is_websocket',
)
_cookies: dict[str, list[str]] | None = None
_cookies_collapsed: dict[str, str] | None = None
_cached_if_match: UnsetOr[list[ETag | Literal['*']] | None] = _UNSET
_cached_if_none_match: UnsetOr[list[ETag | Literal['*']] | None] = _UNSET
# Child classes may override this
context_type: ClassVar[type] = structures.Context
"""Class variable that determines the factory or
type to use for initializing the `context` attribute. By default,
the framework will instantiate bare objects (instances of the bare
:class:`falcon.Context` class). However, you may override this
behavior by creating a custom child class of
``Request``, and then passing that new class to
``App()`` by way of the latter's `request_type` parameter.
Note:
When overriding `context_type` with a factory function (as
opposed to a class), the function is called like a method of
the current ``Request`` instance. Therefore the first argument
is the Request instance itself (i.e., `self`).
"""
# Attribute declaration
env: dict[str, Any]
"""Reference to the WSGI environ ``dict`` passed in from the
server. (See also PEP-3333.)
"""
context: structures.Context
"""Empty object to hold any data (in its attributes)
about the request which is specific to your app (e.g. session
object). Falcon itself will not interact with this attribute after
it has been initialized.
Note:
The preferred way to pass request-specific data, when using the
default context type, is to set attributes directly on the
`context` object. For example::
req.context.role = 'trial'
req.context.user = 'guest'
"""
method: str
"""HTTP method requested, uppercase (e.g., ``'GET'``, ``'POST'``, etc.)"""
path: str
"""Path portion of the request URI (not including query string).
Warning:
If this attribute is to be used by the app for any upstream
requests, any non URL-safe characters in the path must be URL
encoded back before making the request.
Note:
``req.path`` may be set to a new value by a
``process_request()`` middleware method in order to influence
routing. If the original request path was URL encoded, it will
be decoded before being returned by this attribute.
"""
query_string: str
"""Query string portion of the request URI, without the preceding
'?' character.
"""
uri_template: str | None
"""The template for the route that was matched for
this request. May be ``None`` if the request has not yet been
routed, as would be the case for ``process_request()`` middleware
methods. May also be ``None`` if your app uses a custom routing
engine and the engine does not provide the URI template when
resolving a route.
"""
content_type: str | None
"""Value of the Content-Type header, or ``None`` if the header is missing."""
stream: ReadableIO
"""File-like input object for reading the body of the
request, if any. This object provides direct access to the
server's data stream and is non-seekable. In order to
avoid unintended side effects, and to provide maximum
flexibility to the application, Falcon itself does not
buffer or spool the data in any way.
Since this object is provided by the WSGI
server itself, rather than by Falcon, it may behave
differently depending on how you host your app. For example,
attempting to read more bytes than are expected (as
determined by the Content-Length header) may or may not
block indefinitely. It's a good idea to test your WSGI
server to find out how it behaves.
This can be particularly problematic when a request body is
expected, but none is given. In this case, the following
call blocks under certain WSGI servers::
# Blocks if Content-Length is 0
data = req.stream.read()
The workaround is fairly straightforward, if verbose::
# If Content-Length happens to be 0, or the header is
# missing altogether, this will not block.
data = req.stream.read(req.content_length or 0)
Alternatively, when passing the stream directly to a
consumer, it may be necessary to branch off the
value of the Content-Length header::
if req.content_length:
doc = json.load(req.stream)
For a slight performance cost, you may instead wish to use
:attr:`bounded_stream`, which wraps the native WSGI
input object to normalize its behavior.
Note:
If an HTML form is POSTed to the API using the
*application/x-www-form-urlencoded* media type, and
the :attr:`~.RequestOptions.auto_parse_form_urlencoded`
option is set, the framework
will consume `stream` in order to parse the parameters
and merge them into the query string parameters. In this
case, the stream will be left at EOF.
"""
options: RequestOptions
"""Set of global options passed from the App handler."""
is_websocket: bool
"""Always ``False`` in a sync ``Request``."""
def __init__(
self, env: dict[str, Any], options: RequestOptions | None = None
) -> None:
self.is_websocket: bool = False
self.env = env
self.options = options if options is not None else RequestOptions()
self._wsgierrors: TextIO = env['wsgi.errors']
self.method = env['REQUEST_METHOD']
self.uri_template = None
self._media: UnsetOr[Any] = _UNSET
self._media_error: Exception | None = None
# NOTE(kgriffs): PEP 3333 specifies that PATH_INFO may be the
# empty string, so normalize it in that case.
path: str = env['PATH_INFO'] or '/'
# PEP 3333 specifies that the PATH_INFO variable is always
# "bytes tunneled as latin-1" and must be encoded back.
#
# NOTE(kgriffs): The decoded path may contain UTF-8 characters.
# But according to the WSGI spec, no strings can contain chars
# outside ISO-8859-1. Therefore, to reconcile the URI
# encoding standard that allows UTF-8 with the WSGI spec
# that does not, WSGI servers tunnel the string via
# ISO-8859-1, e.g.:
#
# tunnelled_path = path.encode('utf-8').decode('iso-8859-1')
# perf(vytas): Only decode the tunnelled path in case it is not ASCII.
# For ASCII-strings, the below decoding chain is a no-op.
if not path.isascii():
path = path.encode('iso-8859-1').decode('utf-8', 'replace')
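# Example (illustrative): for a client request to '/caf%C3%A9', a typical
# server url-decodes the path to the bytes b'/caf\xc3\xa9' and tunnels them
# as the latin-1 string '/cafÃ©'; the encode/decode round-trip above then
# recovers the intended '/café'.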
if (
self.options.strip_url_path_trailing_slash
and len(path) != 1
and path.endswith('/')
):
self.path: str = path[:-1]
else:
self.path = path
# PERF(ueg1990): try/catch cheaper and faster (and more Pythonic)
try:
self.query_string = env['QUERY_STRING']
except KeyError:
self.query_string = ''
self._params: dict[str, str | list[str]] = {}
else:
if self.query_string:
self._params = parse_query_string(
self.query_string,
keep_blank=self.options.keep_blank_qs_values,
csv=self.options.auto_parse_qs_csv,
)
else:
self._params = {}
self._cached_access_route: list[str] | None = None
self._cached_forwarded: list[Forwarded] | None = None
self._cached_forwarded_prefix: str | None = None
self._cached_forwarded_uri: str | None = None
self._cached_headers: dict[str, str] | None = None
self._cached_headers_lower: dict[str, str] | None = None
self._cached_prefix: str | None = None
self._cached_relative_uri: str | None = None
self._cached_uri: str | None = None
try:
self.content_type = self.env['CONTENT_TYPE']
except KeyError:
self.content_type = None
self.stream = env['wsgi.input']
self._bounded_stream: BoundedStream | None = None # Lazy wrapping
# PERF(kgriffs): Technically, we should spend a few more
# cycles and parse the content type for real, but
# this heuristic will work virtually all the time.
if (
self.options._auto_parse_form_urlencoded
and self.content_type is not None
and 'application/x-www-form-urlencoded' in self.content_type
and
# NOTE(kgriffs): Within HTTP, a payload for a GET or HEAD
# request has no defined semantics, so we don't expect a
# body in those cases. We would normally not expect a body
# for OPTIONS either, but RFC 7231 does allow for it.
self.method not in ('GET', 'HEAD')
):
self._parse_form_urlencoded()
self.context = self.context_type()
def __repr__(self) -> str:
return '<%s: %s %r>' % (self.__class__.__name__, self.method, self.url)
# ------------------------------------------------------------------------
# Properties
# ------------------------------------------------------------------------
user_agent: str | None = helpers._header_property('HTTP_USER_AGENT')
"""Value of the User-Agent header, or ``None`` if the header is missing."""
auth: str | None = helpers._header_property('HTTP_AUTHORIZATION')
"""Value of the Authorization header, or ``None`` if the header is missing."""
expect: str | None = helpers._header_property('HTTP_EXPECT')
"""Value of the Expect header, or ``None`` if the header is missing."""
if_range: str | None = helpers._header_property('HTTP_IF_RANGE')
"""Value of the If-Range header, or ``None`` if the header is missing."""
referer: str | None = helpers._header_property('HTTP_REFERER')
"""Value of the Referer header, or ``None`` if the header is missing."""
@property
def forwarded(self) -> list[Forwarded] | None:
"""Value of the Forwarded header, as a parsed list
of :class:`falcon.Forwarded` objects, or ``None`` if the header
is missing. If the header value is malformed, Falcon will
make a best effort to parse what it can.
(See also: RFC 7239, Section 4)
""" # noqa: D205
# PERF(kgriffs): We could DRY up this memoization pattern using
# a decorator, but that would incur additional overhead without
# resorting to some trickery to rewrite the body of the method
# itself (vs. simply wrapping it with some memoization logic).
# At some point we might look into this but I don't think
# it's worth it right now.
if self._cached_forwarded is None:
forwarded = self.get_header('Forwarded')
if forwarded is None:
return None
self._cached_forwarded = _parse_forwarded_header(forwarded)
return self._cached_forwarded
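# Example (illustrative): a request carrying the header
#   Forwarded: for=192.0.2.60;proto=https;host=example.com
# would yield a single-element list whose first hop exposes the parsed
# 'src', 'scheme', and 'host' values used by the forwarded_* properties
# below.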
@property
def client_accepts_json(self) -> bool:
"""``True`` if the Accept header indicates that the client is
willing to receive JSON, otherwise ``False``.
""" # noqa: D205
return self.client_accepts('application/json')
@property
def client_accepts_msgpack(self) -> bool:
"""``True`` if the Accept header indicates that the client is
willing to receive MessagePack, otherwise ``False``.
""" # noqa: D205
return self.client_accepts('application/x-msgpack') or self.client_accepts(
'application/msgpack'
)
@property
def client_accepts_xml(self) -> bool:
"""``True`` if the Accept header indicates that the client is
willing to receive XML, otherwise ``False``.
""" # noqa: D205
return self.client_accepts('application/xml')
@property
def accept(self) -> str:
"""Value of the Accept header, or ``'*/*'`` if the header is missing."""
# NOTE(kgriffs): Per RFC, a missing accept header is
# equivalent to '*/*'
try:
return self.env['HTTP_ACCEPT'] or '*/*'
except KeyError:
return '*/*'
@property
def content_length(self) -> int | None:
"""Value of the Content-Length header converted to an ``int``.
Returns ``None`` if the header is missing.
"""
try:
value = self.env['CONTENT_LENGTH']
except KeyError:
return None
# NOTE(kgriffs): Normalize an empty value to behave as if
# the header were not included; wsgiref, at least, inserts
# an empty CONTENT_LENGTH value if the request does not
# set the header. Gunicorn and uWSGI do not do this, but
# others might if they are trying to match wsgiref's
# behavior too closely.
if not value:
return None
try:
value_as_int = int(value)
except ValueError:
msg = 'The value of the header must be a number.'
raise errors.HTTPInvalidHeader(msg, 'Content-Length')
if value_as_int < 0:
msg = 'The value of the header must be a positive number.'
raise errors.HTTPInvalidHeader(msg, 'Content-Length')
return value_as_int
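# Example (illustrative): a CONTENT_LENGTH of '' or a missing key both
# yield None, '512' yields 512, and '-1' or 'abc' raise HTTPInvalidHeader.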
@property
def bounded_stream(self) -> BoundedStream:
"""File-like wrapper around `stream` to normalize
certain differences between the native input objects
employed by different WSGI servers. In particular,
`bounded_stream` is aware of the expected Content-Length of
the body, and will never block on out-of-bounds reads,
assuming the client does not stall while transmitting the
data to the server.
For example, the following will not block when
Content-Length is 0 or the header is missing altogether::
data = req.bounded_stream.read()
This is also safe::
doc = json.load(req.bounded_stream)
""" # noqa: D205
if self._bounded_stream is None:
self._bounded_stream = self._get_wrapped_wsgi_input()
return self._bounded_stream
@property
def date(self) -> datetime | None:
"""Value of the Date header, converted to a ``datetime`` instance.
The header value is assumed to conform to RFC 1123.
.. versionchanged:: 4.0
This property now returns timezone-aware
:class:`~datetime.datetime` objects (or ``None``).
"""
return self.get_header_as_datetime('Date')
@property
def if_match(self) -> list[ETag | Literal['*']] | None:
"""Value of the If-Match header, as a parsed list of
:class:`falcon.ETag` objects or ``None`` if the header is missing
or its value is blank.
This property provides a list of all ``entity-tags`` in the
header, both strong and weak, in the same order as listed in
the header.
(See also: RFC 7232, Section 3.1)
""" # noqa: D205
# TODO(kgriffs): It may make sense at some point to create a
# header property generator that DRY's up the memoization
# pattern for us.
if self._cached_if_match is _UNSET:
header_value = self.env.get('HTTP_IF_MATCH')
if header_value:
self._cached_if_match = helpers._parse_etags(header_value)
else:
self._cached_if_match = None
return self._cached_if_match
@property
def if_none_match(self) -> list[ETag | Literal['*']] | None:
"""Value of the If-None-Match header, as a parsed
list of :class:`falcon.ETag` objects or ``None`` if the header is
missing or its value is blank.
This property provides a list of all ``entity-tags`` in the
header, both strong and weak, in the same order as listed in
the header.
(See also: RFC 7232, Section 3.2)
""" # noqa: D205
if self._cached_if_none_match is _UNSET:
header_value = self.env.get('HTTP_IF_NONE_MATCH')
if header_value:
self._cached_if_none_match = helpers._parse_etags(header_value)
else:
self._cached_if_none_match = None
return self._cached_if_none_match
@property
def if_modified_since(self) -> datetime | None:
"""Value of the If-Modified-Since header.
Returns ``None`` if the header is missing.
.. versionchanged:: 4.0
This property now returns timezone-aware
:class:`~datetime.datetime` objects (or ``None``).
"""
return self.get_header_as_datetime('If-Modified-Since')
@property
def if_unmodified_since(self) -> datetime | None:
"""Value of the If-Unmodified-Since header.
Returns ``None`` if the header is missing.
.. versionchanged:: 4.0
This property now returns timezone-aware
:class:`~datetime.datetime` objects (or ``None``).
"""
return self.get_header_as_datetime('If-Unmodified-Since')
@property
def range(self) -> tuple[int, int] | None:
"""A 2-member ``tuple`` parsed from the value of the
Range header, or ``None`` if the header is missing.
The two members correspond to the first and last byte
positions of the requested resource, inclusive. Negative
indices indicate offset from the end of the resource,
where -1 is the last byte, -2 is the second-to-last byte,
and so forth.
Only continuous ranges are supported (e.g., "bytes=0-0,-1" would
result in an HTTPBadRequest exception when the attribute is
accessed).
""" # noqa: D205
value = self.get_header('Range')
if value is None:
return None
if '=' in value:
unit, sep, req_range = value.partition('=')
else:
msg = "The value must be prefixed with a range unit, e.g. 'bytes='"
raise errors.HTTPInvalidHeader(msg, 'Range')
if ',' in req_range:
msg = 'The value must be a continuous range.'
raise errors.HTTPInvalidHeader(msg, 'Range')
try:
first, sep, last = req_range.partition('-')
if not sep:
raise ValueError()
if first and last:
first_num, last_num = (int(first), int(last))
if last_num < first_num:
raise ValueError()
elif first:
first_num, last_num = (int(first), -1)
elif last:
first_num, last_num = (-int(last), -1)
if first_num >= 0:
raise ValueError()
else:
msg = 'The range offsets are missing.'
raise errors.HTTPInvalidHeader(msg, 'Range')
return first_num, last_num
except ValueError:
href = 'https://tools.ietf.org/html/rfc7233'
href_text = 'HTTP/1.1 Range Requests'
msg = 'It must be a range formatted according to RFC 7233.'
raise errors.HTTPInvalidHeader(msg, 'Range', href=href, href_text=href_text)
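# Example (illustrative): 'Range: bytes=0-499' yields (0, 499),
# 'bytes=-500' (the final 500 bytes) yields (-500, -1), while
# 'bytes=0-0,-1' raises HTTPInvalidHeader because only a single
# continuous range is supported.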
@property
def range_unit(self) -> str | None:
"""Unit of the range parsed from the value of the Range header.
Returns ``None`` if the header is missing.
"""
value = self.get_header('Range')
if value is None:
return None
if value and '=' in value:
unit, sep, req_range = value.partition('=')
return unit
else:
msg = "The value must be prefixed with a range unit, e.g. 'bytes='"
raise errors.HTTPInvalidHeader(msg, 'Range')
@property
def root_path(self) -> str:
"""The initial portion of the request URI's path that
corresponds to the application object, so that the
application knows its virtual "location". This may be an
empty string, if the application corresponds to the "root"
of the server.
(In WSGI it corresponds to the "SCRIPT_NAME" environ variable defined
by PEP-3333; in ASGI it corresponds to the "root_path" ASGI HTTP
scope field.)
""" # noqa: D205
# PERF(kgriffs): try..except is faster than get() assuming that
# we normally expect the key to exist. Even though PEP-3333
# allows WSGI servers to omit the key when the value is an
# empty string, uwsgi, gunicorn, waitress, and wsgiref all
# include it even in that case.
try:
return self.env['SCRIPT_NAME']
except KeyError:
return ''
@property
# NOTE(caselit): Deprecated long ago. Warns since 4.0.
@deprecation.deprecated(
'Use `root_path` instead. '
'(This compatibility alias will be removed in Falcon 5.0.)',
is_property=True,
)
def app(self) -> str:
"""Deprecated alias for :attr:`root_path`."""
return self.root_path
@property
def scheme(self) -> str:
"""URL scheme used for the request. Either 'http' or 'https'.
Note:
If the request was proxied, the scheme may not
match what was originally requested by the client.
:attr:`forwarded_scheme` can be used, instead,
to handle such cases.
"""
return self.env['wsgi.url_scheme']
@property
def forwarded_scheme(self) -> str:
"""Original URL scheme requested by the user agent, if the request was proxied.
Typical values are 'http' or 'https'.
The following request headers are checked, in order of
preference, to determine the forwarded scheme:
- ``Forwarded``
- ``X-Forwarded-Proto``
If none of these headers are available, or if the
Forwarded header is available but does not contain a
"proto" parameter in the first hop, the value of
:attr:`scheme` is returned instead.
(See also: RFC 7239, Section 1)
"""
# PERF(kgriffs): Since the Forwarded header is still relatively
# new, we expect X-Forwarded-Proto to be more common, so
# try to avoid calling self.forwarded if we can, since it uses a
# try...catch that will usually result in a relatively expensive
# raised exception.
if 'HTTP_FORWARDED' in self.env:
forwarded = self.forwarded
if forwarded:
# Use first hop, fall back on own scheme
scheme = forwarded[0].scheme or self.scheme
else:
scheme = self.scheme
else:
# PERF(kgriffs): This call should normally succeed, so
# just go for it without wasting time checking it
# first. Note also that the indexing operator is
# slightly faster than using get().
try:
scheme = self.env['HTTP_X_FORWARDED_PROTO'].lower()
except KeyError:
scheme = self.env['wsgi.url_scheme']
return scheme
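# Example (illustrative): behind a proxy that sets
# 'X-Forwarded-Proto: HTTPS', this property returns 'https' even though
# the WSGI url_scheme of the final hop may be 'http'.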
@property
def uri(self) -> str:
"""The fully-qualified URI for the request."""
if self._cached_uri is None:
# PERF: For small numbers of items, '+' is faster
# than ''.join(...). Concatenation is also generally
# faster than formatting.
value = self.scheme + '://' + self.netloc + self.relative_uri
self._cached_uri = value
return self._cached_uri
url = uri
"""Alias for :attr:`Request.uri`."""
@property
def forwarded_uri(self) -> str:
"""Original URI for proxied requests.
Uses :attr:`forwarded_scheme` and :attr:`forwarded_host` in order
to reconstruct the original URI requested by the user agent.
"""
if self._cached_forwarded_uri is None:
# PERF: For small numbers of items, '+' is faster
# than ''.join(...). Concatenation is also generally
# faster than formatting.
value = (
self.forwarded_scheme + '://' + self.forwarded_host + self.relative_uri
)
self._cached_forwarded_uri = value
return self._cached_forwarded_uri
@property
def relative_uri(self) -> str:
"""The path and query string portion of the
request URI, omitting the scheme and host.
""" # noqa: D205
if self._cached_relative_uri is None:
if self.query_string:
self._cached_relative_uri = (
self.root_path + self.path + '?' + self.query_string
)
else:
self._cached_relative_uri = self.root_path + self.path
return self._cached_relative_uri
@property
def prefix(self) -> str:
"""The prefix of the request URI, including scheme,
host, and app :attr:`~.root_path` (if any).
""" # noqa: D205
if self._cached_prefix is None:
self._cached_prefix = self.scheme + '://' + self.netloc + self.root_path
return self._cached_prefix
@property
def forwarded_prefix(self) -> str:
"""The prefix of the original URI for proxied requests.
Uses :attr:`forwarded_scheme` and :attr:`forwarded_host` in order
to reconstruct the original URI.
"""
if self._cached_forwarded_prefix is None:
self._cached_forwarded_prefix = (
self.forwarded_scheme + '://' + self.forwarded_host + self.root_path
)
return self._cached_forwarded_prefix
@property
def host(self) -> str:
"""Host request header field."""
try:
# NOTE(kgriffs): Prefer the host header; the web server
# isn't supposed to mess with it, so it should be what
# the client actually sent.
host_header = self.env['HTTP_HOST']
host, port = parse_host(host_header)
except KeyError:
# PERF(kgriffs): According to PEP-3333, this header
# will always be present.
host = self.env['SERVER_NAME']
return host
@property
def forwarded_host(self) -> str:
"""Original host request header as received
by the first proxy in front of the application server.
The following request headers are checked, in order of
preference, to determine the forwarded host:
- ``Forwarded``
- ``X-Forwarded-Host``
If none of the above headers are available, or if the
Forwarded header is available but the "host"
parameter is not included in the first hop, the value of
:attr:`host` is returned instead.
Note:
Reverse proxies are often configured to set the Host
header directly to the one that was originally
requested by the user agent; in that case, using
:attr:`host` is sufficient.
(See also: RFC 7239, Section 4)
""" # noqa: D205
# PERF(kgriffs): Since the Forwarded header is still relatively
# new, we expect X-Forwarded-Host to be more common, so
# try to avoid calling self.forwarded if we can, since it uses a
# try...catch that will usually result in a relatively expensive
# raised exception.
if 'HTTP_FORWARDED' in self.env:
forwarded = self.forwarded
if forwarded:
# Use first hop, fall back on self
host = forwarded[0].host or self.netloc
else:
host = self.netloc
else:
# PERF(kgriffs): This call should normally succeed, assuming
# that the caller is expecting a forwarded header, so
# just go for it without wasting time checking it
# first.
try:
host = self.env['HTTP_X_FORWARDED_HOST']
except KeyError:
host = self.netloc
return host
@property
def subdomain(self) -> str | None:
"""Leftmost (i.e., most specific) subdomain from the hostname.
If only a single domain name is given, `subdomain` will be ``None``.
Note:
If the hostname in the request is an IP address, the value
for `subdomain` is undefined.
"""
# PERF(kgriffs): .partition is slightly faster than .split
subdomain, sep, remainder = self.host.partition('.')
return subdomain if sep else None
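# Example (illustrative): a Host of 'api.example.com' yields 'api',
# whereas a bare hostname such as 'localhost' yields None.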
@property
def headers(self) -> Mapping[str, str]:
"""Raw HTTP headers from the request with dash-separated
names normalized to uppercase.
Note:
This property differs from the ASGI version of ``Request.headers``
in that the latter returns *lowercase* names. Middleware, such
as tracing and logging components, that need to be compatible with
both WSGI and ASGI apps should use :attr:`headers_lower` instead.
Warning:
Parsing all the headers to create this dict is done the first
time this attribute is accessed, and the returned object should
be treated as read-only. Note that this parsing can be costly,
so unless you need all the headers in this format, you should
instead use the ``get_header()`` method or one of the
convenience attributes to get a value for a specific header.
""" # noqa: D205
if self._cached_headers is None:
headers = self._cached_headers = {}
for name, value in self.env.items():
if name.startswith('HTTP_'):
# NOTE(kgriffs): Don't take the time to fix the case
# since headers are supposed to be case-insensitive
# anyway.
headers[name[5:].replace('_', '-')] = value
elif name in WSGI_CONTENT_HEADERS:
headers[name.replace('_', '-')] = value
return self._cached_headers
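# Example (illustrative): the environ keys 'HTTP_X_REQUEST_ID' and
# 'CONTENT_TYPE' surface here as 'X-REQUEST-ID' and 'CONTENT-TYPE',
# respectively; see headers_lower below for lowercase names.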
@property
def headers_lower(self) -> Mapping[str, str]:
"""Same as :attr:`headers` except header names are normalized to lowercase.
.. versionadded:: 4.0
"""
if self._cached_headers_lower is None:
self._cached_headers_lower = {
key.lower(): value for key, value in self.headers.items()
}
return self._cached_headers_lower
@property
def params(self) -> Mapping[str, str | list[str]]:
"""The mapping of request query parameter names to their values.
Where the parameter appears multiple times in the query
string, the value mapped to that parameter key will be a list of
all the values in the order seen.
"""
return self._params
@property
def cookies(self) -> Mapping[str, str]:
"""A dict of name/value cookie pairs.
The returned object should be treated as read-only to avoid unintended
side-effects. If a cookie appears more than once in the request, only
the first value encountered will be made available here.
See also: :meth:`~falcon.Request.get_cookie_values` or
:meth:`~falcon.asgi.Request.get_cookie_values`.
"""
if self._cookies_collapsed is None:
if self._cookies is None:
header_value = self.get_header('Cookie')
if header_value:
self._cookies = helpers._parse_cookie_header(header_value)
else:
self._cookies = {}
self._cookies_collapsed = {n: v[0] for n, v in self._cookies.items()}
return self._cookies_collapsed
@property
def access_route(self) -> list[str]:
"""IP address of the original client, as well
as any known addresses of proxies fronting the WSGI server.
The following request headers are checked, in order of
preference, to determine the addresses:
- ``Forwarded``
- ``X-Forwarded-For``
- ``X-Real-IP``
If none of these headers are available, the value of
:attr:`~.remote_addr` is used instead.
Note:
Per `RFC 7239`_, the access route may contain "unknown"
and obfuscated identifiers, in addition to IPv4 and
IPv6 addresses.
.. _RFC 7239: https://tools.ietf.org/html/rfc7239
Warning:
Headers can be forged by any client or proxy. Use this
property with caution and validate all values before
using them. Do not rely on the access route to authorize
requests.
""" # noqa: D205
if self._cached_access_route is None:
# NOTE(kgriffs): Try different headers in order of
# preference; if none are found, fall back to REMOTE_ADDR.
#
# If one of these headers is present, but its value is
# malformed such that we end up with an empty list, or
# a non-empty list containing malformed values, go ahead
# and return the results as-is. The alternative would be
# to fall back to another header or to REMOTE_ADDR, but
# that only masks the problem; the operator needs to be
# aware that an upstream proxy is malfunctioning.
if 'HTTP_FORWARDED' in self.env:
self._cached_access_route = []
for hop in self.forwarded or ():
if hop.src is not None:
host, __ = parse_host(hop.src)
self._cached_access_route.append(host)
elif 'HTTP_X_FORWARDED_FOR' in self.env:
addresses = self.env['HTTP_X_FORWARDED_FOR'].split(',')
self._cached_access_route = [ip.strip() for ip in addresses]
elif 'HTTP_X_REAL_IP' in self.env:
self._cached_access_route = [self.env['HTTP_X_REAL_IP']]
if self._cached_access_route:
if self._cached_access_route[-1] != self.remote_addr:
self._cached_access_route.append(self.remote_addr)
else:
self._cached_access_route = [self.remote_addr]
return self._cached_access_route
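# Example (illustrative): with 'X-Forwarded-For: 203.0.113.7, 10.0.0.2'
# and a REMOTE_ADDR of 10.0.0.1, the access route would be
# ['203.0.113.7', '10.0.0.2', '10.0.0.1'].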
@property
def remote_addr(self) -> str:
"""IP address of the closest client or proxy to the WSGI server.
This property is determined by the value of ``REMOTE_ADDR``
in the WSGI environment dict. Since this address is not
derived from an HTTP header, clients and proxies can not
forge it.
Note:
If your application is behind one or more reverse
proxies, you can use :attr:`~.access_route`
to retrieve the real IP address of the client.
"""
try:
value: str = self.env['REMOTE_ADDR']
except KeyError:
value = '127.0.0.1'
return value
@property
def port(self) -> int:
"""Port used for the request.
If the Host header is present in the request, but does not specify a port,
the default one for the given schema is returned (80 for HTTP and 443
for HTTPS). If the request does not include a Host header, the listening
port for the server is returned instead.
"""
try:
host_header = self.env['HTTP_HOST']
default_port = 80 if self.env['wsgi.url_scheme'] == 'http' else 443
_, port = parse_host(host_header, default_port=default_port)
except KeyError:
# NOTE(kgriffs): Normalize to an int, since that is the type
# returned by parse_host().
#
# NOTE(kgriffs): In the case that SERVER_PORT was used,
# PEP-3333 requires that the port never be an empty string.
port = int(self.env['SERVER_PORT'])
return port
@property
def netloc(self) -> str:
"""Returns the "host:port" portion of the request URL.
The port may be omitted if it is the default one for the URL's schema
(80 for HTTP and 443 for HTTPS).
"""
env = self.env
# NOTE(kgriffs): According to PEP-3333 we should first
# try to use the Host header if present.
#
# PERF(kgriffs): try..except is faster than get() when we
# expect the key to be present most of the time.
try:
netloc_value: str = env['HTTP_HOST']
except KeyError:
netloc_value = env['SERVER_NAME']
port: str = env['SERVER_PORT']
if self.scheme == 'https':
if port != '443':
netloc_value += ':' + port
else:
if port != '80':
netloc_value += ':' + port
return netloc_value
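# Example (illustrative): 'Host: example.com:8080' yields
# 'example.com:8080', while a missing Host header with
# SERVER_NAME='example.com', SERVER_PORT='443', and an 'https' scheme
# yields just 'example.com'.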
def get_media(self, default_when_empty: UnsetOr[Any] = _UNSET) -> Any:
"""Return a deserialized form of the request stream.
The first time this method is called, the request stream will be
deserialized using the Content-Type header as well as the media-type
handlers configured via :class:`falcon.RequestOptions`. The result will
be cached and returned in subsequent calls::
deserialized_media = req.get_media()
If the matched media handler raises an error while attempting to
deserialize the request body, the exception will propagate up
to the caller.
See also :ref:`media` for more information regarding media handling.
Note:
When ``get_media`` is called on a request with an empty body,
Falcon will let the media handler try to deserialize the body
and will return the value returned by the handler or propagate
the exception raised by it. To instead return a different value
in case of an exception by the handler, specify the argument
``default_when_empty``.
Warning:
This operation will consume the request stream the first time
it's called and cache the results. Follow-up calls will just
retrieve a cached version of the object.
Args:
default_when_empty: Fallback value to return when there is no body
in the request and the media handler raises an error
(like in the case of the default JSON media handler).
By default, Falcon uses the value returned by the media handler
or propagates the raised exception, if any.
This value is not cached, and will be used only for the current
call.
Returns:
media (object): The deserialized media representation.
"""
if self._media is not _UNSET:
return self._media
if self._media_error is not None:
if default_when_empty is not _UNSET and isinstance(
self._media_error, errors.MediaNotFoundError
):
return default_when_empty
raise self._media_error
handler, _, _ = self.options.media_handlers._resolve(
self.content_type, self.options.default_media_type
)
try:
self._media = handler.deserialize(
self.bounded_stream, self.content_type, self.content_length
)
except errors.MediaNotFoundError as err:
self._media_error = err
if default_when_empty is not _UNSET:
return default_when_empty
raise
except Exception as err:
self._media_error = err
raise
finally:
if handler.exhaust_stream:
self.bounded_stream.exhaust()
return self._media
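# Example (illustrative, assuming the default JSON handler is configured):
#
#     doc = req.get_media()                       # may raise on an empty body
#     doc = req.get_media(default_when_empty={})  # falls back to {} instead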
media: Any = property(get_media)
"""Property that acts as an alias for
:meth:`~.get_media`. This alias provides backwards-compatibility
for apps that were built for versions of the framework prior to
3.0::
# Equivalent to: deserialized_media = req.get_media()
deserialized_media = req.media
New WSGI apps are encouraged to use :meth:`~.get_media` directly instead of
this property.
"""
# ------------------------------------------------------------------------
# Methods
# ------------------------------------------------------------------------
def client_accepts(self, media_type: str) -> bool:
"""Determine whether or not the client accepts a given media type.
Args:
media_type (str): An Internet media type to check.
Returns:
bool: ``True`` if the client has indicated in the Accept header
that it accepts the specified media type. Otherwise, returns
``False``.
"""
accept = self.accept
# PERF(kgriffs): Usually the following will be true, so
# try it first.
if (accept == media_type) or (accept == '*/*'):
return True
# Fall back to full-blown parsing
try:
return mediatypes.quality(media_type, accept) != 0.0
except ValueError:
return False
def client_prefers(self, media_types: Iterable[str]) -> str | None:
"""Return the client's preferred media type, given several choices.
Args:
media_types (iterable of str): One or more Internet media types
from which to choose the client's preferred type. This value
**must** be an iterable collection of strings.
Returns:
str: The client's preferred media type, based on the Accept
header. Returns ``None`` if the client does not accept any
of the given types.
"""
try:
# NOTE(kgriffs): best_match will return '' if no match is found
preferred_type = mediatypes.best_match(media_types, self.accept)
except ValueError:
# Value for the accept header was not formatted correctly
preferred_type = ''
return preferred_type if preferred_type else None
@overload
def get_header(
self, name: str, required: Literal[True], default: str | None = ...
) -> str: ...
@overload
def get_header(self, name: str, required: bool = ..., *, default: str) -> str: ...
@overload
def get_header(
self, name: str, required: bool = ..., default: str | None = ...
) -> str | None: ...
def get_header(
self, name: str, required: bool = False, default: str | None = None
) -> str | None:
"""Retrieve the raw string value for the given header.
Args:
name (str): Header name, case-insensitive (e.g., 'Content-Type')
Keyword Args:
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning gracefully when the
header is not found (default ``False``).
default (any): Value to return if the header
is not found (default ``None``).
Returns:
str: The value of the specified header if it exists, or
the default value if the header is not found and is not
required.
Raises:
HTTPBadRequest: The header was not found in the request, but
it was required.
"""
wsgi_name = name.upper().replace('-', '_')
# Use try..except to optimize for the header existing in most cases
try:
# Don't take the time to cache beforehand, using HTTP naming.
# This will be faster, assuming that most headers are looked
# up only once, and not all headers will be requested.
return self.env['HTTP_' + wsgi_name]
except KeyError:
# NOTE(kgriffs): There are a couple headers that do not
# use the HTTP prefix in the env, so try those. We expect
# people to usually just use the relevant helper properties
# to access these instead of .get_header.
if wsgi_name in WSGI_CONTENT_HEADERS:
try:
return self.env[wsgi_name]
except KeyError:
pass
if not required:
return default
raise errors.HTTPMissingHeader(name)
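# Example (illustrative): req.get_header('X-Request-ID') returns the
# header value or None, whereas passing required=True raises
# HTTPMissingHeader when the header is absent.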
@overload
def get_header_as_int(self, header: str, required: Literal[True]) -> int: ...
@overload
def get_header_as_int(self, header: str, required: bool = ...) -> int | None: ...
def get_header_as_int(self, header: str, required: bool = False) -> int | None:
"""Retrieve the int value for the given header.
Args:
header (str): Header name, case-insensitive (e.g., 'Content-Length')
Keyword Args:
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning gracefully when the
header is not found (default ``False``).
Returns:
int: The value of the specified header if it exists,
or ``None`` if the header is not found and is not required.
Raises:
HTTPBadRequest: The header was not found in the request, but
it was required.
HTTPInvalidHeader: The header contained a malformed/invalid value.
.. versionadded:: 4.0
"""
http_int = self.get_header(header, required=required)
try:
return int(http_int) if http_int is not None else None
except ValueError:
msg = 'The value of the header must be an integer.'
raise errors.HTTPInvalidHeader(msg, header)
@overload
def get_header_as_datetime(
self, header: str, required: Literal[True], obs_date: bool = ...
) -> datetime: ...
@overload
def get_header_as_datetime(
self, header: str, required: bool = ..., obs_date: bool = ...
) -> datetime | None: ...
def get_header_as_datetime(
self, header: str, required: bool = False, obs_date: bool = False
) -> datetime | None:
"""Return an HTTP header with HTTP-Date values as a datetime.
Args:
header (str): Header name, case-insensitive (e.g., 'Date')
Keyword Args:
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning gracefully when the
header is not found (default ``False``).
obs_date (bool): Support obs-date formats according to
RFC 7231, e.g.: "Sunday, 06-Nov-94 08:49:37 GMT"
(default ``False``).
Returns:
datetime: The value of the specified header if it exists,
or ``None`` if the header is not found and is not required.
Raises:
HTTPBadRequest: The header was not found in the request, but
it was required.
HTTPInvalidHeader: The header contained a malformed/invalid value.
.. versionchanged:: 4.0
This method now returns timezone-aware :class:`~datetime.datetime`
objects.
"""
http_date = self.get_header(header, required=required)
try:
if http_date is not None:
return util.http_date_to_dt(http_date, obs_date=obs_date)
else:
return None
except ValueError:
msg = 'It must be formatted according to RFC 7231, Section 7.1.1.1'
raise errors.HTTPInvalidHeader(msg, header)
def get_cookie_values(self, name: str) -> list[str] | None:
"""Return all values provided in the Cookie header for the named cookie.
(See also: :ref:`Getting Cookies <getting-cookies>`)
Args:
name (str): Cookie name, case-sensitive.
Returns:
list: Ordered list of all values specified in the Cookie header for
the named cookie, or ``None`` if the cookie was not included in
the request. If the cookie is specified more than once in the
header, the returned list of values will preserve the ordering of
the individual ``cookie-pair``'s in the header.
"""
if self._cookies is None:
# PERF(kgriffs): While this code isn't exactly DRY (the same code
# is duplicated by the cookies property) it does make things a bit
# more performant by removing the extra function call that would
# be required to factor this out. If we ever have to do this in a
# *third* place, we would probably want to factor it out at that
# point.
header_value = self.get_header('Cookie')
if header_value:
self._cookies = helpers._parse_cookie_header(header_value)
else:
self._cookies = {}
return self._cookies.get(name)
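# Example (illustrative): for 'Cookie: session=abc; session=def',
# get_cookie_values('session') returns ['abc', 'def'], while the cookies
# property collapses this to just 'abc'.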
@overload
def get_param(
self,
name: str,
required: Literal[True],
store: StoreArg = ...,
default: str | None = ...,
) -> str: ...
@overload
def get_param(
self,
name: str,
required: bool = ...,
store: StoreArg = ...,
*,
default: str,
) -> str: ...
@overload
def get_param(
self,
name: str,
required: bool = False,
store: StoreArg = None,
default: str | None = None,
) -> str | None: ...
def get_param(
self,
name: str,
required: bool = False,
store: StoreArg = None,
default: str | None = None,
) -> str | None:
"""Return the raw value of a query string parameter as a string.
Note:
If an HTML form is POSTed to the API using the
*application/x-www-form-urlencoded* media type, Falcon can
automatically parse the parameters from the request body
and merge them into the query string parameters. To enable
this functionality, set
:attr:`~.RequestOptions.auto_parse_form_urlencoded` to
``True`` via :any:`App.req_options`.
Note, however, that the
:attr:`~.RequestOptions.auto_parse_form_urlencoded` option is
considered deprecated as of Falcon 3.0 in favor of accessing the
URL-encoded form via :attr:`~Request.media`, and it may be removed
in a future release.
See also: :ref:`access_urlencoded_form`
Note:
Similar to the way multiple keys in form data are handled, if a
query parameter is included in the query string multiple times,
only one of those values will be returned, and it is undefined which
one. This caveat also applies when
:attr:`~falcon.RequestOptions.auto_parse_qs_csv` is enabled and the
given parameter is assigned to a comma-separated list of values
(e.g., ``foo=a,b,c``).
When multiple values are expected for a parameter,
:meth:`~.get_param_as_list` can be used to retrieve all of
them at once.
Args:
name (str): Parameter name, case-sensitive (e.g., 'sort').
Keyword Args:
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning ``None`` when the
parameter is not found (default ``False``).
store (dict): A ``dict``-like object in which to place
the value of the param, but only if the param is present.
default (any): If the param is not found returns the
given value instead of ``None``
Returns:
str: The value of the param as a string, or ``None`` if param is
not found and is not required.
Raises:
HTTPBadRequest: A required param is missing from the request.
"""
params = self._params
# PERF: Use if..in since it is a good all-around performer; we don't
# know how likely params are to be specified by clients.
if name in params:
# NOTE(warsaw): If the key appeared multiple times, it will be
# stored internally as a list. We do not define which one
# actually gets returned, but let's pick the last one for grins.
param = params[name]
if isinstance(param, list):
param = param[-1]
if store is not None:
store[name] = param
return param
if not required:
return default
raise errors.HTTPMissingParam(name)
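# Example (illustrative): for a query string of 'sort=name',
# req.get_param('sort') returns 'name'; a missing parameter returns None
# (or `default`), unless required=True, which raises HTTPMissingParam.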
@overload
def get_param_as_int(
self,
name: str,
required: Literal[True],
min_value: int | None = ...,
max_value: int | None = ...,
store: StoreArg = ...,
default: int | None = ...,
) -> int: ...
@overload
def get_param_as_int(
self,
name: str,
required: bool = ...,
min_value: int | None = ...,
max_value: int | None = ...,
store: StoreArg = ...,
*,
default: int,
) -> int: ...
@overload
def get_param_as_int(
self,
name: str,
required: bool = ...,
min_value: int | None = ...,
max_value: int | None = ...,
store: StoreArg = ...,
default: int | None = ...,
) -> int | None: ...
def get_param_as_int(
self,
name: str,
required: bool = False,
min_value: int | None = None,
max_value: int | None = None,
store: StoreArg = None,
default: int | None = None,
) -> int | None:
"""Return the value of a query string parameter as an int.
Args:
name (str): Parameter name, case-sensitive (e.g., 'limit').
Keyword Args:
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning ``None`` when the
parameter is not found or is not an integer (default
``False``).
min_value (int): Set to the minimum value allowed for this
param. If the param is found and it is less than min_value, an
``HTTPError`` is raised.
max_value (int): Set to the maximum value allowed for this
param. If the param is found and its value is greater than
max_value, an ``HTTPError`` is raised.
store (dict): A ``dict``-like object in which to place
the value of the param, but only if the param is found
(default ``None``).
default (any): If the param is not found returns the
given value instead of ``None``
Returns:
int: The value of the param if it is found and can be converted to
an ``int``. If the param is not found, returns ``None``, unless
`required` is ``True``.
Raises:
HTTPBadRequest: The param was not found in the request, even
though it was required to be there, or it was found but
could not be converted to an ``int``. Also raised if the
param's value falls outside the given interval, i.e., the
value must be in the interval: min_value <= value <=
max_value to avoid triggering an error.
"""
params = self._params
# PERF: Use if..in since it is a good all-around performer; we don't
# know how likely params are to be specified by clients.
if name in params:
val_str = params[name]
if isinstance(val_str, list):
val_str = val_str[-1]
try:
val = int(val_str)
except ValueError:
msg = 'The value must be an integer.'
raise errors.HTTPInvalidParam(msg, name)
if min_value is not None and val < min_value:
msg = 'The value must be at least ' + str(min_value)
raise errors.HTTPInvalidParam(msg, name)
if max_value is not None and max_value < val:
msg = 'The value may not exceed ' + str(max_value)
raise errors.HTTPInvalidParam(msg, name)
if store is not None:
store[name] = val
return val
if not required:
return default
raise errors.HTTPMissingParam(name)
@overload
def get_param_as_float(
self,
name: str,
required: Literal[True],
min_value: float | None = ...,
max_value: float | None = ...,
store: StoreArg = ...,
default: float | None = ...,
) -> float: ...
@overload
def get_param_as_float(
self,
name: str,
required: bool = ...,
min_value: float | None = ...,
max_value: float | None = ...,
store: StoreArg = ...,
*,
default: float,
) -> float: ...
@overload
def get_param_as_float(
self,
name: str,
required: bool = ...,
min_value: float | None = ...,
max_value: float | None = ...,
store: StoreArg = ...,
default: float | None = ...,
) -> float | None: ...
def get_param_as_float(
self,
name: str,
required: bool = False,
min_value: float | None = None,
max_value: float | None = None,
store: StoreArg = None,
default: float | None = None,
) -> float | None:
"""Return the value of a query string parameter as an float.
Args:
name (str): Parameter name, case-sensitive (e.g., 'limit').
Keyword Args:
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning ``None`` when the
parameter is not found or is not a float (default
``False``).
min_value (float): Set to the minimum value allowed for this
param. If the param is found and it is less than min_value, an
``HTTPError`` is raised.
max_value (float): Set to the maximum value allowed for this
param. If the param is found and its value is greater than
max_value, an ``HTTPError`` is raised.
store (dict): A ``dict``-like object in which to place
the value of the param, but only if the param is found
(default ``None``).
default (any): If the param is not found returns the
given value instead of ``None``
Returns:
float: The value of the param if it is found and can be converted to
a ``float``. If the param is not found, returns ``None``, unless
`required` is ``True``.
Raises:
HTTPBadRequest: The param was not found in the request, even
though it was required to be there, or it was found but
could not be converted to a ``float``. Also raised if the
param's value falls outside the given interval, i.e., the
value must be in the interval: min_value <= value <=
max_value to avoid triggering an error.
"""
params = self._params
# PERF: Use if..in since it is a good all-around performer; we don't
# know how likely params are to be specified by clients.
if name in params:
val_str = params[name]
if isinstance(val_str, list):
val_str = val_str[-1]
try:
val = float(val_str)
except ValueError:
msg = 'The value must be a float.'
raise errors.HTTPInvalidParam(msg, name)
if min_value is not None and val < min_value:
msg = 'The value must be at least ' + str(min_value)
raise errors.HTTPInvalidParam(msg, name)
if max_value is not None and max_value < val:
msg = 'The value may not exceed ' + str(max_value)
raise errors.HTTPInvalidParam(msg, name)
if store is not None:
store[name] = val
return val
if not required:
return default
raise errors.HTTPMissingParam(name)
@overload
def get_param_as_uuid(
self,
name: str,
required: Literal[True],
store: StoreArg = ...,
default: UUID | None = ...,
) -> UUID: ...
@overload
def get_param_as_uuid(
self,
name: str,
required: bool = ...,
store: StoreArg = ...,
*,
default: UUID,
) -> UUID: ...
@overload
def get_param_as_uuid(
self,
name: str,
required: bool = ...,
store: StoreArg = ...,
default: UUID | None = ...,
) -> UUID | None: ...
def get_param_as_uuid(
self,
name: str,
required: bool = False,
store: StoreArg = None,
default: UUID | None = None,
) -> UUID | None:
"""Return the value of a query string parameter as an UUID.
The value to convert must conform to the standard UUID string
representation per RFC 4122. For example, the following
strings are all valid::
# Lowercase
'64be949b-3433-4d36-a4a8-9f19d352fee8'
# Uppercase
'BE71ECAA-F719-4D42-87FD-32613C2EEB60'
# Mixed
'81c8155C-D6de-443B-9495-39Fa8FB239b5'
Args:
name (str): Parameter name, case-sensitive (e.g., 'id').
Keyword Args:
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning ``None`` when the
parameter is not found or is not a UUID (default
``False``).
store (dict): A ``dict``-like object in which to place
the value of the param, but only if the param is found
(default ``None``).
default (any): If the param is not found returns the
given value instead of ``None``
Returns:
UUID: The value of the param if it is found and can be converted to
a ``UUID``. If the param is not found, returns
``default`` (default ``None``), unless `required` is ``True``.
Raises:
HTTPBadRequest: The param was not found in the request, even
though it was required to be there, or it was found but
could not be converted to a ``UUID``.
"""
params = self._params
# PERF: Use if..in since it is a good all-around performer; we don't
# know how likely params are to be specified by clients.
if name in params:
val_str = params[name]
if isinstance(val_str, list):
val_str = val_str[-1]
try:
val = UUID(val_str)
except ValueError:
msg = 'The value must be a UUID string.'
raise errors.HTTPInvalidParam(msg, name)
if store is not None:
store[name] = val
return val
if not required:
return default
raise errors.HTTPMissingParam(name)
@overload
def get_param_as_bool(
self,
name: str,
required: Literal[True],
store: StoreArg = ...,
blank_as_true: bool = ...,
default: bool | None = ...,
) -> bool: ...
@overload
def get_param_as_bool(
self,
name: str,
required: bool = ...,
store: StoreArg = ...,
blank_as_true: bool = ...,
*,
default: bool,
) -> bool: ...
@overload
def get_param_as_bool(
self,
name: str,
required: bool = ...,
store: StoreArg = ...,
blank_as_true: bool = ...,
default: bool | None = ...,
) -> bool | None: ...
def get_param_as_bool(
self,
name: str,
required: bool = False,
store: StoreArg = None,
blank_as_true: bool = True,
default: bool | None = None,
) -> bool | None:
"""Return the value of a query string parameter as a boolean.
This method treats valueless parameters as flags. By default, if no
value is provided for the parameter in the query string, ``True`` is
assumed and returned. If the parameter is missing altogether, ``None``
is returned as with other ``get_param_*()`` methods, which can be
easily treated as falsy by the caller as needed.
The following boolean strings are supported::
TRUE_STRINGS = ('true', 'True', 't', 'yes', 'y', '1', 'on')
FALSE_STRINGS = ('false', 'False', 'f', 'no', 'n', '0', 'off')
Args:
name (str): Parameter name, case-sensitive (e.g., 'detailed').
Keyword Args:
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning ``None`` when the
parameter is not found or is not a recognized boolean
string (default ``False``).
store (dict): A ``dict``-like object in which to place
the value of the param, but only if the param is found (default
``None``).
blank_as_true (bool): Valueless query string parameters
are treated as flags, resulting in ``True`` being
returned when such a parameter is present, and ``False``
otherwise. To require the client to explicitly opt-in to a
truthy value, pass ``blank_as_true=False`` to return ``False``
when a value is not specified in the query string.
default (any): If the param is not found, return this
value instead of ``None``.
Returns:
bool: The value of the param if it is found and can be converted
to a ``bool``. If the param is not found, returns ``None``
unless `required` is ``True``.
Raises:
HTTPBadRequest: A required param is missing from the request, or
can not be converted to a ``bool``.
"""
params = self._params
# PERF: Use if..in since it is a good all-around performer; we don't
# know how likely params are to be specified by clients.
if name in params:
val_str = params[name]
if isinstance(val_str, list):
val_str = val_str[-1]
if val_str in TRUE_STRINGS:
val = True
elif val_str in FALSE_STRINGS:
val = False
elif not val_str:
val = blank_as_true
else:
msg = 'The value of the parameter must be "true" or "false".'
raise errors.HTTPInvalidParam(msg, name)
if store is not None:
store[name] = val
return val
if not required:
return default
raise errors.HTTPMissingParam(name)
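# Example (illustrative): '?detailed' (no value, with blank values kept)
# returns True by default, '?detailed=no' returns False, and
# '?detailed=maybe' raises HTTPInvalidParam.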
@overload
def get_param_as_list(
self,
name: str,
transform: None = ...,
*,
required: Literal[True],
store: StoreArg = ...,
default: list[str] | None = ...,
) -> list[str]: ...
@overload
def get_param_as_list(
self,
name: str,
transform: Callable[[str], _T],
required: Literal[True],
store: StoreArg = ...,
default: list[_T] | None = ...,
) -> list[_T]: ...
@overload
def get_param_as_list(
self,
name: str,
transform: None = ...,
required: bool = ...,
store: StoreArg = ...,
*,
default: list[str],
) -> list[str]: ...
@overload
def get_param_as_list(
self,
name: str,
transform: Callable[[str], _T],
required: bool = ...,
store: StoreArg = ...,
*,
default: list[_T],
) -> list[_T]: ...
@overload
def get_param_as_list(
self,
name: str,
transform: None = ...,
required: bool = ...,
store: StoreArg = ...,
default: list[str] | None = ...,
) -> list[str] | None: ...
@overload
def get_param_as_list(
self,
name: str,
transform: Callable[[str], _T],
required: bool = ...,
store: StoreArg = ...,
default: list[_T] | None = ...,
) -> list[_T] | None: ...
def get_param_as_list(
self,
name: str,
transform: Callable[[str], _T] | None = None,
required: bool = False,
store: StoreArg = None,
default: list[_T] | None = None,
) -> list[_T] | list[str] | None:
"""Return the value of a query string parameter as a list.
List items must be comma-separated or must be provided
as multiple instances of the same param in the query string
ala *application/x-www-form-urlencoded*.
Note:
To enable the interpretation of comma-separated parameter values,
the :attr:`~falcon.RequestOptions.auto_parse_qs_csv` option must
be set to ``True`` (default ``False``).
Args:
name (str): Parameter name, case-sensitive (e.g., 'ids').
Keyword Args:
transform (callable): An optional transform function
that takes as input each element in the list as a ``str`` and
outputs a transformed element for inclusion in the list that
will be returned. For example, passing ``int`` will
transform list items into numbers.
required (bool): Set to ``True`` to raise ``HTTPBadRequest``
instead of returning ``None`` when the parameter is not
found (default ``False``).
store (dict): A ``dict``-like object in which to place
the value of the param, but only if the param is found (default
``None``).
default (any): If the param is not found returns the
given value instead of ``None``
Returns:
list: The value of the param if it is found. Otherwise, returns
``None`` unless *required* is ``True``.
Empty list elements will be included by default, but this behavior
can be configured by setting the
:attr:`~falcon.RequestOptions.keep_blank_qs_values` option. For
example, by default the following query strings would both result in
``['1', '', '3']``::
things=1&things=&things=3
things=1,,3
Note, however, that for the second example string above to be
interpreted as a list, the
:attr:`~falcon.RequestOptions.auto_parse_qs_csv` option must be
set to ``True``.
Raises:
HTTPBadRequest: A required param is missing from the request, or
a transform function raised an instance of ``ValueError``.
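        For example, assuming ``req`` is the ``Request`` instance passed to a
        responder, a query string of ``ids=1,2,3`` (with
        :attr:`~falcon.RequestOptions.auto_parse_qs_csv` enabled) or
        ``ids=1&ids=2&ids=3`` could be read as::
            ids = req.get_param_as_list('ids', transform=int)  # [1, 2, 3]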
"""
params = self._params
# PERF: Use if..in since it is a good all-around performer; we don't
# know how likely params are to be specified by clients.
if name in params:
items = params[name]
# NOTE(warsaw): When a key appears multiple times in the request
# query, it will already be represented internally as a list.
# NOTE(kgriffs): Likewise for comma-delimited values.
if not isinstance(items, list):
items = [items]
items_ret: list[str] | list[_T]
# PERF(kgriffs): Use if-else rather than a DRY approach
# that sets transform to a passthrough function; avoids
# function calling overhead.
if transform is not None:
try:
items_ret = [transform(i) for i in items]
except ValueError:
msg = 'The value is not formatted correctly.'
raise errors.HTTPInvalidParam(msg, name)
else:
items_ret = items
if store is not None:
store[name] = items_ret
return items_ret
if not required:
return default
raise errors.HTTPMissingParam(name)
@overload
def get_param_as_datetime(
self,
name: str,
format_string: str = ...,
*,
required: Literal[True],
store: StoreArg = ...,
default: datetime | None = ...,
) -> datetime: ...
@overload
def get_param_as_datetime(
self,
name: str,
format_string: str = ...,
required: bool = ...,
store: StoreArg = ...,
*,
default: datetime,
) -> datetime: ...
@overload
def get_param_as_datetime(
self,
name: str,
format_string: str = ...,
required: bool = ...,
store: StoreArg = ...,
default: datetime | None = ...,
) -> datetime | None: ...
def get_param_as_datetime(
self,
name: str,
format_string: str = '%Y-%m-%dT%H:%M:%S%z',
required: bool = False,
store: StoreArg = None,
default: datetime | None = None,
) -> datetime | None:
"""Return the value of a query string parameter as a datetime.
Args:
name (str): Parameter name, case-sensitive (e.g., 'ids').
Keyword Args:
format_string (str): String used to parse the param value
into a ``datetime``. Any format recognized by strptime() is
supported (default ``'%Y-%m-%dT%H:%M:%S%z'``).
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning ``None`` when the
parameter is not found (default ``False``).
store (dict): A ``dict``-like object in which to place
the value of the param, but only if the param is found (default
``None``).
            default (any): If the param is not found, returns the
                given value instead of ``None``.
Returns:
datetime.datetime: The value of the param if it is found and can be
converted to a ``datetime`` according to the supplied format
string. If the param is not found, returns ``None`` unless
required is ``True``.
Raises:
HTTPBadRequest: A required param is missing from the request, or
the value could not be converted to a ``datetime``.
.. versionchanged:: 4.0
The default value of `format_string` was changed from
``'%Y-%m-%dT%H:%M:%SZ'`` to ``'%Y-%m-%dT%H:%M:%S%z'``.
            The new format is a superset of the old one in terms of parsing;
            however, the converted :class:`~datetime.datetime` object is now
            timezone-aware.
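        For example, assuming ``req`` is the ``Request`` instance passed to a
        responder, a query string of ``when=2024-05-01T12:00:00Z`` could be
        read as::
            when = req.get_param_as_datetime('when')
            # datetime.datetime(2024, 5, 1, 12, 0, tzinfo=datetime.timezone.utc)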
"""
param_value = self.get_param(name, required=required)
if param_value is None:
return default
try:
date_time = strptime(param_value, format_string)
except ValueError:
msg = 'The date value does not match the required format.'
raise errors.HTTPInvalidParam(msg, name)
if store is not None:
store[name] = date_time
return date_time
@overload
def get_param_as_date(
self,
name: str,
format_string: str = ...,
*,
required: Literal[True],
store: StoreArg = ...,
default: py_date | None = ...,
) -> py_date: ...
@overload
def get_param_as_date(
self,
name: str,
format_string: str = ...,
required: bool = ...,
store: StoreArg = ...,
*,
default: py_date,
) -> py_date: ...
@overload
def get_param_as_date(
self,
name: str,
format_string: str = ...,
required: bool = ...,
store: StoreArg = ...,
default: py_date | None = ...,
) -> py_date | None: ...
def get_param_as_date(
self,
name: str,
format_string: str = '%Y-%m-%d',
required: bool = False,
store: StoreArg = None,
default: py_date | None = None,
) -> py_date | None:
"""Return the value of a query string parameter as a date.
Args:
name (str): Parameter name, case-sensitive (e.g., 'ids').
Keyword Args:
format_string (str): String used to parse the param value
into a date. Any format recognized by strptime() is
supported (default ``"%Y-%m-%d"``).
required (bool): Set to ``True`` to raise
``HTTPBadRequest`` instead of returning ``None`` when the
parameter is not found (default ``False``).
store (dict): A ``dict``-like object in which to place
the value of the param, but only if the param is found (default
``None``).
            default (any): If the param is not found, returns the
                given value instead of ``None``.
Returns:
datetime.date: The value of the param if it is found and can be
converted to a ``date`` according to the supplied format
string. If the param is not found, returns ``None`` unless
required is ``True``.
Raises:
HTTPBadRequest: A required param is missing from the request, or
the value could not be converted to a ``date``.
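        For example, assuming ``req`` is the ``Request`` instance passed to a
        responder, a query string of ``since=2024-05-01`` could be read as::
            since = req.get_param_as_date('since')  # datetime.date(2024, 5, 1)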
"""
date_time = self.get_param_as_datetime(name, format_string, required)
if date_time:
date = date_time.date()
else:
return default
if store is not None:
store[name] = date
return date
def get_param_as_json(
self,
name: str,
required: bool = False,
store: StoreArg = None,
default: Any | None = None,
) -> Any:
"""Return the decoded JSON value of a query string parameter.
        Given a JSON value, decode it to an appropriate Python type
(e.g., ``dict``, ``list``, ``str``, ``int``, ``bool``, etc.)
Warning:
If the :attr:`~falcon.RequestOptions.auto_parse_qs_csv` option is
set to ``True`` (default ``False``), the framework will
misinterpret any JSON values that include literal
(non-percent-encoded) commas. If the query string may include
JSON, you can use JSON array syntax in lieu of CSV as a workaround.
Args:
name (str): Parameter name, case-sensitive (e.g., 'payload').
Keyword Args:
required (bool): Set to ``True`` to raise ``HTTPBadRequest``
instead of returning ``None`` when the parameter is not
found (default ``False``).
store (dict): A ``dict``-like object in which to place the
value of the param, but only if the param is found
(default ``None``).
            default (any): If the param is not found, returns the
                given value instead of ``None``.
Returns:
dict: The value of the param if it is found. Otherwise, returns
``None`` unless required is ``True``.
Raises:
HTTPBadRequest: A required param is missing from the request, or
the value could not be parsed as JSON.
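        For example, assuming ``req`` is the ``Request`` instance passed to a
        responder, a query string of ``payload=%7B%22limit%22%3A%2010%7D``
        (i.e., ``payload={"limit": 10}``) could be read as::
            payload = req.get_param_as_json('payload')  # {'limit': 10}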
"""
param_value = self.get_param(name, required=required)
if param_value is None:
return default
handler, _, _ = self.options.media_handlers._resolve(
MEDIA_JSON, MEDIA_JSON, raise_not_found=False
)
if handler is None:
handler = _DEFAULT_JSON_HANDLER
try:
# TODO(CaselIT): find a way to avoid encode + BytesIO if handlers
# interface is refactored. Possibly using the WS interface?
val = handler.deserialize(
BytesIO(param_value.encode()), MEDIA_JSON, len(param_value)
)
except errors.HTTPBadRequest:
msg = 'It could not be parsed as JSON.'
raise errors.HTTPInvalidParam(msg, name)
if store is not None:
store[name] = val
return val
def has_param(self, name: str) -> bool:
"""Determine whether or not the query string parameter already exists.
Args:
name (str): Parameter name, case-sensitive (e.g., 'sort').
Returns:
bool: ``True`` if param is found, or ``False`` if param is
not found.
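        For example, assuming ``req`` is the ``Request`` instance passed to a
        responder::
            sortable = req.has_param('sort')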
"""
return name in self._params
def log_error(self, message: str) -> None:
"""Write an error message to the server's log.
Prepends timestamp and request info to message, and writes the
        result out to the WSGI server's error stream (`wsgi.errors`).
Args:
message (str): Description of the problem.
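        For example, assuming ``req`` is the ``Request`` instance passed to a
        responder::
            req.log_error('Invalid cursor value; falling back to page 1')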
"""
if self.query_string:
query_string_formatted = '?' + self.query_string
else:
query_string_formatted = ''
log_line = DEFAULT_ERROR_LOG_FORMAT.format(
now(), self.method, self.path, query_string_formatted
)
self._wsgierrors.write(log_line + message + '\n')
# ------------------------------------------------------------------------
# Helpers
# ------------------------------------------------------------------------
def _get_wrapped_wsgi_input(self) -> BoundedStream:
try:
content_length = self.content_length or 0
# NOTE(kgriffs): This branch is indeed covered in test_wsgi.py
# even though coverage isn't able to detect it.
except errors.HTTPInvalidHeader: # pragma: no cover
# NOTE(kgriffs): The content-length header was specified,
# but it had an invalid value. Assume no content.
content_length = 0
return BoundedStream(self.env['wsgi.input'], content_length)
def _parse_form_urlencoded(self) -> None:
content_length = self.content_length
if not content_length:
return
body_bytes = self.stream.read(content_length)
# NOTE(kgriffs): According to
# https://html.spec.whatwg.org/multipage/form-control-infrastructure.html#application%2Fx-www-form-urlencoded-encoding-algorithm
        # the body should be US-ASCII. Enforcing this also helps
        # catch malicious input.
try:
body = body_bytes.decode('ascii')
except UnicodeDecodeError:
body = None
self.log_error(
'Non-ASCII characters found in form body '
'with Content-Type of '
'application/x-www-form-urlencoded. Body '
'will be ignored.'
)
if body:
extra_params = parse_query_string(
body,
keep_blank=self.options.keep_blank_qs_values,
csv=self.options.auto_parse_qs_csv,
)
self._params.update(extra_params)
# PERF: To avoid typos and improve storage space and speed over a dict.
|
Request
|
python
|
pypa__pipenv
|
pipenv/patched/pip/_internal/resolution/resolvelib/candidates.py
|
{
"start": 4107,
"end": 9287
}
|
class ____(Candidate):
"""A candidate backed by an ``InstallRequirement``.
This represents a package request with the target not being already
in the environment, and needs to be fetched and installed. The backing
``InstallRequirement`` is responsible for most of the leg work; this
class exposes appropriate information to the resolver.
:param link: The link passed to the ``InstallRequirement``. The backing
``InstallRequirement`` will use this link to fetch the distribution.
:param source_link: The link this candidate "originates" from. This is
different from ``link`` when the link is found in the wheel cache.
``link`` would point to the wheel cache, while this points to the
found remote link (e.g. from pypi.org).
"""
dist: BaseDistribution
is_installed = False
def __init__(
self,
link: Link,
source_link: Link,
ireq: InstallRequirement,
factory: "Factory",
name: Optional[NormalizedName] = None,
version: Optional[Version] = None,
) -> None:
self._link = link
self._source_link = source_link
self._factory = factory
self._ireq = ireq
self._name = name
self._version = version
self.dist = self._prepare()
self._hash: Optional[int] = None
def __str__(self) -> str:
return f"{self.name} {self.version}"
def __repr__(self) -> str:
return f"{self.__class__.__name__}({str(self._link)!r})"
def __hash__(self) -> int:
if self._hash is not None:
return self._hash
self._hash = hash((self.__class__, self._link))
return self._hash
def __eq__(self, other: Any) -> bool:
if isinstance(other, self.__class__):
return links_equivalent(self._link, other._link)
return False
@property
def source_link(self) -> Optional[Link]:
return self._source_link
@property
def project_name(self) -> NormalizedName:
"""The normalised name of the project the candidate refers to"""
if self._name is None:
self._name = self.dist.canonical_name
return self._name
@property
def name(self) -> str:
return self.project_name
@property
def version(self) -> Version:
if self._version is None:
self._version = self.dist.version
return self._version
def format_for_error(self) -> str:
return (
f"{self.name} {self.version} "
f"(from {self._link.file_path if self._link.is_file else self._link})"
)
def _prepare_distribution(self) -> BaseDistribution:
raise NotImplementedError("Override in subclass")
def _check_metadata_consistency(self, dist: BaseDistribution) -> None:
"""Check for consistency of project name and version of dist."""
if self._name is not None and self._name != dist.canonical_name:
raise MetadataInconsistent(
self._ireq,
"name",
self._name,
dist.canonical_name,
)
if self._version is not None and self._version != dist.version:
raise MetadataInconsistent(
self._ireq,
"version",
str(self._version),
str(dist.version),
)
# check dependencies are valid
# TODO performance: this means we iterate the dependencies at least twice,
# we may want to cache parsed Requires-Dist
try:
list(dist.iter_dependencies(list(dist.iter_provided_extras())))
except InvalidRequirement as e:
raise MetadataInvalid(self._ireq, str(e))
def _prepare(self) -> BaseDistribution:
try:
dist = self._prepare_distribution()
except HashError as e:
# Provide HashError the underlying ireq that caused it. This
# provides context for the resulting error message to show the
# offending line to the user.
e.req = self._ireq
raise
except InstallationSubprocessError as exc:
# The output has been presented already, so don't duplicate it.
exc.context = "See above for output."
raise
self._check_metadata_consistency(dist)
return dist
def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
# Emit the Requires-Python requirement first to fail fast on
# unsupported candidates and avoid pointless downloads/preparation.
yield self._factory.make_requires_python_requirement(self.dist.requires_python)
requires = self.dist.iter_dependencies() if with_requires else ()
for r in requires:
yield from self._factory.make_requirements_from_spec(str(r), self._ireq)
def get_install_requirement(self) -> Optional[InstallRequirement]:
ireq = self._ireq
if self._version and ireq.req and not ireq.req.url:
ireq.req.specifier = SpecifierSet(f"=={self._version}")
return ireq
|
_InstallRequirementBackedCandidate
|
python
|
microsoft__pyright
|
packages/pyright-internal/src/tests/samples/metaclass8.py
|
{
"start": 141,
"end": 251
}
|
class ____(type, Generic[T]): ...
# This should generate an error because generic metaclasses are not allowed.
|
A
|
python
|
great-expectations__great_expectations
|
great_expectations/metrics/batch/row_count.py
|
{
"start": 178,
"end": 298
}
|
class ____(BatchMetric[BatchRowCountResult]):
"""Count of rows in a table"""
name = "table.row_count"
|
BatchRowCount
|
python
|
marshmallow-code__marshmallow
|
src/marshmallow/fields.py
|
{
"start": 47534,
"end": 48746
}
|
class ____(DateTime):
"""A formatted aware datetime string.
:param format: See :class:`DateTime`.
:param default_timezone: Used on deserialization. If `None`, naive
        datetimes are rejected. If not `None`, naive datetimes are set to this
        timezone.
:param kwargs: The same keyword arguments that :class:`Field` receives.
.. versionadded:: 3.0.0rc9
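    Example (a minimal usage sketch; attaching the field to a ``Schema`` is
    omitted here)::
        import datetime as dt
        field = AwareDateTime(default_timezone=dt.timezone.utc)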
"""
AWARENESS = "aware"
def __init__(
self,
format: str | None = None, # noqa: A002
*,
default_timezone: dt.tzinfo | None = None,
**kwargs: Unpack[_BaseFieldKwargs],
) -> None:
super().__init__(format=format, **kwargs)
self.default_timezone = default_timezone
def _deserialize(self, value, attr, data, **kwargs) -> dt.datetime:
ret = super()._deserialize(value, attr, data, **kwargs)
if not utils.is_aware(ret):
if self.default_timezone is None:
raise self.make_error(
"invalid_awareness",
awareness=self.AWARENESS,
obj_type=self.OBJ_TYPE,
)
ret = ret.replace(tzinfo=self.default_timezone)
return ret
|
AwareDateTime
|
python
|
coleifer__peewee
|
tests/sqlite_changelog.py
|
{
"start": 798,
"end": 973
}
|
class ____(TestModel):
data = JSONField() # Diff of json?
changelog = ChangeLog(database)
CL = changelog.model
@skip_unless(json_installed(), 'requires sqlite json1')
|
CT2
|
python
|
realpython__materials
|
asterioids-pygame-project/source_code_step_8/space_rocks/game.py
|
{
"start": 106,
"end": 3115
}
|
class ____:
MIN_ASTEROID_DISTANCE = 250
def __init__(self):
self._init_pygame()
self.screen = pygame.display.set_mode((800, 600))
self.background = load_sprite("space", False)
self.clock = pygame.time.Clock()
self.asteroids = []
self.bullets = []
self.spaceship = Spaceship((400, 300), self.bullets.append)
for _ in range(6):
while True:
position = get_random_position(self.screen)
if (
position.distance_to(self.spaceship.position)
> self.MIN_ASTEROID_DISTANCE
):
break
self.asteroids.append(Asteroid(position, self.asteroids.append))
def main_loop(self):
while True:
self._handle_input()
self._process_game_logic()
self._draw()
def _init_pygame(self):
pygame.init()
pygame.display.set_caption("Space Rocks")
def _handle_input(self):
for event in pygame.event.get():
if event.type == pygame.QUIT or (
event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE
):
quit()
elif (
self.spaceship
and event.type == pygame.KEYDOWN
and event.key == pygame.K_SPACE
):
self.spaceship.shoot()
is_key_pressed = pygame.key.get_pressed()
if self.spaceship:
if is_key_pressed[pygame.K_RIGHT]:
self.spaceship.rotate(clockwise=True)
elif is_key_pressed[pygame.K_LEFT]:
self.spaceship.rotate(clockwise=False)
if is_key_pressed[pygame.K_UP]:
self.spaceship.accelerate()
def _process_game_logic(self):
for game_object in self._get_game_objects():
game_object.move(self.screen)
if self.spaceship:
for asteroid in self.asteroids:
if asteroid.collides_with(self.spaceship):
self.spaceship = None
break
for bullet in self.bullets[:]:
for asteroid in self.asteroids[:]:
if asteroid.collides_with(bullet):
self.asteroids.remove(asteroid)
self.bullets.remove(bullet)
asteroid.split()
break
for bullet in self.bullets[:]:
if not self.screen.get_rect().collidepoint(bullet.position):
self.bullets.remove(bullet)
def _draw(self):
self.screen.blit(self.background, (0, 0))
for game_object in self._get_game_objects():
game_object.draw(self.screen)
pygame.display.flip()
self.clock.tick(60)
def _get_game_objects(self):
game_objects = [*self.asteroids, *self.bullets]
if self.spaceship:
game_objects.append(self.spaceship)
return game_objects
|
SpaceRocks
|
python
|
PrefectHQ__prefect
|
src/integrations/prefect-dbt/tests/cloud/test_runs.py
|
{
"start": 3626,
"end": 7572
}
|
class ____:
async def test_get_artifact_success(self, dbt_cloud_credentials):
with respx.mock(using="httpx") as respx_mock:
respx_mock.get(
"https://cloud.getdbt.com/api/v2/accounts/123456789/runs/12/artifacts/manifest.json", # noqa
headers={"Authorization": "Bearer my_api_key"},
).mock(
return_value=Response(
200,
json={
"metadata": {
"dbt_schema_version": "https://schemas.getdbt.com/dbt/catalog/v1.json", # noqa
"dbt_version": "1.1.1",
}
},
)
)
response = await get_dbt_cloud_run_artifact.fn(
dbt_cloud_credentials=dbt_cloud_credentials,
run_id=12,
path="manifest.json",
)
assert response == {
"metadata": {
"dbt_schema_version": "https://schemas.getdbt.com/dbt/catalog/v1.json",
"dbt_version": "1.1.1",
}
}
async def test_get_non_json_artifact(self, dbt_cloud_credentials):
with respx.mock(using="httpx") as respx_mock:
respx_mock.get(
"https://cloud.getdbt.com/api/v2/accounts/123456789/runs/12/artifacts/compiled/dbt_artifacts/models/dim_dbt__current_models.sql", # noqa
headers={"Authorization": "Bearer my_api_key"},
).mock(return_value=Response(200, text="Hi! I'm some SQL!"))
response = await get_dbt_cloud_run_artifact.fn(
dbt_cloud_credentials=dbt_cloud_credentials,
run_id=12,
path="compiled/dbt_artifacts/models/dim_dbt__current_models.sql",
)
assert response == "Hi! I'm some SQL!"
async def test_get_artifact_with_step(self, dbt_cloud_credentials):
with respx.mock(using="httpx") as respx_mock:
respx_mock.get(
"https://cloud.getdbt.com/api/v2/accounts/123456789/runs/12/artifacts/manifest.json?step=1", # noqa
headers={"Authorization": "Bearer my_api_key"},
).mock(
return_value=Response(
200,
json={
"metadata": {
"dbt_schema_version": "https://schemas.getdbt.com/dbt/catalog/v1.json", # noqa
"dbt_version": "1.1.1",
}
},
)
)
response = await get_dbt_cloud_run_artifact.fn(
dbt_cloud_credentials=dbt_cloud_credentials,
run_id=12,
path="manifest.json",
step=1,
)
assert response == {
"metadata": {
"dbt_schema_version": "https://schemas.getdbt.com/dbt/catalog/v1.json",
"dbt_version": "1.1.1",
}
}
async def test_get_artifact_failure(self, dbt_cloud_credentials):
with respx.mock(using="httpx") as respx_mock:
respx_mock.get(
"https://cloud.getdbt.com/api/v2/accounts/123456789/runs/12/artifacts/manifest.json", # noqa
headers={"Authorization": "Bearer my_api_key"},
).mock(
return_value=Response(
500, json={"status": {"user_message": "This is what went wrong"}}
)
)
with pytest.raises(
DbtCloudGetRunArtifactFailed, match="This is what went wrong"
):
await get_dbt_cloud_run_artifact.fn(
dbt_cloud_credentials=dbt_cloud_credentials,
run_id=12,
path="manifest.json",
)
|
TestDbtCloudGetRunArtifact
|
python
|
facebookresearch__faiss
|
tests/test_index.py
|
{
"start": 13603,
"end": 17671
}
|
class ____(unittest.TestCase):
def run_search_and_reconstruct(self, index, xb, xq, k=10, eps=None):
n, d = xb.shape
assert xq.shape[1] == d
assert index.d == d
D_ref, I_ref = index.search(xq, k)
R_ref = index.reconstruct_n(0, n)
D, I, R = index.search_and_reconstruct(xq, k)
np.testing.assert_almost_equal(D, D_ref, decimal=5)
check_ref_knn_with_draws(D_ref, I_ref, D, I)
self.assertEqual(R.shape[:2], I.shape)
self.assertEqual(R.shape[2], d)
# (n, k, ..) -> (n * k, ..)
I_flat = I.reshape(-1)
R_flat = R.reshape(-1, d)
# Filter out -1s when not enough results
R_flat = R_flat[I_flat >= 0]
I_flat = I_flat[I_flat >= 0]
recons_ref_err = np.mean(np.linalg.norm(R_flat - R_ref[I_flat]))
self.assertLessEqual(recons_ref_err, 1e-6)
def norm1(x):
return np.sqrt((x ** 2).sum(axis=1))
recons_err = np.mean(norm1(R_flat - xb[I_flat]))
if eps is not None:
self.assertLessEqual(recons_err, eps)
return D, I, R
def test_IndexFlat(self):
d = 32
nb = 1000
nt = 1500
nq = 200
(xt, xb, xq) = get_dataset(d, nb, nt, nq)
index = faiss.IndexFlatL2(d)
index.add(xb)
self.run_search_and_reconstruct(index, xb, xq, eps=0.0)
def test_IndexIVFFlat(self):
d = 32
nb = 1000
nt = 1500
nq = 200
(xt, xb, xq) = get_dataset(d, nb, nt, nq)
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFFlat(quantizer, d, 32, faiss.METRIC_L2)
index.cp.min_points_per_centroid = 5 # quiet warning
index.nprobe = 4
index.train(xt)
index.add(xb)
self.run_search_and_reconstruct(index, xb, xq, eps=0.0)
def test_IndexIVFFlatPanorama(self):
d = 32
nb = 1000
nt = 1500
nq = 200
nlevels = 4
(xt, xb, xq) = get_dataset(d, nb, nt, nq)
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFFlatPanorama(quantizer, d, 32, nlevels)
index.cp.min_points_per_centroid = 5 # quiet warning
index.nprobe = 4
index.train(xt)
index.add(xb)
self.run_search_and_reconstruct(index, xb, xq, eps=0.0)
def test_IndexIVFPQ(self):
d = 32
nb = 1000
nt = 1500
nq = 200
(xt, xb, xq) = get_dataset(d, nb, nt, nq)
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 32, 8, 8)
index.cp.min_points_per_centroid = 5 # quiet warning
index.nprobe = 4
index.train(xt)
index.add(xb)
self.run_search_and_reconstruct(index, xb, xq, eps=1.0)
def test_IndexIVFRQ(self):
d = 32
nb = 1000
nt = 1500
nq = 200
(xt, xb, xq) = get_dataset(d, nb, nt, nq)
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFResidualQuantizer(quantizer, d, 32, 8, 8)
index.cp.min_points_per_centroid = 5 # quiet warning
index.nprobe = 4
index.train(xt)
index.add(xb)
self.run_search_and_reconstruct(index, xb, xq, eps=1.0)
def test_MultiIndex(self):
d = 32
nb = 1000
nt = 1500
nq = 200
(xt, xb, xq) = get_dataset(d, nb, nt, nq)
index = faiss.index_factory(d, "IMI2x5,PQ8np")
faiss.ParameterSpace().set_index_parameter(index, "nprobe", 4)
index.train(xt)
index.add(xb)
self.run_search_and_reconstruct(index, xb, xq, eps=1.0)
def test_IndexTransform(self):
d = 32
nb = 1000
nt = 1500
nq = 200
(xt, xb, xq) = get_dataset(d, nb, nt, nq)
index = faiss.index_factory(d, "L2norm,PCA8,IVF32,PQ8np")
faiss.ParameterSpace().set_index_parameter(index, "nprobe", 4)
index.train(xt)
index.add(xb)
self.run_search_and_reconstruct(index, xb, xq)
|
TestSearchAndReconstruct
|
python
|
PrefectHQ__prefect
|
src/prefect/client/orchestration/_deployments/client.py
|
{
"start": 25038,
"end": 48988
}
|
class ____(BaseAsyncClient):
async def create_deployment(
self,
flow_id: UUID,
name: str,
version: str | None = None,
version_info: "VersionInfo | None" = None,
schedules: list["DeploymentScheduleCreate"] | None = None,
concurrency_limit: int | None = None,
concurrency_options: "ConcurrencyOptions | None" = None,
parameters: dict[str, Any] | None = None,
description: str | None = None,
work_queue_name: str | None = None,
work_pool_name: str | None = None,
tags: list[str] | None = None,
storage_document_id: UUID | None = None,
path: str | None = None,
entrypoint: str | None = None,
infrastructure_document_id: UUID | None = None,
parameter_openapi_schema: dict[str, Any] | None = None,
paused: bool | None = None,
pull_steps: list[dict[str, Any]] | None = None,
enforce_parameter_schema: bool | None = None,
job_variables: dict[str, Any] | None = None,
branch: str | None = None,
base: UUID | None = None,
root: UUID | None = None,
) -> UUID:
"""
Create a deployment.
Args:
flow_id: the flow ID to create a deployment for
name: the name of the deployment
version: an optional version string for the deployment
tags: an optional list of tags to apply to the deployment
            storage_document_id: a reference to the storage block document
used for the deployed flow
            infrastructure_document_id: a reference to the infrastructure block document
to use for this deployment
job_variables: A dictionary of dot delimited infrastructure overrides that
will be applied at runtime; for example `env.CONFIG_KEY=config_value` or
`namespace='prefect'`. This argument was previously named `infra_overrides`.
Both arguments are supported for backwards compatibility.
Raises:
RequestError: if the deployment was not created for any reason
Returns:
the ID of the deployment in the backend
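        Example (a sketch; assumes ``client`` is an instance of this client and
        ``flow_id`` is the ID of an existing flow):
            deployment_id = await client.create_deployment(
                flow_id=flow_id,
                name="my-deployment",
            )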
"""
from prefect.client.schemas.actions import DeploymentCreate
deployment_create = DeploymentCreate(
flow_id=flow_id,
name=name,
version=version,
version_info=version_info,
parameters=dict(parameters or {}),
tags=list(tags or []),
work_queue_name=work_queue_name,
description=description,
storage_document_id=storage_document_id,
path=path,
entrypoint=entrypoint,
infrastructure_document_id=infrastructure_document_id,
job_variables=dict(job_variables or {}),
parameter_openapi_schema=parameter_openapi_schema or {},
paused=paused,
schedules=schedules or [],
concurrency_limit=concurrency_limit,
concurrency_options=concurrency_options,
pull_steps=pull_steps,
enforce_parameter_schema=enforce_parameter_schema,
branch=branch,
base=base,
root=root,
)
if work_pool_name is not None:
deployment_create.work_pool_name = work_pool_name
# Exclude newer fields that are not set to avoid compatibility issues
exclude = {
field
for field in [
"work_pool_name",
"work_queue_name",
]
if field not in deployment_create.model_fields_set
}
exclude_if_none = [
"paused",
"pull_steps",
"enforce_parameter_schema",
"version_info",
"branch",
"base",
"root",
]
for field in exclude_if_none:
if getattr(deployment_create, field) is None:
exclude.add(field)
payload = deployment_create.model_dump(mode="json", exclude=exclude)
if deployment_create.version_info:
payload["version_info"] = deployment_create.version_info.model_dump(
mode="json"
)
try:
response = await self.request("POST", "/deployments/", json=payload)
except HTTPStatusError as e:
if e.response.status_code == 403 and "maximum number of deployments" in str(
e
):
raise ObjectLimitReached(http_exc=e) from e
if e.response.status_code == 409:
raise ObjectAlreadyExists(http_exc=e) from e
else:
raise
deployment_id = response.json().get("id")
if not deployment_id:
raise RequestError(f"Malformed response: {response}")
return UUID(deployment_id)
async def _set_deployment_paused_state(
self, deployment_id: UUID, paused: bool
) -> None:
await self.request(
"PATCH",
"/deployments/{id}",
path_params={"id": deployment_id},
json={"paused": paused},
)
@deprecated_callable(
start_date="Jun 2025",
help="Use pause_deployment or resume_deployment instead.",
)
async def set_deployment_paused_state(
self, deployment_id: UUID, paused: bool
) -> None:
"""
DEPRECATED: Use pause_deployment or resume_deployment instead.
Set the paused state of a deployment.
Args:
deployment_id: the deployment ID to update
paused: whether the deployment should be paused
"""
await self._set_deployment_paused_state(deployment_id, paused)
async def pause_deployment(self, deployment_id: Union[UUID, str]) -> None:
"""
Pause a deployment by ID.
Args:
deployment_id: The deployment ID of interest (can be a UUID or a string).
Raises:
ObjectNotFound: If request returns 404
RequestError: If request fails
"""
if not isinstance(deployment_id, UUID):
try:
deployment_id = UUID(deployment_id)
except ValueError:
raise ValueError(f"Invalid deployment ID: {deployment_id}")
try:
await self._set_deployment_paused_state(deployment_id, paused=True)
except HTTPStatusError as e:
if e.response.status_code == 404:
raise ObjectNotFound(http_exc=e) from e
else:
raise
async def resume_deployment(self, deployment_id: Union[UUID, str]) -> None:
"""
Resume (unpause) a deployment by ID.
Args:
deployment_id: The deployment ID of interest (can be a UUID or a string).
Raises:
ObjectNotFound: If request returns 404
RequestError: If request fails
"""
if not isinstance(deployment_id, UUID):
try:
deployment_id = UUID(deployment_id)
except ValueError:
raise ValueError(f"Invalid deployment ID: {deployment_id}")
try:
await self._set_deployment_paused_state(deployment_id, paused=False)
except HTTPStatusError as e:
if e.response.status_code == 404:
raise ObjectNotFound(http_exc=e) from e
else:
raise
async def update_deployment(
self,
deployment_id: UUID,
deployment: "DeploymentUpdate",
) -> None:
exclude_if_none = [
"version_info",
]
exclude = {"name", "flow_name", "triggers"}
for field in exclude_if_none:
if getattr(deployment, field) is None:
exclude.add(field)
payload = deployment.model_dump(
mode="json",
exclude_unset=True,
exclude=exclude,
)
if deployment.version_info:
payload["version_info"] = deployment.version_info.model_dump(mode="json")
await self.request(
"PATCH",
"/deployments/{id}",
path_params={"id": deployment_id},
json=payload,
)
async def _create_deployment_from_schema(self, schema: "DeploymentCreate") -> UUID:
"""
Create a deployment from a prepared `DeploymentCreate` schema.
"""
# TODO: We are likely to remove this method once we have considered the
# packaging interface for deployments further.
response = await self.request(
"POST", "/deployments/", json=schema.model_dump(mode="json")
)
deployment_id = response.json().get("id")
if not deployment_id:
raise RequestError(f"Malformed response: {response}")
return UUID(deployment_id)
async def read_deployment(
self,
deployment_id: Union[UUID, str],
) -> "DeploymentResponse":
"""
Query the Prefect API for a deployment by id.
Args:
deployment_id: the deployment ID of interest
Returns:
a Deployment model representation of the deployment
"""
from prefect.client.schemas.responses import DeploymentResponse
if not isinstance(deployment_id, UUID):
try:
deployment_id = UUID(deployment_id)
except ValueError:
raise ValueError(f"Invalid deployment ID: {deployment_id}")
try:
response = await self.request(
"GET",
"/deployments/{id}",
path_params={"id": deployment_id},
)
except HTTPStatusError as e:
if e.response.status_code == 404:
raise ObjectNotFound(http_exc=e) from e
else:
raise
return DeploymentResponse.model_validate(response.json())
async def read_deployment_by_name(
self,
name: str,
) -> "DeploymentResponse":
"""
Query the Prefect API for a deployment by name.
Args:
name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>
Raises:
ObjectNotFound: If request returns 404
RequestError: If request fails
Returns:
a Deployment model representation of the deployment
"""
from prefect.client.schemas.responses import DeploymentResponse
try:
flow_name, deployment_name = name.split("/")
response = await self.request(
"GET",
"/deployments/name/{flow_name}/{deployment_name}",
path_params={
"flow_name": flow_name,
"deployment_name": deployment_name,
},
)
except (HTTPStatusError, ValueError) as e:
if isinstance(e, HTTPStatusError) and e.response.status_code == 404:
raise ObjectNotFound(http_exc=e) from e
elif isinstance(e, ValueError):
raise ValueError(
f"Invalid deployment name format: {name}. Expected format: <FLOW_NAME>/<DEPLOYMENT_NAME>"
) from e
else:
raise
return DeploymentResponse.model_validate(response.json())
async def read_deployments(
self,
*,
flow_filter: "FlowFilter | None" = None,
flow_run_filter: "FlowRunFilter | None" = None,
task_run_filter: "TaskRunFilter | None" = None,
deployment_filter: "DeploymentFilter | None" = None,
work_pool_filter: "WorkPoolFilter | None" = None,
work_queue_filter: "WorkQueueFilter | None" = None,
limit: int | None = None,
sort: "DeploymentSort | None" = None,
offset: int = 0,
) -> list["DeploymentResponse"]:
"""
Query the Prefect API for deployments. Only deployments matching all
the provided criteria will be returned.
Args:
flow_filter: filter criteria for flows
flow_run_filter: filter criteria for flow runs
task_run_filter: filter criteria for task runs
deployment_filter: filter criteria for deployments
work_pool_filter: filter criteria for work pools
work_queue_filter: filter criteria for work pool queues
limit: a limit for the deployment query
offset: an offset for the deployment query
Returns:
a list of Deployment model representations
of the deployments
"""
from prefect.client.schemas.responses import DeploymentResponse
body: dict[str, Any] = {
"flows": flow_filter.model_dump(mode="json") if flow_filter else None,
"flow_runs": (
flow_run_filter.model_dump(mode="json", exclude_unset=True)
if flow_run_filter
else None
),
"task_runs": (
task_run_filter.model_dump(mode="json") if task_run_filter else None
),
"deployments": (
deployment_filter.model_dump(mode="json") if deployment_filter else None
),
"work_pools": (
work_pool_filter.model_dump(mode="json") if work_pool_filter else None
),
"work_pool_queues": (
work_queue_filter.model_dump(mode="json") if work_queue_filter else None
),
"limit": limit,
"offset": offset,
"sort": sort,
}
response = await self.request("POST", "/deployments/filter", json=body)
return DeploymentResponse.model_validate_list(response.json())
async def delete_deployment(
self,
deployment_id: UUID,
) -> None:
"""
Delete deployment by id.
Args:
deployment_id: The deployment id of interest.
Raises:
ObjectNotFound: If request returns 404
RequestError: If requests fails
"""
try:
await self.request(
"DELETE",
"/deployments/{id}",
path_params={"id": deployment_id},
)
except HTTPStatusError as e:
if e.response.status_code == 404:
raise ObjectNotFound(http_exc=e) from e
else:
raise
async def create_deployment_schedules(
self,
deployment_id: UUID,
schedules: list[tuple["SCHEDULE_TYPES", bool]],
) -> list["DeploymentSchedule"]:
"""
Create deployment schedules.
Args:
deployment_id: the deployment ID
schedules: a list of tuples containing the schedule to create
and whether or not it should be active.
Raises:
RequestError: if the schedules were not created for any reason
Returns:
the list of schedules created in the backend
"""
from prefect.client.schemas.actions import DeploymentScheduleCreate
from prefect.client.schemas.objects import DeploymentSchedule
deployment_schedule_create = [
DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])
for schedule in schedules
]
json = [
deployment_schedule_create.model_dump(mode="json")
for deployment_schedule_create in deployment_schedule_create
]
response = await self.request(
"POST",
"/deployments/{id}/schedules",
path_params={"id": deployment_id},
json=json,
)
return DeploymentSchedule.model_validate_list(response.json())
async def read_deployment_schedules(
self,
deployment_id: UUID,
) -> list["DeploymentSchedule"]:
"""
Query the Prefect API for a deployment's schedules.
Args:
deployment_id: the deployment ID
Returns:
a list of DeploymentSchedule model representations of the deployment schedules
"""
from prefect.client.schemas.objects import DeploymentSchedule
try:
response = await self.request(
"GET",
"/deployments/{id}/schedules",
path_params={"id": deployment_id},
)
except HTTPStatusError as e:
if e.response.status_code == 404:
raise ObjectNotFound(http_exc=e) from e
else:
raise
return DeploymentSchedule.model_validate_list(response.json())
async def update_deployment_schedule(
self,
deployment_id: UUID,
schedule_id: UUID,
active: bool | None = None,
schedule: "SCHEDULE_TYPES | None" = None,
) -> None:
"""
Update a deployment schedule by ID.
Args:
deployment_id: the deployment ID
schedule_id: the deployment schedule ID of interest
active: whether or not the schedule should be active
schedule: the cron, rrule, or interval schedule this deployment schedule should use
"""
from prefect.client.schemas.actions import DeploymentScheduleUpdate
kwargs: dict[str, Any] = {}
if active is not None:
kwargs["active"] = active
if schedule is not None:
kwargs["schedule"] = schedule
deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)
json = deployment_schedule_update.model_dump(mode="json", exclude_unset=True)
try:
await self.request(
"PATCH",
"/deployments/{id}/schedules/{schedule_id}",
path_params={"id": deployment_id, "schedule_id": schedule_id},
json=json,
)
except HTTPStatusError as e:
if e.response.status_code == 404:
raise ObjectNotFound(http_exc=e) from e
else:
raise
async def delete_deployment_schedule(
self,
deployment_id: UUID,
schedule_id: UUID,
) -> None:
"""
Delete a deployment schedule.
Args:
deployment_id: the deployment ID
schedule_id: the ID of the deployment schedule to delete.
Raises:
RequestError: if the schedules were not deleted for any reason
"""
try:
await self.request(
"DELETE",
"/deployments/{id}/schedules/{schedule_id}",
path_params={"id": deployment_id, "schedule_id": schedule_id},
)
except HTTPStatusError as e:
if e.response.status_code == 404:
raise ObjectNotFound(http_exc=e) from e
else:
raise
async def get_scheduled_flow_runs_for_deployments(
self,
deployment_ids: list[UUID],
scheduled_before: "datetime.datetime | None" = None,
limit: int | None = None,
) -> list["FlowRun"]:
from prefect.client.schemas.objects import FlowRun
body: dict[str, Any] = dict(deployment_ids=[str(id) for id in deployment_ids])
if scheduled_before:
body["scheduled_before"] = str(scheduled_before)
if limit:
body["limit"] = limit
response = await self.request(
"POST",
"/deployments/get_scheduled_flow_runs",
json=body,
)
return FlowRun.model_validate_list(response.json())
async def create_flow_run_from_deployment(
self,
deployment_id: UUID,
*,
parameters: dict[str, Any] | None = None,
context: dict[str, Any] | None = None,
state: State[Any] | None = None,
name: str | None = None,
tags: Iterable[str] | None = None,
idempotency_key: str | None = None,
parent_task_run_id: UUID | None = None,
work_queue_name: str | None = None,
job_variables: dict[str, Any] | None = None,
labels: "KeyValueLabelsField | None" = None,
) -> "FlowRun":
"""
Create a flow run for a deployment.
Args:
deployment_id: The deployment ID to create the flow run from
parameters: Parameter overrides for this flow run. Merged with the
deployment defaults
context: Optional run context data
state: The initial state for the run. If not provided, defaults to
`Scheduled` for now. Should always be a `Scheduled` type.
name: An optional name for the flow run. If not provided, the server will
generate a name.
tags: An optional iterable of tags to apply to the flow run; these tags
are merged with the deployment's tags.
idempotency_key: Optional idempotency key for creation of the flow run.
If the key matches the key of an existing flow run, the existing run will
be returned instead of creating a new one.
parent_task_run_id: if a subflow run is being created, the placeholder task
run identifier in the parent flow
work_queue_name: An optional work queue name to add this run to. If not provided,
will default to the deployment's set work queue. If one is provided that does not
exist, a new work queue will be created within the deployment's work pool.
job_variables: Optional variables that will be supplied to the flow run job.
Raises:
RequestError: if the Prefect API does not successfully create a run for any reason
Returns:
The flow run model
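        Example (a sketch; assumes ``client`` is an instance of this client and
        ``deployment_id`` refers to an existing deployment):
            flow_run = await client.create_flow_run_from_deployment(
                deployment_id,
                parameters={"x": 1},
            )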
"""
from prefect.client.schemas.actions import DeploymentFlowRunCreate
from prefect.client.schemas.objects import FlowRun
from prefect.states import Scheduled, to_state_create
parameters = parameters or {}
context = context or {}
state = state or Scheduled()
tags = tags or []
labels = labels or {}
flow_run_create = DeploymentFlowRunCreate(
parameters=parameters,
context=context,
state=to_state_create(state),
tags=list(tags),
name=name,
idempotency_key=idempotency_key,
parent_task_run_id=parent_task_run_id,
job_variables=job_variables,
labels=labels,
)
# done separately to avoid including this field in payloads sent to older API versions
if work_queue_name:
flow_run_create.work_queue_name = work_queue_name
response = await self.request(
"POST",
"/deployments/{id}/create_flow_run",
path_params={"id": deployment_id},
json=flow_run_create.model_dump(mode="json", exclude_unset=True),
)
return FlowRun.model_validate(response.json())
async def create_deployment_branch(
self,
deployment_id: UUID,
branch: str,
options: "DeploymentBranchingOptions | None" = None,
overrides: "DeploymentUpdate | None" = None,
) -> UUID:
from prefect.client.schemas.actions import DeploymentBranch
from prefect.client.schemas.objects import DeploymentBranchingOptions
response = await self.request(
"POST",
"/deployments/{id}/branch",
path_params={"id": deployment_id},
json=DeploymentBranch(
branch=branch,
options=options or DeploymentBranchingOptions(),
overrides=overrides,
).model_dump(mode="json", exclude_unset=True),
)
return UUID(response.json().get("id"))
|
DeploymentAsyncClient
|
python
|
matplotlib__matplotlib
|
lib/matplotlib/dviread.py
|
{
"start": 32196,
"end": 36855
}
|
class ____(Dvi):
r"""
A virtual font (\*.vf file) containing subroutines for dvi files.
Parameters
----------
filename : str or path-like
Notes
-----
The virtual font format is a derivative of dvi:
http://mirrors.ctan.org/info/knuth/virtual-fonts
This class reuses some of the machinery of `Dvi`
but replaces the `!_read` loop and dispatch mechanism.
Examples
--------
::
vf = Vf(filename)
glyph = vf[code]
glyph.text, glyph.boxes, glyph.width
"""
def __init__(self, filename):
super().__init__(filename, 0)
try:
self._first_font = None
self._chars = {}
self._read()
finally:
self.close()
def __getitem__(self, code):
return self._chars[code]
def _read(self):
"""
Read one page from the file. Return True if successful,
False if there were no more pages.
"""
packet_char = packet_ends = None
packet_len = packet_width = None
while True:
byte = self.file.read(1)[0]
# If we are in a packet, execute the dvi instructions
if self.state is _dvistate.inpage:
byte_at = self.file.tell()-1
if byte_at == packet_ends:
self._finalize_packet(packet_char, packet_width)
packet_len = packet_char = packet_width = None
# fall through to out-of-packet code
elif byte_at > packet_ends:
raise ValueError("Packet length mismatch in vf file")
else:
if byte in (139, 140) or byte >= 243:
raise ValueError(f"Inappropriate opcode {byte} in vf file")
Dvi._dtable[byte](self, byte)
continue
# We are outside a packet
if byte < 242: # a short packet (length given by byte)
packet_len = byte
packet_char = self._read_arg(1)
packet_width = self._read_arg(3)
packet_ends = self._init_packet(byte)
self.state = _dvistate.inpage
elif byte == 242: # a long packet
packet_len = self._read_arg(4)
packet_char = self._read_arg(4)
packet_width = self._read_arg(4)
self._init_packet(packet_len)
elif 243 <= byte <= 246:
k = self._read_arg(byte - 242, byte == 246)
c = self._read_arg(4)
s = self._read_arg(4)
d = self._read_arg(4)
a = self._read_arg(1)
l = self._read_arg(1)
self._fnt_def_real(k, c, s, d, a, l)
if self._first_font is None:
self._first_font = k
elif byte == 247: # preamble
i = self._read_arg(1)
k = self._read_arg(1)
x = self.file.read(k)
cs = self._read_arg(4)
ds = self._read_arg(4)
self._pre(i, x, cs, ds)
elif byte == 248: # postamble (just some number of 248s)
break
else:
raise ValueError(f"Unknown vf opcode {byte}")
def _init_packet(self, pl):
if self.state != _dvistate.outer:
raise ValueError("Misplaced packet in vf file")
self.h = self.v = self.w = self.x = self.y = self.z = 0
self.stack = []
self.text = []
self.boxes = []
self.f = self._first_font
self._missing_font = None
return self.file.tell() + pl
def _finalize_packet(self, packet_char, packet_width):
if not self._missing_font: # Otherwise we don't have full glyph definition.
self._chars[packet_char] = Page(
text=self.text, boxes=self.boxes, width=packet_width,
height=None, descent=None)
self.state = _dvistate.outer
def _pre(self, i, x, cs, ds):
if self.state is not _dvistate.pre:
raise ValueError("pre command in middle of vf file")
if i != 202:
raise ValueError(f"Unknown vf format {i}")
if len(x):
_log.debug('vf file comment: %s', x)
self.state = _dvistate.outer
# cs = checksum, ds = design size
def _mul1220(num1, num2):
"""Multiply two numbers in 12.20 fixed point format."""
# Separated into a function because >> has surprising precedence
return (num1*num2) >> 20
@dataclasses.dataclass(frozen=True, kw_only=True)
|
Vf
|
python
|
apache__airflow
|
providers/cncf/kubernetes/src/airflow/providers/cncf/kubernetes/kube_config.py
|
{
"start": 1010,
"end": 6191
}
|
class ____:
"""Configuration for Kubernetes."""
core_section = "core"
kubernetes_section = "kubernetes_executor"
logging_section = "logging"
def __init__(self):
configuration_dict = conf.as_dict(display_sensitive=True)
self.core_configuration = configuration_dict[self.core_section]
self.airflow_home = AIRFLOW_HOME
self.dags_folder = conf.get(self.core_section, "dags_folder")
self.parallelism = conf.getint(self.core_section, "parallelism")
self.pod_template_file = conf.get(self.kubernetes_section, "pod_template_file", fallback=None)
self.delete_worker_pods = conf.getboolean(self.kubernetes_section, "delete_worker_pods")
self.delete_worker_pods_on_failure = conf.getboolean(
self.kubernetes_section, "delete_worker_pods_on_failure"
)
self.worker_pod_pending_fatal_container_state_reasons = []
if conf.get(self.kubernetes_section, "worker_pod_pending_fatal_container_state_reasons", fallback=""):
self.worker_pod_pending_fatal_container_state_reasons = [
r.strip()
for r in conf.get(
self.kubernetes_section, "worker_pod_pending_fatal_container_state_reasons"
).split(",")
]
self.worker_pods_creation_batch_size = conf.getint(
self.kubernetes_section, "worker_pods_creation_batch_size"
)
self.worker_container_repository = conf.get(self.kubernetes_section, "worker_container_repository")
if self.worker_container_repository:
warnings.warn(
"Configuration 'worker_container_repository' is deprecated. "
"Use 'pod_template_file' to specify the container image repository instead.",
AirflowProviderDeprecationWarning,
stacklevel=2,
)
self.worker_container_tag = conf.get(self.kubernetes_section, "worker_container_tag")
if self.worker_container_tag:
warnings.warn(
"Configuration 'worker_container_tag' is deprecated. "
"Use 'pod_template_file' to specify the container image tag instead.",
AirflowProviderDeprecationWarning,
stacklevel=2,
)
if self.worker_container_repository and self.worker_container_tag:
self.kube_image = f"{self.worker_container_repository}:{self.worker_container_tag}"
else:
self.kube_image = None
# The Kubernetes Namespace in which the Scheduler and Webserver reside. Note
        # that if your cluster has RBAC enabled, your scheduler may need service
        # account permissions to
# create, watch, get, and delete pods in this namespace.
self.kube_namespace = conf.get(self.kubernetes_section, "namespace")
if self.kube_namespace and self.kube_namespace != "default":
warnings.warn(
"Configuration 'namespace' is deprecated. "
"Use 'pod_template_file' to specify the namespace instead.",
AirflowProviderDeprecationWarning,
stacklevel=2,
)
self.multi_namespace_mode = conf.getboolean(self.kubernetes_section, "multi_namespace_mode")
if self.multi_namespace_mode and conf.get(
self.kubernetes_section, "multi_namespace_mode_namespace_list"
):
self.multi_namespace_mode_namespace_list = conf.get(
self.kubernetes_section, "multi_namespace_mode_namespace_list"
).split(",")
else:
self.multi_namespace_mode_namespace_list = None
# The Kubernetes Namespace in which pods will be created by the executor. Note
        # that if your cluster has RBAC enabled, your workers may need service
        # account permissions to
# interact with cluster components.
self.executor_namespace = conf.get(self.kubernetes_section, "namespace")
self.kube_client_request_args = conf.getjson(
self.kubernetes_section, "kube_client_request_args", fallback={}
)
if not isinstance(self.kube_client_request_args, dict):
raise AirflowConfigException(
f"[{self.kubernetes_section}] 'kube_client_request_args' expected a JSON dict, got "
+ type(self.kube_client_request_args).__name__
)
if self.kube_client_request_args:
if "_request_timeout" in self.kube_client_request_args and isinstance(
self.kube_client_request_args["_request_timeout"], list
):
self.kube_client_request_args["_request_timeout"] = tuple(
self.kube_client_request_args["_request_timeout"]
)
self.delete_option_kwargs = conf.getjson(self.kubernetes_section, "delete_option_kwargs", fallback={})
if not isinstance(self.delete_option_kwargs, dict):
raise AirflowConfigException(
f"[{self.kubernetes_section}] 'delete_option_kwargs' expected a JSON dict, got "
+ type(self.delete_option_kwargs).__name__
)
|
KubeConfig
|
python
|
huggingface__transformers
|
src/transformers/models/maskformer/convert_maskformer_original_pytorch_checkpoint_to_pytorch.py
|
{
"start": 5696,
"end": 6450
}
|
class ____:
def __call__(self, original_config: object) -> MaskFormerImageProcessor:
model = original_config.MODEL
model_input = original_config.INPUT
dataset_catalog = MetadataCatalog.get(original_config.DATASETS.TEST[0])
return MaskFormerImageProcessor(
image_mean=(torch.tensor(model.PIXEL_MEAN) / 255).tolist(),
image_std=(torch.tensor(model.PIXEL_STD) / 255).tolist(),
size=model_input.MIN_SIZE_TEST,
max_size=model_input.MAX_SIZE_TEST,
num_labels=model.SEM_SEG_HEAD.NUM_CLASSES,
ignore_index=dataset_catalog.ignore_label,
size_divisibility=32, # 32 is required by swin
)
|
OriginalMaskFormerConfigToImageProcessorConverter
|
python
|
dagster-io__dagster
|
python_modules/dagster/dagster/_grpc/server.py
|
{
"start": 4772,
"end": 4848
}
|
class ____(TypedDict):
current_request_count: Optional[int]
|
GrpcApiMetrics
|
python
|
squidfunk__mkdocs-material
|
material/plugins/projects/structure/__init__.py
|
{
"start": 10548,
"end": 10645
}
|
class ____(Link):
# Indicate that the link points to a project
is_project = True
|
ProjectLink
|
python
|
pandas-dev__pandas
|
pandas/io/parsers/base_parser.py
|
{
"start": 1611,
"end": 32900
}
|
class ____:
class BadLineHandleMethod(Enum):
ERROR = 0
WARN = 1
SKIP = 2
_implicit_index: bool
_first_chunk: bool
keep_default_na: bool
dayfirst: bool
cache_dates: bool
usecols_dtype: str | None
def __init__(self, kwds) -> None:
self._implicit_index = False
self.names = kwds.get("names")
self.orig_names: Sequence[Hashable] | None = None
self.index_col = kwds.get("index_col", None)
self.unnamed_cols: set = set()
self.index_names: Sequence[Hashable] | None = None
self.col_names: Sequence[Hashable] | None = None
parse_dates = kwds.pop("parse_dates", False)
if parse_dates is None or lib.is_bool(parse_dates):
parse_dates = bool(parse_dates)
elif not isinstance(parse_dates, list):
raise TypeError(
"Only booleans and lists are accepted for the 'parse_dates' parameter"
)
self.parse_dates: bool | list = parse_dates
self.date_parser = kwds.pop("date_parser", lib.no_default)
self.date_format = kwds.pop("date_format", None)
self.dayfirst = kwds.pop("dayfirst", False)
self.na_values = kwds.get("na_values")
self.na_fvalues = kwds.get("na_fvalues")
self.na_filter = kwds.get("na_filter", False)
self.keep_default_na = kwds.get("keep_default_na", True)
self.dtype = copy(kwds.get("dtype", None))
self.converters = kwds.get("converters")
self.dtype_backend = kwds.get("dtype_backend")
self.true_values = kwds.get("true_values")
self.false_values = kwds.get("false_values")
self.cache_dates = kwds.pop("cache_dates", True)
# validate header options for mi
self.header = kwds.get("header")
if is_list_like(self.header, allow_sets=False):
if kwds.get("usecols"):
raise ValueError(
"cannot specify usecols when specifying a multi-index header"
)
if kwds.get("names"):
raise ValueError(
"cannot specify names when specifying a multi-index header"
)
# validate index_col that only contains integers
if self.index_col is not None:
# In this case we can pin down index_col as list[int]
if is_integer(self.index_col):
self.index_col = [self.index_col]
elif not (
is_list_like(self.index_col, allow_sets=False)
and all(map(is_integer, self.index_col))
):
raise ValueError(
"index_col must only contain integers of column positions "
"when specifying a multi-index header"
)
else:
self.index_col = list(self.index_col)
self._first_chunk = True
self.usecols, self.usecols_dtype = _validate_usecols_arg(kwds["usecols"])
# Fallback to error to pass a sketchy test(test_override_set_noconvert_columns)
# Normally, this arg would get pre-processed earlier on
self.on_bad_lines = kwds.get("on_bad_lines", self.BadLineHandleMethod.ERROR)
def close(self) -> None:
pass
@final
def _should_parse_dates(self, i: int) -> bool:
if isinstance(self.parse_dates, bool):
return self.parse_dates
else:
if self.index_names is not None:
name = self.index_names[i]
else:
name = None
j = i if self.index_col is None else self.index_col[i]
return (j in self.parse_dates) or (
name is not None and name in self.parse_dates
)
@final
def _extract_multi_indexer_columns(
self,
header,
index_names: Sequence[Hashable] | None,
passed_names: bool = False,
) -> tuple[
Sequence[Hashable], Sequence[Hashable] | None, Sequence[Hashable] | None, bool
]:
"""
Extract and return the names, index_names, col_names if the column
names are a MultiIndex.
Parameters
----------
header: list of lists
The header rows
index_names: list, optional
The names of the future index
passed_names: bool, default False
            A flag specifying if names were passed
"""
if len(header) < 2:
return header[0], index_names, None, passed_names
# the names are the tuples of the header that are not the index cols
# 0 is the name of the index, assuming index_col is a list of column
# numbers
ic = self.index_col
if ic is None:
ic = []
if not isinstance(ic, (list, tuple, np.ndarray)):
ic = [ic]
sic = set(ic)
# clean the index_names
index_names = header.pop(-1)
index_names, _, _ = self._clean_index_names(index_names, self.index_col)
# extract the columns
field_count = len(header[0])
# check if header lengths are equal
if not all(len(header_iter) == field_count for header_iter in header[1:]):
raise ParserError("Header rows must have an equal number of columns.")
def extract(r):
return tuple(r[i] for i in range(field_count) if i not in sic)
columns = list(zip(*(extract(r) for r in header), strict=True))
names = columns.copy()
for single_ic in sorted(ic):
names.insert(single_ic, single_ic)
# Clean the column names (if we have an index_col).
if ic:
col_names = [
r[ic[0]]
if ((r[ic[0]] is not None) and r[ic[0]] not in self.unnamed_cols)
else None
for r in header
]
else:
col_names = [None] * len(header)
passed_names = True
return names, index_names, col_names, passed_names
@final
def _maybe_make_multi_index_columns(
self,
columns: SequenceT,
col_names: Sequence[Hashable] | None = None,
) -> SequenceT | MultiIndex:
# possibly create a column mi here
if is_potential_multi_index(columns):
columns_mi = cast("Sequence[tuple[Hashable, ...]]", columns)
return MultiIndex.from_tuples(columns_mi, names=col_names)
return columns
@final
def _make_index(
self, alldata, columns, indexnamerow: list[Scalar] | None = None
) -> tuple[Index | None, Sequence[Hashable] | MultiIndex]:
index: Index | None
if isinstance(self.index_col, list) and len(self.index_col):
to_remove = []
indexes = []
for idx in self.index_col:
if isinstance(idx, str):
raise ValueError(f"Index {idx} invalid")
to_remove.append(idx)
indexes.append(alldata[idx])
# remove index items from content and columns, don't pop in
# loop
for i in sorted(to_remove, reverse=True):
alldata.pop(i)
if not self._implicit_index:
columns.pop(i)
index = self._agg_index(indexes)
# add names for the index
if indexnamerow:
coffset = len(indexnamerow) - len(columns)
index = index.set_names(indexnamerow[:coffset])
else:
index = None
# maybe create a mi on the columns
columns = self._maybe_make_multi_index_columns(columns, self.col_names)
return index, columns
@final
def _clean_mapping(self, mapping):
"""converts col numbers to names"""
if not isinstance(mapping, dict):
return mapping
clean = {}
# for mypy
assert self.orig_names is not None
for col, v in mapping.items():
if isinstance(col, int) and col not in self.orig_names:
col = self.orig_names[col]
clean[col] = v
if isinstance(mapping, defaultdict):
remaining_cols = set(self.orig_names) - set(clean.keys())
clean.update({col: mapping[col] for col in remaining_cols})
return clean
@final
def _agg_index(self, index) -> Index:
arrays = []
converters = self._clean_mapping(self.converters)
clean_dtypes = self._clean_mapping(self.dtype)
if self.index_names is not None:
names: Iterable = self.index_names
zip_strict = True
else:
names = itertools.cycle([None])
zip_strict = False
for i, (arr, name) in enumerate(zip(index, names, strict=zip_strict)):
if self._should_parse_dates(i):
arr = date_converter(
arr,
col=self.index_names[i] if self.index_names is not None else None,
dayfirst=self.dayfirst,
cache_dates=self.cache_dates,
date_format=self.date_format,
)
if self.na_filter:
col_na_values = self.na_values
col_na_fvalues = self.na_fvalues
else:
col_na_values = set()
col_na_fvalues = set()
if isinstance(self.na_values, dict):
assert self.index_names is not None
col_name = self.index_names[i]
if col_name is not None:
col_na_values, col_na_fvalues = get_na_values(
col_name, self.na_values, self.na_fvalues, self.keep_default_na
)
else:
col_na_values, col_na_fvalues = set(), set()
cast_type = None
index_converter = False
if self.index_names is not None:
if isinstance(clean_dtypes, dict):
cast_type = clean_dtypes.get(self.index_names[i], None)
if isinstance(converters, dict):
index_converter = converters.get(self.index_names[i]) is not None
try_num_bool = not (
(cast_type and is_string_dtype(cast_type)) or index_converter
)
arr, _ = self._infer_types(
arr, col_na_values | col_na_fvalues, cast_type is None, try_num_bool
)
if cast_type is not None:
# Don't perform RangeIndex inference
idx = Index(arr, name=name, dtype=cast_type)
else:
idx = ensure_index_from_sequences([arr], [name])
arrays.append(idx)
if len(arrays) == 1:
return arrays[0]
else:
return MultiIndex.from_arrays(arrays)
@final
def _set_noconvert_dtype_columns(
self, col_indices: list[int], names: Sequence[Hashable]
) -> set[int]:
"""
Set the columns that should not undergo dtype conversions.
Currently, any column that is involved with date parsing will not
undergo such conversions. If usecols is specified, the positions of the columns
not to cast are relative to usecols, not to all columns.
Parameters
----------
col_indices: The indices specifying the order and positions of the columns
names: The column names whose order corresponds to the order
of col_indices
Returns
-------
A set of integers containing the positions of the columns not to convert.
"""
usecols: list[int] | list[str] | None
noconvert_columns = set()
if self.usecols_dtype == "integer":
# A set of integers will be converted to a list in
# the correct order every single time.
usecols = sorted(self.usecols)
elif callable(self.usecols) or self.usecols_dtype not in ("empty", None):
# The names attribute should have the correct columns
# in the proper order for indexing with parse_dates.
usecols = col_indices
else:
# Usecols is empty.
usecols = None
def _set(x) -> int:
if usecols is not None and is_integer(x):
x = usecols[x]
if not is_integer(x):
x = col_indices[names.index(x)]
return x
if isinstance(self.parse_dates, list):
validate_parse_dates_presence(self.parse_dates, names)
for val in self.parse_dates:
noconvert_columns.add(_set(val))
elif self.parse_dates:
if isinstance(self.index_col, list):
for k in self.index_col:
noconvert_columns.add(_set(k))
elif self.index_col is not None:
noconvert_columns.add(_set(self.index_col))
return noconvert_columns
@final
def _infer_types(
self, values, na_values, no_dtype_specified, try_num_bool: bool = True
) -> tuple[ArrayLike, int]:
"""
Infer types of values, possibly casting
Parameters
----------
values : ndarray
na_values : set
no_dtype_specified: bool
True when no explicit dtype was specified for the values, so type inference may be applied
try_num_bool : bool, default True
try to cast values to numeric (first preference) or boolean
Returns
-------
converted : ndarray or ExtensionArray
na_count : int
"""
na_count = 0
if issubclass(values.dtype.type, (np.number, np.bool_)):
# If our array has numeric dtype, we don't have to check for strings in isin
na_values = np.array([val for val in na_values if not isinstance(val, str)])
mask = algorithms.isin(values, na_values)
na_count = mask.astype("uint8", copy=False).sum()
if na_count > 0:
if is_integer_dtype(values):
values = values.astype(np.float64)
np.putmask(values, mask, np.nan)
return values, na_count
dtype_backend = self.dtype_backend
non_default_dtype_backend = (
no_dtype_specified and dtype_backend is not lib.no_default
)
result: ArrayLike
if try_num_bool and is_object_dtype(values.dtype):
# exclude e.g DatetimeIndex here
try:
result, result_mask = lib.maybe_convert_numeric(
values,
na_values,
False,
convert_to_masked_nullable=non_default_dtype_backend, # type: ignore[arg-type]
)
except (ValueError, TypeError):
# e.g. encountering datetime string gets ValueError
# TypeError can be raised in floatify
na_count = parsers.sanitize_objects(values, na_values)
result = values
else:
if non_default_dtype_backend:
if result_mask is None:
result_mask = np.zeros(result.shape, dtype=np.bool_)
if result_mask.all():
result = IntegerArray(
np.ones(result_mask.shape, dtype=np.int64), result_mask
)
elif is_integer_dtype(result):
result = IntegerArray(result, result_mask)
elif is_bool_dtype(result):
result = BooleanArray(result, result_mask)
elif is_float_dtype(result):
result = FloatingArray(result, result_mask)
na_count = result_mask.sum()
else:
na_count = isna(result).sum()
else:
result = values
if values.dtype == np.object_:
na_count = parsers.sanitize_objects(values, na_values)
if (
result.dtype == np.object_
and try_num_bool
and (len(result) == 0 or not isinstance(result[0], int))
):
result, bool_mask = libops.maybe_convert_bool(
np.asarray(values),
true_values=self.true_values,
false_values=self.false_values,
convert_to_masked_nullable=non_default_dtype_backend, # type: ignore[arg-type]
)
if result.dtype == np.bool_ and non_default_dtype_backend:
if bool_mask is None:
bool_mask = np.zeros(result.shape, dtype=np.bool_)
result = BooleanArray(result, bool_mask)
elif result.dtype == np.object_ and non_default_dtype_backend:
# read_excel sends array of datetime objects
if not lib.is_datetime_array(result, skipna=True):
dtype = StringDtype()
cls = dtype.construct_array_type()
result = cls._from_sequence(values, dtype=dtype)
if dtype_backend == "pyarrow":
pa = import_optional_dependency("pyarrow")
if isinstance(result, np.ndarray):
result = ArrowExtensionArray(pa.array(result, from_pandas=True))
elif isinstance(result, BaseMaskedArray):
if result._mask.all():
# We want an arrow null array here
result = ArrowExtensionArray(pa.array([None] * len(result)))
else:
result = ArrowExtensionArray(
pa.array(result._data, mask=result._mask)
)
else:
result = ArrowExtensionArray(
pa.array(result.to_numpy(), from_pandas=True)
)
return result, na_count
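# Note on the conversion above: object-dtype values are first offered to
# lib.maybe_convert_numeric, then (if still object) to maybe_convert_bool using
# the parser's true_values/false_values.  When a non-default dtype_backend is
# requested and no explicit dtype was given, results are wrapped in masked
# Integer/Boolean/Floating/String arrays, and for dtype_backend="pyarrow" they
# are additionally converted to ArrowExtensionArray.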
@overload
def _do_date_conversions(
self,
names: Index,
data: DataFrame,
) -> DataFrame: ...
@overload
def _do_date_conversions(
self,
names: Sequence[Hashable],
data: Mapping[Hashable, ArrayLike],
) -> Mapping[Hashable, ArrayLike]: ...
@final
def _do_date_conversions(
self,
names: Sequence[Hashable] | Index,
data: Mapping[Hashable, ArrayLike] | DataFrame,
) -> Mapping[Hashable, ArrayLike] | DataFrame:
if not isinstance(self.parse_dates, list):
return data
for colspec in self.parse_dates:
if isinstance(colspec, int) and colspec not in data:
colspec = names[colspec]
if (isinstance(self.index_col, list) and colspec in self.index_col) or (
isinstance(self.index_names, list) and colspec in self.index_names
):
continue
result = date_converter(
data[colspec],
col=colspec,
dayfirst=self.dayfirst,
cache_dates=self.cache_dates,
date_format=self.date_format,
)
# error: Unsupported target for indexed assignment
# ("Mapping[Hashable, ExtensionArray | ndarray[Any, Any]] | DataFrame")
data[colspec] = result # type: ignore[index]
return data
@final
def _check_data_length(
self,
columns: Sequence[Hashable],
data: Sequence[ArrayLike],
) -> None:
"""Checks if length of data is equal to length of column names.
One set of trailing commas is allowed. If self.index_col is not False,
a ParserError is raised earlier in parsing when the lengths do not match.
Parameters
----------
columns: list of column names
data: list of array-likes containing the data column-wise.
"""
if not self.index_col and len(columns) != len(data) and columns:
empty_str = is_object_dtype(data[-1]) and data[-1] == ""
# error: No overload variant of "__ror__" of "ndarray" matches
# argument type "ExtensionArray"
empty_str_or_na = empty_str | isna(data[-1]) # type: ignore[operator]
if len(columns) == len(data) - 1 and np.all(empty_str_or_na):
return
warnings.warn(
"Length of header or names does not match length of data. This leads "
"to a loss of data with index_col=False.",
ParserWarning,
stacklevel=find_stack_level(),
)
@final
def _validate_usecols_names(self, usecols: SequenceT, names: Sequence) -> SequenceT:
"""
Validates that all usecols are present in a given
list of names. If not, raise a ValueError that
shows what usecols are missing.
Parameters
----------
usecols : iterable of usecols
The columns to validate are present in names.
names : iterable of names
The column names to check against.
Returns
-------
usecols : iterable of usecols
The `usecols` parameter if the validation succeeds.
Raises
------
ValueError : Columns were missing. Error message will list them.
"""
missing = [c for c in usecols if c not in names]
if len(missing) > 0:
raise ValueError(
f"Usecols do not match columns, columns expected but not found: "
f"{missing}"
)
return usecols
@final
def _clean_index_names(self, columns, index_col) -> tuple[list | None, list, list]:
if not is_index_col(index_col):
return None, columns, index_col
columns = list(columns)
# In case of no rows and multiindex columns we have to set index_names to
# list of Nones GH#38292
if not columns:
return [None] * len(index_col), columns, index_col
cp_cols = list(columns)
index_names: list[str | int | None] = []
# don't mutate
index_col = list(index_col)
for i, c in enumerate(index_col):
if isinstance(c, str):
index_names.append(c)
for j, name in enumerate(cp_cols):
if name == c:
index_col[i] = j
columns.remove(name)
break
else:
name = cp_cols[c]
columns.remove(name)
index_names.append(name)
# Only clean index names that were placeholders.
for i, name in enumerate(index_names):
if isinstance(name, str) and name in self.unnamed_cols:
index_names[i] = None
return index_names, columns, index_col
@final
def _get_empty_meta(
self, columns: Sequence[HashableT], dtype: DtypeArg | None = None
) -> tuple[Index, list[HashableT], dict[HashableT, Series]]:
columns = list(columns)
index_col = self.index_col
index_names = self.index_names
# Convert `dtype` to a defaultdict of some kind.
# This will enable us to write `dtype[col_name]`
# without worrying about KeyError issues later on.
dtype_dict: defaultdict[Hashable, Any]
if not is_dict_like(dtype):
# if dtype == None, default will be object.
dtype_dict = defaultdict(lambda: dtype)
else:
dtype = cast(dict, dtype)
dtype_dict = defaultdict(
lambda: None,
{columns[k] if is_integer(k) else k: v for k, v in dtype.items()},
)
# Even though we have no data, the "index" of the empty DataFrame
# could for example still be an empty MultiIndex. Thus, we need to
# check whether we have any index columns specified, via either:
#
# 1) index_col (column indices)
# 2) index_names (column names)
#
# Both must be non-null to ensure a successful construction. Otherwise,
# we have to create a generic empty Index.
index: Index
if (index_col is None or index_col is False) or index_names is None:
index = default_index(0)
else:
# TODO: We could return default_index(0) if dtype_dict[name] is None
data = [
Index([], name=name, dtype=dtype_dict[name]) for name in index_names
]
if len(data) == 1:
index = data[0]
else:
index = MultiIndex.from_arrays(data)
index_col.sort()
for i, n in enumerate(index_col):
columns.pop(n - i)
col_dict = {
col_name: Series([], dtype=dtype_dict[col_name]) for col_name in columns
}
return index, columns, col_dict
def date_converter(
date_col,
col: Hashable,
dayfirst: bool = False,
cache_dates: bool = True,
date_format: dict[Hashable, str] | str | None = None,
):
if date_col.dtype.kind in "Mm":
return date_col
date_fmt = date_format.get(col) if isinstance(date_format, dict) else date_format
str_objs = lib.ensure_string_array(np.asarray(date_col))
try:
result = tools.to_datetime(
str_objs,
format=date_fmt,
utc=False,
dayfirst=dayfirst,
cache=cache_dates,
)
except (ValueError, TypeError):
# test_usecols_with_parse_dates4
# test_multi_index_parse_dates
return str_objs
if isinstance(result, DatetimeIndex):
arr = result.to_numpy()
arr.flags.writeable = True
return arr
return result._values
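# --- Illustrative sketch (not part of the original source) --------------------
# Hypothetical standalone usage of date_converter on a plain object array of
# date strings: parsable input comes back as a datetime64 ndarray, while
# unparsable input falls back to the original string values.  The column name,
# sample data and helper name below are made up for illustration only.
def _example_date_converter() -> None:
    import numpy as np
    raw = np.array(["2021-01-02", "2021-02-03"], dtype=object)
    converted = date_converter(raw, col="when", date_format="%Y-%m-%d")
    assert converted.dtype.kind == "M"  # parsed to datetime64
    garbage = np.array(["not-a-date"], dtype=object)
    fallback = date_converter(garbage, col="when")
    assert len(fallback) == 1  # the original string values are returned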
parser_defaults = {
"delimiter": None,
"escapechar": None,
"quotechar": '"',
"quoting": csv.QUOTE_MINIMAL,
"doublequote": True,
"skipinitialspace": False,
"lineterminator": None,
"header": "infer",
"index_col": None,
"names": None,
"skiprows": None,
"skipfooter": 0,
"nrows": None,
"na_values": None,
"keep_default_na": True,
"true_values": None,
"false_values": None,
"converters": None,
"dtype": None,
"cache_dates": True,
"thousands": None,
"comment": None,
"decimal": ".",
# 'engine': 'c',
"parse_dates": False,
"dayfirst": False,
"date_format": None,
"usecols": None,
# 'iterator': False,
"chunksize": None,
"encoding": None,
"compression": None,
"skip_blank_lines": True,
"encoding_errors": "strict",
"on_bad_lines": ParserBase.BadLineHandleMethod.ERROR,
"dtype_backend": lib.no_default,
}
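# --- Illustrative sketch (not part of the original source) --------------------
# parser_defaults is the baseline option set shared by the parser front ends.
# The helper below is hypothetical and only shows how user-supplied keyword
# arguments could be layered on top of these defaults; the real merging logic
# lives elsewhere in the read_csv machinery and may differ.
def _example_merge_with_defaults(user_kwds: dict) -> dict:
    merged = dict(parser_defaults)  # start from the documented defaults
    merged.update(user_kwds)  # user-provided options take precedence
    return merged
# e.g. _example_merge_with_defaults({"delimiter": ";", "header": 0})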
def get_na_values(col, na_values, na_fvalues, keep_default_na: bool):
"""
Get the NaN values for a given column.
Parameters
----------
col : str
The name of the column.
na_values : array-like, dict
The object listing the NaN values as strings.
na_fvalues : array-like, dict
The object listing the NaN values as floats.
keep_default_na : bool
If `na_values` is a dict, and the column is not mapped in the
dictionary, whether to return the default NaN values or the empty set.
Returns
-------
nan_tuple : A length-two tuple composed of
1) na_values : the string NaN values for that column.
2) na_fvalues : the float NaN values for that column.
"""
if isinstance(na_values, dict):
if col in na_values:
return na_values[col], na_fvalues[col]
else:
if keep_default_na:
return STR_NA_VALUES, set()
return set(), set()
else:
return na_values, na_fvalues
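# --- Illustrative sketch (not part of the original source) --------------------
# Hypothetical usage of get_na_values with a dict-style na_values specification;
# the helper name, column names and sentinel strings are made up.
def _example_get_na_values() -> None:
    na_values = {"price": {"n/a"}}
    na_fvalues = {"price": set()}
    # Column present in the dict: its own sentinels are returned.
    assert get_na_values("price", na_values, na_fvalues, keep_default_na=True) == (
        {"n/a"},
        set(),
    )
    # Column absent from the dict with keep_default_na=False: empty sets.
    assert get_na_values("qty", na_values, na_fvalues, keep_default_na=False) == (
        set(),
        set(),
    )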
def is_index_col(col) -> bool:
return col is not None and col is not False
def validate_parse_dates_presence(
parse_dates: bool | list, columns: Sequence[Hashable]
) -> set:
"""
Check if parse_dates are in columns.
If user has provided names for parse_dates, check if those columns
are available.
Parameters
----------
parse_dates : bool or list
The user's parse_dates specification, as passed to the parser.
columns : list
List of names of the dataframe.
Returns
-------
The set of column names that will be parsed as dates when a list
is given as the specification.
Raises
------
ValueError
If a column listed in parse_dates is not present in the dataframe.
"""
if not isinstance(parse_dates, list):
return set()
missing = set()
unique_cols = set()
for col in parse_dates:
if isinstance(col, str):
if col not in columns:
missing.add(col)
else:
unique_cols.add(col)
elif col in columns:
unique_cols.add(col)
else:
unique_cols.add(columns[col])
if missing:
missing_cols = ", ".join(sorted(missing))
raise ValueError(f"Missing column provided to 'parse_dates': '{missing_cols}'")
return unique_cols
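# --- Illustrative sketch (not part of the original source) --------------------
# Hypothetical usage of validate_parse_dates_presence: integer entries are
# resolved positionally against `columns`, and unknown string entries raise.
# The helper name and sample columns are made up.
def _example_validate_parse_dates_presence() -> None:
    cols = ["a", "b", "c"]
    assert validate_parse_dates_presence(["a", 2], cols) == {"a", "c"}
    try:
        validate_parse_dates_presence(["missing"], cols)
    except ValueError:
        pass  # expected: 'missing' is not among the columns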
def _validate_usecols_arg(usecols):
"""
Validate the 'usecols' parameter.
Checks whether or not the 'usecols' parameter contains all integers
(column selection by index), strings (column by name) or is a callable.
Raises a ValueError if that is not the case.
Parameters
----------
usecols : list-like, callable, or None
List of columns to use when parsing or a callable that can be used
to filter a list of table columns.
Returns
-------
usecols_tuple : tuple
A tuple of (verified_usecols, usecols_dtype).
'verified_usecols' is either a set if an array-like is passed in or
'usecols' if a callable or None is passed in.
'usecols_dtype' is the inferred dtype of 'usecols' if an array-like
is passed in or None if a callable or None is passed in.
"""
msg = (
"'usecols' must either be list-like of all strings, all unicode, "
"all integers or a callable."
)
if usecols is not None:
if callable(usecols):
return usecols, None
if not is_list_like(usecols):
# see gh-20529
#
# Ensure it is iterable container but not string.
raise ValueError(msg)
usecols_dtype = lib.infer_dtype(usecols, skipna=False)
if usecols_dtype not in ("empty", "integer", "string"):
raise ValueError(msg)
usecols = set(usecols)
return usecols, usecols_dtype
return usecols, None
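# --- Illustrative sketch (not part of the original source) --------------------
# Hypothetical usage of _validate_usecols_arg: list-likes become a set plus the
# inferred dtype, callables and None pass through, and mixed types are rejected.
# The helper name and sample values are made up.
def _example_validate_usecols_arg() -> None:
    assert _validate_usecols_arg([0, 1]) == ({0, 1}, "integer")
    assert _validate_usecols_arg(["a", "b"]) == ({"a", "b"}, "string")
    assert _validate_usecols_arg(None) == (None, None)
    def selector(name) -> bool:
        return str(name).startswith("a")
    assert _validate_usecols_arg(selector) == (selector, None)
    try:
        _validate_usecols_arg([0, "a"])
    except ValueError:
        pass  # expected: mixed integer/string usecols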
@overload
def evaluate_callable_usecols(
usecols: Callable[[Hashable], object],
names: Iterable[Hashable],
) -> set[int]: ...
@overload
def evaluate_callable_usecols(
usecols: SequenceT, names: Iterable[Hashable]
) -> SequenceT: ...
def evaluate_callable_usecols(
usecols: Callable[[Hashable], object] | SequenceT,
names: Iterable[Hashable],
) -> SequenceT | set[int]:
"""
Check whether or not the 'usecols' parameter
is a callable. If so, enumerates the 'names'
parameter and returns a set of indices for
each entry in 'names' that evaluates to True.
If not a callable, returns 'usecols'.
"""
if callable(usecols):
return {i for i, name in enumerate(names) if usecols(name)}
return usecols
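# --- Illustrative sketch (not part of the original source) --------------------
# Hypothetical usage of evaluate_callable_usecols: a callable is mapped over
# the column names and turned into a set of positional indices; anything else
# is returned untouched.  The helper name and sample names are made up.
def _example_evaluate_callable_usecols() -> None:
    names = ["id", "name", "value"]
    assert evaluate_callable_usecols(lambda c: str(c).endswith("e"), names) == {1, 2}
    assert evaluate_callable_usecols([0, 2], names) == [0, 2]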
|
ParserBase
|
python
|
huggingface__transformers
|
tests/models/sam2/test_image_processing_sam2.py
|
{
"start": 1083,
"end": 3516
}
|
class ____:
def __init__(
self,
parent,
batch_size=7,
num_channels=3,
image_size=18,
min_resolution=30,
max_resolution=400,
mask_size=None,
do_resize=True,
size=None,
do_normalize=True,
image_mean=[0.5, 0.5, 0.5],
image_std=[0.5, 0.5, 0.5],
):
size = size if size is not None else {"height": 20, "width": 20}
mask_size = mask_size if mask_size is not None else {"height": 12, "width": 12}
self.parent = parent
self.batch_size = batch_size
self.num_channels = num_channels
self.image_size = image_size
self.min_resolution = min_resolution
self.max_resolution = max_resolution
self.mask_size = mask_size
self.do_resize = do_resize
self.size = size
self.do_normalize = do_normalize
self.image_mean = image_mean
self.image_std = image_std
def prepare_image_processor_dict(self):
return {
"image_mean": self.image_mean,
"image_std": self.image_std,
"do_normalize": self.do_normalize,
"do_resize": self.do_resize,
"size": self.size,
"mask_size": self.mask_size,
}
def expected_output_image_shape(self, images):
return self.num_channels, self.size["height"], self.size["width"]
def prepare_image_inputs(self, equal_resolution=False, numpify=False, torchify=False):
return prepare_image_inputs(
batch_size=self.batch_size,
num_channels=self.num_channels,
min_resolution=self.min_resolution,
max_resolution=self.max_resolution,
equal_resolution=equal_resolution,
numpify=numpify,
torchify=torchify,
)
# Copied from transformers.tests.models.beit.test_image_processing_beit.prepare_semantic_single_inputs
def prepare_semantic_single_inputs():
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
example = ds[0]
return example["image"], example["map"]
# Copied from transformers.tests.models.beit.test_image_processing_beit.prepare_semantic_batch_inputs
def prepare_semantic_batch_inputs():
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
return list(ds["image"][:2]), list(ds["map"][:2])
@require_torch
@require_vision
|
Sam2ImageProcessingTester
|
python
|
vyperlang__vyper
|
vyper/semantics/types/function.py
|
{
"start": 1584,
"end": 27384
}
|
class ____(VyperType):
"""
Contract function type.
Functions compare false against all types and so cannot be assigned without
being called. Calls are validated by `fetch_call_return`, which checks the call
arguments against `positional_args` and `keyword_args` and returns `return_type`.
Attributes
----------
name : str
The name of the function.
positional_args: list[PositionalArg]
Positional args for this function
keyword_args: list[KeywordArg]
Keyword args for this function
return_type: Optional[VyperType]
Type of return value
function_visibility : FunctionVisibility
enum indicating the external visibility of a function.
state_mutability : StateMutability
enum indicating the authority a function has to mutate its own state.
nonreentrant : bool
Whether this function is marked `@nonreentrant` or not
"""
typeclass = "contract_function"
_is_callable = True
def __init__(
self,
name: str,
positional_args: list[PositionalArg],
keyword_args: list[KeywordArg],
return_type: Optional[VyperType],
function_visibility: FunctionVisibility,
state_mutability: StateMutability,
from_interface: bool = False,
nonreentrant: bool = False,
do_raw_return: bool = False,
ast_def: Optional[vy_ast.VyperNode] = None,
) -> None:
super().__init__()
self.name = name
self.positional_args = positional_args
self.keyword_args = keyword_args
self.return_type = return_type
self.visibility = function_visibility
self.mutability = state_mutability
self.nonreentrant = nonreentrant
self.do_raw_return = do_raw_return
self.from_interface = from_interface
# sanity check, nonreentrant used to be Optional[str]
assert isinstance(self.nonreentrant, bool)
self.ast_def = ast_def
self._analysed = False
# a list of internal functions this function calls.
# to be populated during module analysis.
self.called_functions: OrderedSet[ContractFunctionT] = OrderedSet()
# recursively reachable from this function
# to be populated during module analysis.
self.reachable_internal_functions: OrderedSet[ContractFunctionT] = OrderedSet()
# writes to variables from this function
self._variable_writes: OrderedSet[VarAccess] = OrderedSet()
# reads of variables from this function
self._variable_reads: OrderedSet[VarAccess] = OrderedSet()
# list of modules used (accessed state) by this function
self._used_modules: OrderedSet[ModuleInfo] = OrderedSet()
# to be populated during codegen
self._ir_info: Any = None
self._function_id: Optional[int] = None
@property
# API compatibility
def decl_node(self):
return self.ast_def
@property
def _id(self):
return self.name
def mark_analysed(self):
assert not self._analysed
self._analysed = True
@property
def analysed(self):
return self._analysed
def get_variable_reads(self):
return self._variable_reads
def get_variable_writes(self):
return self._variable_writes
def get_variable_accesses(self):
return self._variable_reads | self._variable_writes
def uses_state(self):
return (
self.nonreentrant
or uses_state(self.get_variable_accesses())
or any(f.nonreentrant for f in self.reachable_internal_functions)
)
def get_used_modules(self):
# _used_modules is populated during analysis
return self._used_modules
def mark_used_module(self, module_info):
self._used_modules.add(module_info)
def mark_variable_writes(self, var_infos):
self._variable_writes.update(var_infos)
def mark_variable_reads(self, var_infos):
self._variable_reads.update(var_infos)
@property
def modifiability(self):
return Modifiability.from_state_mutability(self.mutability)
@cached_property
def call_site_kwargs(self):
# special kwargs that are allowed in call site
return {
"gas": KwargSettings(UINT256_T, "gas"),
"value": KwargSettings(UINT256_T, 0),
"skip_contract_check": KwargSettings(BoolT(), False, require_literal=True),
"default_return_value": KwargSettings(self.return_type, None),
}
def __repr__(self):
arg_types = ",".join(repr(a) for a in self.argument_types)
return f"contract function {self.name}({arg_types})"
def __str__(self):
ret_sig = "" if not self.return_type else f" -> {self.return_type}"
args_sig = ",".join([str(t) for t in self.argument_types])
return f"def {self.name}({args_sig}){ret_sig}:"
@cached_property
def _pp_signature(self):
ret = ",".join(repr(arg.typ) for arg in self.arguments)
return f"{self.name}({ret})"
# override parent implementation. function type equality does not
# make too much sense.
def __eq__(self, other):
return self is other
def __hash__(self):
return hash(id(self))
@classmethod
def from_abi(cls, abi: dict) -> "ContractFunctionT":
"""
Generate a `ContractFunctionT` object from an ABI interface.
Arguments
---------
abi : dict
An object from a JSON ABI interface, representing a function.
Returns
-------
ContractFunctionT object.
"""
positional_args = []
for item in abi["inputs"]:
positional_args.append(PositionalArg(item["name"], type_from_abi(item)))
return_type = None
if len(abi["outputs"]) == 1:
return_type = type_from_abi(abi["outputs"][0])
elif len(abi["outputs"]) > 1:
return_type = TupleT(tuple(type_from_abi(i) for i in abi["outputs"]))
return cls(
abi["name"],
positional_args,
[],
return_type,
from_interface=True,
function_visibility=FunctionVisibility.EXTERNAL,
state_mutability=StateMutability.from_abi(abi),
)
@classmethod
def from_InterfaceDef(cls, funcdef: vy_ast.FunctionDef) -> "ContractFunctionT":
"""
Generate a `ContractFunctionT` object from a `FunctionDef` inside
of an `InterfaceDef`
Arguments
---------
funcdef: FunctionDef
Vyper ast node to generate the function definition from.
Returns
-------
ContractFunctionT
"""
# FunctionDef with stateMutability in body (Interface definitions)
body = funcdef.body
if (
len(body) == 1
and isinstance(body[0], vy_ast.Expr)
and isinstance(body[0].value, vy_ast.Name)
and StateMutability.is_valid_value(body[0].value.id)
):
# Interfaces are always public
function_visibility = FunctionVisibility.EXTERNAL
state_mutability = StateMutability(body[0].value.id)
# handle errors
elif len(body) == 1 and body[0].get("value.id") in ("constant", "modifying"):
if body[0].value.id == "constant":
expected = "view or pure"
else:
expected = "payable or nonpayable"
raise StructureException(f"State mutability should be set to {expected}", body[0])
else:
raise StructureException("Body must only contain state mutability label", body[0])
if funcdef.name == "__init__":
raise FunctionDeclarationException("Constructors cannot appear in interfaces", funcdef)
if funcdef.name == "__default__":
raise FunctionDeclarationException(
"Default functions cannot appear in interfaces", funcdef
)
positional_args, keyword_args = _parse_args(funcdef)
return_type = _parse_return_type(funcdef)
return cls(
funcdef.name,
positional_args,
keyword_args,
return_type,
function_visibility,
state_mutability,
from_interface=True,
nonreentrant=False,
ast_def=funcdef,
)
@classmethod
def from_vyi(cls, funcdef: vy_ast.FunctionDef) -> "ContractFunctionT":
"""
Generate a `ContractFunctionT` object from a `FunctionDef` inside
of an interface (`.vyi`) file
Arguments
---------
funcdef: FunctionDef
Vyper ast node to generate the function definition from.
Returns
-------
ContractFunctionT
"""
decorators = _parse_decorators(funcdef)
if decorators.nonreentrant_node is not None:
raise FunctionDeclarationException(
"`@nonreentrant` not allowed in interfaces", decorators.nonreentrant_node
)
# guaranteed by parse_decorators and disallowing nonreentrant pragma
assert decorators.reentrant_node is None # sanity check
if decorators.raw_return_node is not None:
raise FunctionDeclarationException(
"`@raw_return` not allowed in interfaces", decorators.raw_return_node
)
# it's redundant to specify visibility in vyi - always should be external
function_visibility = decorators.visibility
if function_visibility is None:
function_visibility = FunctionVisibility.EXTERNAL
if function_visibility != FunctionVisibility.EXTERNAL:
raise FunctionDeclarationException(
"Interface functions can only be marked as `@external`", decorators.visibility_node
)
if funcdef.name == "__init__":
raise FunctionDeclarationException("Constructors cannot appear in interfaces", funcdef)
if funcdef.name == "__default__":
raise FunctionDeclarationException(
"Default functions cannot appear in interfaces", funcdef
)
positional_args, keyword_args = _parse_args(funcdef)
return_type = _parse_return_type(funcdef)
body = funcdef.body
if len(body) != 1 or not (
isinstance(body[0], vy_ast.Expr) and isinstance(body[0].value, vy_ast.Ellipsis)
):
raise FunctionDeclarationException(
"function body in an interface can only be `...`!", funcdef
)
return cls(
funcdef.name,
positional_args,
keyword_args,
return_type,
function_visibility,
decorators.state_mutability,
from_interface=True,
nonreentrant=False,
ast_def=funcdef,
)
@classmethod
def from_FunctionDef(cls, funcdef: vy_ast.FunctionDef) -> "ContractFunctionT":
"""
Generate a `ContractFunctionT` object from a `FunctionDef` node.
Arguments
---------
funcdef: FunctionDef
Vyper ast node to generate the function definition from.
Returns
-------
ContractFunctionT
"""
decorators = _parse_decorators(funcdef)
# it's redundant to specify internal visibility - it's implied by not being external
function_visibility = decorators.visibility
if function_visibility is None:
function_visibility = FunctionVisibility.INTERNAL
positional_args, keyword_args = _parse_args(funcdef)
return_type = _parse_return_type(funcdef)
# validate default and init functions
if funcdef.name == "__default__":
if function_visibility != FunctionVisibility.EXTERNAL:
raise FunctionDeclarationException(
"Default function must be marked as `@external`", funcdef
)
if funcdef.args.args:
raise FunctionDeclarationException(
"Default function may not receive any arguments", funcdef.args.args[0]
)
if function_visibility == FunctionVisibility.DEPLOY and funcdef.name != "__init__":
raise FunctionDeclarationException(
"Only constructors can be marked as `@deploy`!", funcdef
)
if funcdef.name == "__init__":
if decorators.state_mutability in (StateMutability.PURE, StateMutability.VIEW):
raise FunctionDeclarationException(
"Constructor cannot be marked as `@pure` or `@view`", funcdef
)
if function_visibility != FunctionVisibility.DEPLOY:
raise FunctionDeclarationException(
"Constructor must be marked as `@deploy`", funcdef
)
if return_type is not None:
raise FunctionDeclarationException(
"Constructor may not have a return type", funcdef.returns
)
# call arguments
if funcdef.args.defaults:
raise FunctionDeclarationException(
"Constructor may not use default arguments", funcdef.args.defaults[0]
)
if decorators.nonreentrant_node is not None:
msg = "`@nonreentrant` decorator disallowed on `__init__`"
raise FunctionDeclarationException(msg, decorators.nonreentrant_node)
if decorators.raw_return:
if function_visibility != FunctionVisibility.EXTERNAL:
raise StructureException(
"@raw_return is only allowed on external functions!", decorators.raw_return_node
)
if not isinstance(return_type, BytesT):
raise StructureException(
"@raw_return is only allowed in conjunction with `Bytes[...]` return type!",
decorators.raw_return_node,
)
# compute nonreentrancy
settings = funcdef.module_node.settings
nonreentrant: bool
is_external = function_visibility == FunctionVisibility.EXTERNAL
is_pure = decorators.state_mutability == StateMutability.PURE
if is_pure:
# pure functions are always nonreentrant
nonreentrant = False
elif settings.nonreentrancy_by_default:
if not is_external:
# default, internal functions default to reentrant even if
# the pragma is set
nonreentrant = decorators.nonreentrant_node is not None
else:
# validation -- cannot use `@nonreentrant` on external
# functions if nonreentrant pragma is set
if decorators.nonreentrant_node is not None:
raise StructureException(
"used @nonreentrant decorator, but `#pragma nonreentrancy` is set"
)
nonreentrant = decorators.reentrant_node is None
else:
nonreentrant = decorators.nonreentrant_node is not None
return cls(
funcdef.name,
positional_args,
keyword_args,
return_type,
function_visibility,
decorators.state_mutability,
from_interface=False,
nonreentrant=nonreentrant,
do_raw_return=decorators.raw_return,
ast_def=funcdef,
)
def set_reentrancy_key_position(self, position: VarOffset) -> None:
if hasattr(self, "reentrancy_key_position"):
raise CompilerPanic("Position was already assigned")
if not self.nonreentrant:
raise CompilerPanic(f"Not nonreentrant {self}", self.ast_def)
self.reentrancy_key_position = position
@classmethod
def getter_from_VariableDecl(cls, node: vy_ast.VariableDecl) -> "ContractFunctionT":
"""
Generate a `ContractFunctionT` object from an `VariableDecl` node.
Used to create getter functions for public variables.
Arguments
---------
node : VariableDecl
Vyper ast node to generate the function definition from.
Returns
-------
ContractFunctionT
"""
if not node.is_public:
raise CompilerPanic("getter generated for non-public function")
# calculated by caller (ModuleAnalyzer.visit_VariableDecl)
type_ = node.target._metadata["varinfo"].typ
arguments, return_type = type_.getter_signature
args = []
for i, item in enumerate(arguments):
args.append(PositionalArg(f"arg{i}", item))
return cls(
node.target.id,
args,
[],
return_type,
from_interface=False,
function_visibility=FunctionVisibility.EXTERNAL,
state_mutability=StateMutability.VIEW,
ast_def=node,
)
@property
# convenience property for compare_signature, as it would
# appear in a public interface
def _iface_sig(self) -> Tuple[Tuple, Optional[VyperType]]:
return tuple(self.argument_types), self.return_type
def implements(self, other: "ContractFunctionT") -> bool:
"""
Checks if this function implements the signature of another
function.
Used when determining if an interface has been implemented. This method
should not be directly implemented by any inherited classes.
"""
if not self.is_external: # pragma: nocover
raise CompilerPanic("unreachable!")
assert self.visibility == other.visibility
arguments, return_type = self._iface_sig
other_arguments, other_return_type = other._iface_sig
if len(arguments) != len(other_arguments):
return False
for atyp, btyp in zip(arguments, other_arguments):
if not atyp.compare_type(btyp):
return False
if return_type and not return_type.compare_type(other_return_type): # type: ignore
return False
return self.mutability == other.mutability
@cached_property
def default_values(self) -> dict[str, vy_ast.VyperNode]:
return {arg.name: arg.default_value for arg in self.keyword_args}
# for backwards compatibility
@cached_property
def arguments(self) -> list[_FunctionArg]:
return self.positional_args + self.keyword_args # type: ignore
@cached_property
def argument_types(self) -> list[VyperType]:
return [arg.typ for arg in self.arguments]
@property
def n_positional_args(self) -> int:
return len(self.positional_args)
@property
def n_keyword_args(self) -> int:
return len(self.keyword_args)
@cached_property
def n_total_args(self) -> int:
return self.n_positional_args + self.n_keyword_args
@property
def is_external(self) -> bool:
return self.visibility == FunctionVisibility.EXTERNAL
@property
def is_internal(self) -> bool:
return self.visibility == FunctionVisibility.INTERNAL
@property
def is_deploy(self) -> bool:
return self.visibility == FunctionVisibility.DEPLOY
@property
def is_constructor(self) -> bool:
return self.name == "__init__"
@property
def is_mutable(self) -> bool:
return self.mutability > StateMutability.VIEW
@property
def is_payable(self) -> bool:
return self.mutability == StateMutability.PAYABLE
@property
def is_fallback(self) -> bool:
return self.name == "__default__"
@property
def method_ids(self) -> Dict[str, int]:
"""
Dict of `{signature: four byte selector}` for this function.
* For functions without default arguments the dict contains one item.
* For functions with default arguments, there is one key for each
function signature.
"""
arg_types = [i.canonical_abi_type for i in self.argument_types]
if self.n_keyword_args == 0:
return _generate_method_id(self.name, arg_types)
method_ids = {}
for i in range(self.n_positional_args, self.n_total_args + 1):
method_ids.update(_generate_method_id(self.name, arg_types[:i]))
return method_ids
# add more information to type exceptions generated inside calls
def _enhance_call_exception(self, e, ast_node=None):
if ast_node is not None:
e.append_annotation(ast_node)
elif e.hint is None:
# try really hard to give the user a signature
e.hint = self._pp_signature
return e
def fetch_call_return(self, node: vy_ast.Call) -> Optional[VyperType]:
# mypy hint - right now, the only way a ContractFunctionT can be
# called is via `Attribute`, e.x. self.foo() or library.bar()
assert isinstance(node.func, vy_ast.Attribute)
parent_t = get_exact_type_from_node(node.func.value)
if not parent_t._supports_external_calls and self.visibility == FunctionVisibility.EXTERNAL:
raise CallViolation("Cannot call external functions via 'self' or via library", node)
kwarg_keys = []
# for external calls, include gas and value as optional kwargs
if not self.is_internal:
kwarg_keys += list(self.call_site_kwargs.keys())
try:
validate_call_args(node, (self.n_positional_args, self.n_total_args), kwarg_keys)
except ArgumentException as e:
raise self._enhance_call_exception(e, self.ast_def)
if self.mutability < StateMutability.PAYABLE:
kwarg_node = next((k for k in node.keywords if k.arg == "value"), None)
if kwarg_node is not None:
raise CallViolation("Cannot send ether to nonpayable function", kwarg_node)
for arg, expected in zip(node.args, self.arguments):
try:
validate_expected_type(arg, expected.typ)
except TypeMismatch as e:
raise self._enhance_call_exception(e, expected.ast_source or self.ast_def)
# TODO this should be moved to validate_call_args
for kwarg in node.keywords:
if kwarg.arg in self.call_site_kwargs:
kwarg_settings = self.call_site_kwargs[kwarg.arg]
if kwarg.arg == "default_return_value" and self.return_type is None:
raise ArgumentException(
f"`{kwarg.arg}=` specified but {self.name}() does not return anything",
kwarg.value,
)
validate_expected_type(kwarg.value, kwarg_settings.typ)
if kwarg_settings.require_literal:
if not isinstance(kwarg.value, vy_ast.Constant):
raise InvalidType(
f"{kwarg.arg} must be literal {kwarg_settings.typ}", kwarg.value
)
else:
# Generate the modified source code string with the kwarg removed
# as a suggestion to the user.
kwarg_pattern = rf"{kwarg.arg}\s*=\s*{re.escape(kwarg.value.node_source_code)}"
modified_line = re.sub(
kwarg_pattern, kwarg.value.node_source_code, node.node_source_code
)
msg = "Usage of kwarg in Vyper is restricted to "
msg += ", ".join([f"{k}=" for k in self.call_site_kwargs.keys()])
hint = None
if modified_line != node.node_source_code:
hint = f"Try removing the kwarg: `{modified_line}`"
raise ArgumentException(msg, kwarg, hint=hint)
return self.return_type
def to_toplevel_abi_dict(self):
abi_dict: Dict = {"stateMutability": self.mutability.value}
if self.is_fallback:
abi_dict["type"] = "fallback"
return [abi_dict]
if self.is_constructor:
abi_dict["type"] = "constructor"
else:
abi_dict["type"] = "function"
abi_dict["name"] = self.name
abi_dict["inputs"] = [arg.typ.to_abi_arg(name=arg.name) for arg in self.arguments]
typ = self.return_type
if typ is None:
abi_dict["outputs"] = []
elif isinstance(typ, TupleT) and len(typ.member_types) > 1:
abi_dict["outputs"] = [t.to_abi_arg() for t in typ.member_types]
else:
abi_dict["outputs"] = [typ.to_abi_arg()]
if self.n_keyword_args > 0:
# for functions with default args, return a dict for each possible arg count
result = []
for i in range(self.n_positional_args, self.n_total_args + 1):
result.append(abi_dict.copy())
result[-1]["inputs"] = result[-1]["inputs"][:i]
return result
else:
return [abi_dict]
# calculate the abi signature for a given set of kwargs
def abi_signature_for_kwargs(self, kwargs: list[KeywordArg]) -> str:
args = self.positional_args + kwargs # type: ignore
return self.name + "(" + ",".join([arg.typ.abi_type.selector_name() for arg in args]) + ")"
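# For example (illustrative, not from the original source), a function declared
# as `def foo(a: uint256, b: uint256 = 0)` yields the ABI signatures
# "foo(uint256)" when no kwargs are included and "foo(uint256,uint256)" when the
# `b` kwarg is included.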
def _parse_return_type(funcdef: vy_ast.FunctionDef) -> Optional[VyperType]:
# return types
if funcdef.returns is None:
return None
# note: consider, for cleanliness, adding DataLocation.RETURN_VALUE
return type_from_annotation(funcdef.returns, DataLocation.MEMORY)
@dataclass
|
ContractFunctionT
|
python
|
getsentry__sentry
|
src/sentry/plugins/bases/issue2.py
|
{
"start": 1368,
"end": 2167
}
|
class ____(GroupEndpoint):
publish_status = {
"GET": ApiPublishStatus.PRIVATE,
"POST": ApiPublishStatus.PRIVATE,
}
view: Callable[[Request, Group], HttpResponseBase] = None # type: ignore[assignment] # populated by .as_view
def _handle(self, request: Request, group, *args, **kwargs):
GroupMeta.objects.populate_cache([group])
return self.view(request, group, *args, **kwargs)
def get(self, request: Request, group, *args, **kwargs) -> Response:
return self._handle(request, group, *args, **kwargs)
def post(self, request: Request, group, *args, **kwargs) -> Response:
return self._handle(request, group, *args, **kwargs)
def respond(self, *args, **kwargs):
return Response(*args, **kwargs)
|
PluginGroupEndpoint
|
python
|
google__jax
|
tests/shard_alike_test.py
|
{
"start": 1235,
"end": 7944
}
|
class ____(jtu.JaxTestCase):
def setUp(self):
super().setUp()
def test_basic(self):
mesh = jtu.create_mesh((2, 2), ('x', 'y'))
np_inp = np.arange(16).reshape(8, 2)
s = NamedSharding(mesh, P('x', 'y'))
inp = jax.device_put(np_inp, s)
@jax.jit
def f(x):
y = x * x
z = y * 2
_, z = shard_alike(x, z)
return z * 2
out = f(inp)
self.assertEqual(out.sharding, s)
self.assertArraysEqual(out, np_inp * np_inp * 4)
def test_output_sharded_alike_input(self):
mesh = jtu.create_mesh((2, 2), ('x', 'y'))
np_inp = np.arange(16).reshape(8, 2)
s = NamedSharding(mesh, P('x', 'y'))
inp = jax.device_put(np_inp, s)
@jax.jit
def f(x):
y = x * 2
return shard_alike(x, y)[1]
out = f(inp)
self.assertEqual(out.sharding, s)
self.assertArraysEqual(out, np_inp * 2)
def test_arange_shard_alike_jit(self):
mesh = jtu.create_mesh((2, 2), ('x', 'y'))
np_inp = np.arange(16).reshape(8, 2)
s = NamedSharding(mesh, P('x', 'y'))
inp = jax.device_put(np_inp, s)
@jax.jit
def f(x):
y = jnp.arange(16).reshape(8, 2)
return shard_alike(x, y)[1]
out = f(inp)
self.assertEqual(out.sharding, s)
self.assertArraysEqual(out, np_inp)
def test_different_shapes(self):
mesh = jtu.create_mesh((2, 1), ('x', 'y'))
np_inp = np.arange(16).reshape(8, 2)
s = NamedSharding(mesh, P('x',))
inp = jax.device_put(np_inp, s)
@jax.jit
def f(x):
y = x @ x.T
return shard_alike(x, y)[1]
with self.assertRaisesRegex(
ValueError, 'The leaves shapes of `x` and `y` should match'):
f(inp)
def test_double_shard_alike(self):
mesh = jtu.create_mesh((2, 2), ('x', 'y'))
np_inp = np.arange(16).reshape(8, 2)
s = NamedSharding(mesh, P('x', 'y'))
inp = jax.device_put(np_inp, s)
@jax.jit
def f(x):
y = x * 2
_, y = shard_alike(x, y)
z = y @ y.T
a = jnp.arange(64).reshape(8, 8)
return shard_alike(z, a)
out1, out2 = f(inp)
self.assertEqual(out1.sharding, NamedSharding(mesh, P('x')))
self.assertEqual(out2.sharding, NamedSharding(mesh, P('x')))
def test_shard_like_eager(self):
mesh = jtu.create_mesh((4, 1), ('x', 'y'))
np_inp = np.arange(16).reshape(8, 2)
s = NamedSharding(mesh, P('x', 'y'))
inp = jax.device_put(np_inp, s)
def f(x):
y = jnp.arange(16).reshape(8, 2)
return shard_alike(x, y)[1]
out = f(inp)
self.assertTrue(out.sharding.is_equivalent_to(s, out.ndim))
self.assertArraysEqual(out, np_inp)
def test_shard_map(self):
mesh = jtu.create_mesh((4, 2), ('x', 'y'))
np_inp = np.arange(16).reshape(8, 2)
s = NamedSharding(mesh, P('x', 'y'))
inp = jax.device_put(np_inp, s)
def g(x):
return jax.lax.psum(x, 'x')
@jax.jit
def f(x):
y = x @ x.T
s_out = shard_map(g, mesh=mesh, in_specs=P('x', 'y'),
out_specs=P(None, 'y'))(y)
z = s_out.T @ s_out
return shard_alike(y, z)
out1, out2 = f(inp)
# From the options P('x', 'y') and P('y'), shard_alike chooses the better one.
self.assertEqual(out1.sharding, s)
self.assertEqual(out2.sharding, s)
def test_grad(self):
mesh = jtu.create_mesh((4,), ('x',))
np_inp = np.arange(8.)
s = NamedSharding(mesh, P('x'))
inp = jax.device_put(np_inp, s)
def _cb(s):
self.assertFalse(s.is_fully_replicated)
self.assertLen(s.device_set, mesh.size)
self.assertEqual(s.shard_shape(np_inp.shape), (2,))
def f(x):
y = jnp.arange(8.)
x_, y_ = shard_alike(x, y)
jax.debug.inspect_array_sharding(y_, callback=_cb)
z = x_ + y_
return jnp.sum(z)
jax.grad(f)(inp) # doesn't crash
jax.grad(jax.jit(f))(inp) # doesn't crash
def test_shard_input_as_output(self):
mesh = jtu.create_mesh((4,), ('x',))
np_inp = np.arange(8.)
s = NamedSharding(mesh, P('x'))
@jax.jit
def f(x):
y = jax.lax.with_sharding_constraint(x, s)
z = y * 2
return shard_alike(x, z)
with jtu.count_pjit_cpp_cache_miss() as count:
f(np_inp)
out1, out2 = f(np_inp)
self.assertEqual(count(), 1)
self.assertTrue(s.is_equivalent_to(out1.sharding, np_inp.ndim))
self.assertTrue(s.is_equivalent_to(out2.sharding, np_inp.ndim))
@jax.jit
def g(x):
z = x * 2
return shard_alike(x, z)
arr = jax.device_put(np_inp, s)
with jtu.count_pjit_cpp_cache_miss() as count:
g(arr)
out3, out4 = g(arr)
self.assertEqual(count(), 1)
self.assertEqual(out3.sharding, s)
self.assertEqual(out4.sharding, s)
def test_shard_alike_inputs(self):
mesh = jtu.create_mesh((2,), ('x',))
np_inp = np.arange(8.)
s = NamedSharding(mesh, P('x'))
arr = jax.device_put(np_inp, s)
def f(x, y):
return shard_alike(x, y)
eager_out1, eager_out2 = f(arr, np_inp)
self.assertEqual(eager_out1.sharding, s)
self.assertEqual(eager_out2.sharding, s)
out1, out2 = jax.jit(f)(arr, np_inp)
self.assertEqual(out1.sharding, s)
self.assertEqual(out2.sharding, s)
def test_vmap_one_mapped(self):
mesh = jtu.create_mesh((2, 2), ('x', 'y'))
np_inp = np.arange(2)
s = NamedSharding(mesh, P('y'))
inp = jax.device_put(np_inp, s)
@jax.jit
def f(x):
def _shard_slice_like_arg(s):
sharded_s, _ = shard_alike(s, x)
return sharded_s
replicated_x = jnp.tile(x, [8, 1]) # shape == (8, 2)
return jax.vmap(_shard_slice_like_arg, in_axes=0)(replicated_x)
out = f(inp)
self.assertEqual(out.sharding, NamedSharding(mesh, P(None, 'y')))
self.assertArraysEqual(out, np.tile(np_inp, [8, 1]))
def test_vmap_both_mapped(self):
mesh = jtu.create_mesh((2, 2), ('x', 'y'))
np_inp = np.arange(16).reshape(8, 2)
s = NamedSharding(mesh, P('x', 'y'))
inp1 = jax.device_put(np_inp, s)
np_inp2 = np.arange(16).reshape(2, 8)
inp2 = jax.device_put(np_inp2, NamedSharding(mesh, P('y', 'x')))
@jax.jit
def f(x, y):
return jax.vmap(shard_alike, in_axes=(0, 1))(x, y)
out1, out2 = f(inp1, inp2)
self.assertEqual(out1.sharding, s)
self.assertEqual(out2.sharding, s)
self.assertArraysEqual(out1, np_inp)
self.assertArraysEqual(out2, np_inp2.T)
def test_sharding_preserverd_single_device(self):
mesh = jax.sharding.Mesh([jax.devices()[0]], "x")
s = NamedSharding(mesh, P("x"))
x = jax.device_put(np.arange(8), s)
_, y = shard_alike(x, jnp.arange(8))
self.assertTrue(y.sharding.is_equivalent_to(s, y.ndim))
if __name__ == '__main__':
absltest.main(testLoader=jtu.JaxTestLoader())
|
ShardAlikeTest
|
python
|
getsentry__sentry
|
src/sentry/discover/models.py
|
{
"start": 1873,
"end": 2282
}
|
class ____(Model):
__relocation_scope__ = RelocationScope.Excluded
project = FlexibleForeignKey("sentry.Project")
discover_saved_query = FlexibleForeignKey("discover.DiscoverSavedQuery")
class Meta:
app_label = "discover"
db_table = "sentry_discoversavedqueryproject"
unique_together = (("project", "discover_saved_query"),)
@region_silo_model
|
DiscoverSavedQueryProject
|
python
|
airbytehq__airbyte
|
airbyte-ci/connectors/metadata_service/lib/metadata_service/spec_cache.py
|
{
"start": 2282,
"end": 4317
}
|
class ____:
def __init__(self, bucket_name: str = PROD_SPEC_CACHE_BUCKET_NAME):
self.client = storage.Client.create_anonymous_client()
self.bucket = self.client.bucket(bucket_name)
self.cached_specs = self.get_all_cached_specs()
def get_all_cached_specs(self) -> List[CachedSpec]:
"""Returns a list of all the specs in the spec cache bucket."""
blobs = self.bucket.list_blobs(prefix=CACHE_FOLDER)
return [get_docker_info_from_spec_cache_path(blob.name) for blob in blobs if blob.name.endswith(".json")]
def _find_spec_cache(self, docker_repository: str, docker_image_tag: str, registry: Registries) -> CachedSpec:
"""Returns the spec cache path for a given docker repository and tag."""
# find the spec cache path for the given docker repository and tag
for cached_spec in self.cached_specs:
if (
cached_spec.docker_repository == docker_repository
and cached_spec.registry == registry
and cached_spec.docker_image_tag == docker_image_tag
):
return cached_spec
return None
def find_spec_cache_with_fallback(self, docker_repository: str, docker_image_tag: str, registry_str: str) -> CachedSpec:
"""Returns the spec cache path for a given docker repository and tag and fallback to OSS if none found"""
registry = Registries(registry_str)
# if the registry is cloud try to return the cloud spec first
if registry == Registries.CLOUD:
spec_cache = self._find_spec_cache(docker_repository, docker_image_tag, registry)
if spec_cache:
return spec_cache
# fallback to OSS
return self._find_spec_cache(docker_repository, docker_image_tag, Registries.OSS)
def download_spec(self, spec: CachedSpec) -> dict:
"""Downloads the spec from the spec cache bucket."""
return json.loads(self.bucket.blob(spec.spec_cache_path).download_as_string())
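# Illustrative usage of the spec cache class defined above (hypothetical
# repository, tag and registry values; not part of the original source):
#   cache = ____()  # anonymous GCS client against the prod spec cache bucket
#   spec = cache.find_spec_cache_with_fallback("airbyte/source-faker", "0.1.0", "cloud")
#   spec_dict = cache.download_spec(spec) if spec else None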
|
SpecCache
|
python
|
kamyu104__LeetCode-Solutions
|
Python/maximum-strength-of-k-disjoint-subarrays.py
|
{
"start": 583,
"end": 1033
}
|
class ____(object):
def maximumStrength(self, nums, k):
"""
:type nums: List[int]
:type k: int
:rtype: int
"""
dp = [[float("-inf")]*(len(nums)+1) for _ in xrange(k+1)]
dp[0] = [0]*(len(nums)+1)
for i in xrange(k):
for j in xrange(len(nums)):
dp[i+1][j+1] = max(dp[i+1][j], dp[i][j])+nums[j]*(k-i)*(1 if i%2 == 0 else -1)
return max(dp[-1])
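# Descriptive note (not part of the original solution): nums[j]*(k-i) with the
# alternating sign is the strength weight of placing element j in the (i+1)-th
# chosen subarray, i.e. strength = sum over 1-based i of
# (-1)**(i+1) * sum_i * (k - i + 1).  For the classic example,
# maximumStrength([1, 2, 3, -1, 2], 3) is expected to return 22
# ((1+2+3)*3 - (-1)*2 + 2*1).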
|
Solution2
|
python
|
realpython__materials
|
crud-operations/crud_fastapi.py
|
{
"start": 404,
"end": 2200
}
|
class ____(BaseModel):
model_config = ConfigDict(from_attributes=True)
id: int
name: str
def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
@app.post("/birds/", response_model=BirdResponse)
def create_bird(bird: BirdCreate, db: Session = Depends(get_db)):
new_bird = Bird(name=bird.name)
db.add(new_bird)
db.commit()
db.refresh(new_bird)
return new_bird
@app.get("/birds/", response_model=list[BirdResponse])
def read_birds(db: Session = Depends(get_db)):
birds = db.execute(select(Bird)).scalars().all()
return birds
@app.get("/birds/{bird_id}", response_model=BirdResponse)
def read_bird(bird_id: int, db: Session = Depends(get_db)):
query = select(Bird).where(Bird.id == bird_id)
found_bird = db.execute(query).scalar_one()
if found_bird is None:
raise HTTPException(status_code=404, detail="Bird not found")
return found_bird
@app.put("/birds/{bird_id}", response_model=BirdResponse)
def update_bird(bird_id: int, bird: BirdUpdate, db: Session = Depends(get_db)):
query = select(Bird).where(Bird.id == bird_id)
found_bird = db.execute(query).scalar_one()
if found_bird is None:
raise HTTPException(status_code=404, detail="Bird not found")
found_bird.name = bird.name
db.commit()
db.refresh(found_bird)
return found_bird
@app.delete("/birds/{bird_id}", response_model=dict)
def delete_bird(bird_id: int, db: Session = Depends(get_db)):
query = select(Bird).where(Bird.id == bird_id)
found_bird = db.execute(query).scalar_one()
if found_bird is None:
raise HTTPException(status_code=404, detail="Bird not found")
db.delete(found_bird)
db.commit()
return {"message": "Bird deleted successfully"}
|
BirdResponse
|
python
|
doocs__leetcode
|
solution/0000-0099/0044.Wildcard Matching/Solution.py
|
{
"start": 0,
"end": 474
}
|
class ____:
def isMatch(self, s: str, p: str) -> bool:
@cache
def dfs(i: int, j: int) -> bool:
if i >= len(s):
return j >= len(p) or (p[j] == "*" and dfs(i, j + 1))
if j >= len(p):
return False
if p[j] == "*":
return dfs(i + 1, j) or dfs(i + 1, j + 1) or dfs(i, j + 1)
return (p[j] == "?" or s[i] == p[j]) and dfs(i + 1, j + 1)
return dfs(0, 0)
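# Expected behaviour (descriptive note, not part of the original solution) for
# the classic wildcard-matching cases, where '?' matches exactly one character
# and '*' matches any sequence of characters, including the empty one:
#   isMatch("aa", "a")       -> False
#   isMatch("aa", "*")       -> True
#   isMatch("cb", "?a")      -> False
#   isMatch("adceb", "*a*b") -> True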
|
Solution
|
python
|
pytorch__pytorch
|
test/functorch/test_aotdispatch.py
|
{
"start": 267307,
"end": 318765
}
|
class ____(AOTTestCase):
def test_aot_module_simplified(self):
class MockModule(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.linear = torch.nn.Linear(20, 30)
def forward(self, x, y):
return (self.linear(x) + y,)
mod = MockModule()
mod.zero_grad()
x = torch.randn(128, 20, requires_grad=True)
y = torch.randn(128, 30, requires_grad=True)
inputs = [x, y]
cloned_inputs = [x.detach().clone().requires_grad_(True) for x in inputs]
ref = mod(*inputs)
ref[0].sum().backward()
compiled_f = aot_module_simplified(mod, cloned_inputs, nop)
mod.zero_grad()
res = compiled_f(*cloned_inputs)
res[0].sum().backward()
assert torch.allclose(ref[0], res[0])
assert torch.allclose(inputs[0].grad, cloned_inputs[0].grad)
assert torch.allclose(inputs[1].grad, cloned_inputs[1].grad)
def test_aot_module_simplified_dynamic(self):
class MockModule(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.linear = torch.nn.Linear(20, 30)
def forward(self, x, y):
return (self.linear(x) + y,)
mod = MockModule()
shape_env = ShapeEnv()
fake_mode = FakeTensorMode(shape_env=shape_env)
x = torch.randn(128, 20, requires_grad=True)
y = torch.randn(128, 30, requires_grad=True)
inputs = [x, y]
fake_inputs = [fake_mode.from_tensor(x) for x in inputs]
compiled_f = aot_module_simplified(mod, fake_inputs, nop)
ref = mod(*inputs)
ref[0].sum().backward()
cloned_inputs = [x.detach().clone().requires_grad_(True) for x in inputs]
res = compiled_f(*cloned_inputs)
res[0].sum().backward()
self.assertExpectedInline(
shape_env.format_guards(),
"""\
- Eq(s49, 20)
- Eq(s70, 30)""",
)
assert torch.allclose(ref[0], res[0])
assert torch.allclose(inputs[0].grad, cloned_inputs[0].grad)
assert torch.allclose(inputs[1].grad, cloned_inputs[1].grad)
# https://github.com/pytorch/pytorch/issues/105327
def test_lift_fresh_copy_in_graph(self):
class MyMod(torch.nn.Module):
def forward(self, x):
_tensor_constant0 = torch.tensor([1])
lift_fresh_copy = torch.ops.aten.lift_fresh_copy.default(
_tensor_constant0
)
y = x.mul(lift_fresh_copy)
return (y,)
mod = MyMod()
shape_env = ShapeEnv()
fake_mode = FakeTensorMode(shape_env=shape_env)
x = torch.ones(4, requires_grad=True)
inputs = [x]
fake_inputs = [fake_mode.from_tensor(x) for x in inputs]
compiled_f = aot_module_simplified(mod, fake_inputs, nop)
out_ref = mod(x)
out_test = compiled_f(x)
self.assertEqual(out_ref[0].detach(), out_test[0].detach())
def test_inference_python_dispatcher(self):
# Extracted from unet
class MockModule(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.upsample = torch.nn.Upsample(
scale_factor=2, mode="bilinear", align_corners=True
)
def forward(self, x):
return (self.upsample(x),)
mod = MockModule()
shape_env = ShapeEnv()
fake_mode = FakeTensorMode(shape_env=shape_env)
x = torch.randn(2, 512, 40, 59) # NB: must not require grad
inputs = [x]
fake_inputs = [fake_mode.from_tensor(x) for x in inputs]
aot_module_simplified(mod, fake_inputs, nop)
def test_aot_module_simplified_preserves_stack_trace(self):
class MockModule(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.linear = torch.nn.Linear(20, 30)
def forward(self, x, y):
z = self.linear(x)
z = z + y
z = z.relu()
return (z,)
tracer = torch.fx.Tracer()
tracer.record_stack_traces = True
graph = tracer.trace(MockModule())
mod = torch.fx.GraphModule(tracer.root, graph)
for node in mod.graph.nodes:
if node.op != "call_function":
continue
self.assertTrue(node.stack_trace is not None)
assert "test_aotdispatch.py" in node.stack_trace
def assert_compiler(gm: torch.fx.GraphModule, _):
for node in gm.graph.nodes:
if node.op == "output" or node.op == "placeholder":
continue
self.assertTrue(node.stack_trace is not None)
assert "test_aotdispatch.py" in node.stack_trace
return gm.forward # return a python callable
x = torch.randn(128, 20, requires_grad=True)
y = torch.randn(128, 30, requires_grad=True)
inputs = [x, y]
compiled_f = aot_module_simplified(
mod, inputs, fw_compiler=assert_compiler, bw_compiler=assert_compiler
)
res = compiled_f(*inputs)
res[0].sum().backward()
def test_aot_module_simplified_preserves_stack_trace_from_mutation(self):
class MockModule(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
def forward(self, x):
x_view = x[0]
x_view.mul_(2)
return (x + x,)
tracer = torch.fx.Tracer()
tracer.record_stack_traces = True
graph = tracer.trace(MockModule())
mod = torch.fx.GraphModule(tracer.root, graph)
for node in mod.graph.nodes:
if node.op != "call_function":
continue
self.assertTrue(node.stack_trace is not None)
assert "test_aotdispatch.py" in node.stack_trace
def assert_compiler(gm: torch.fx.GraphModule, _):
assert torch.ops.aten.copy_.default in [x.target for x in gm.graph.nodes]
for node in gm.graph.nodes:
if node.target == torch.ops.aten.copy_.default:
assert "stack_trace" in node.meta
assert "x_view.mul_(2)" in node.meta["stack_trace"]
return gm.forward # return a python callable
x = torch.randn(128, 20)
inputs = [x]
aot_module_simplified(
mod,
inputs,
fw_compiler=assert_compiler,
bw_compiler=assert_compiler,
keep_inference_input_mutations=True,
)
def test_aot_module_simplified_fake_tensor_gm_raises(self):
fake_mode = torch._subclasses.fake_tensor.FakeTensorMode()
real_x = torch.randn(4, requires_grad=True)
fake_x = fake_mode.from_tensor(real_x)
real_z = torch.randn(4)
fake_z = fake_mode.from_tensor(real_z)
class MockModule(torch.nn.Module):
def forward(self, x):
# Accessing a free variable fake tensor will look like a
# constant to make_fx, and result in the tensor being traced
# into the graph, which is an error condition. Make sure we
# report adequately in this case.
return (x + fake_z,)
with self.assertRaisesRegex(AssertionError, "Unexpected fake"):
aot_module_simplified(MockModule(), (fake_x,), nop)
def test_aot_test_subclasses_with_tensor_factories(self):
from torch.testing._internal.common_subclass import SubclassWithTensorFactory
inp = SubclassWithTensorFactory(torch.zeros(3, 5))
def fn(x):
return 2 * x
ref_out = fn(inp)
out = torch.compile(fn, backend="aot_eager", fullgraph=True)(inp)
self.assertEqual(ref_out, out)
    # The next several tests relate to this issue:
    # https://github.com/pytorch/pytorch/issues/134644
    # AOTD tries to predict tangents ahead of time for tracing.
    # The first strategy was to coerce traced_tangents and runtime_tangents to be contiguous(),
    # but for models running in channels_last memory format this adds extra contiguous() calls.
    # The fix is to predict the tangents' memory format from the outputs' memory format
    # and to coerce runtime tangents to that traced memory format.
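    # A minimal sketch (added for illustration; not part of the upstream test
    # suite) of the coercion idea described above. The helper name is
    # hypothetical; the real logic lives inside AOTDispatch.
    @staticmethod
    def _coerce_runtime_tangent_sketch(runtime_tangent, traced_example):
        # Match the runtime tangent's memory format to the traced tangent's
        # format instead of unconditionally calling contiguous().
        if traced_example.dim() == 4 and traced_example.is_contiguous(
            memory_format=torch.channels_last
        ):
            return runtime_tangent.contiguous(memory_format=torch.channels_last)
        return runtime_tangent.contiguous()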
def test_grads_no_force_contiguous_dense(self):
with GradsNoForceContiguousContextManager() as ctx:
class M(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.conv = torch.nn.Conv2d(3, 3, 3)
def forward(self, x, y, cont_inp):
z = y + 3
y.mul_(2)
r = self.conv(x)
r = torch.ops._test_aotdispatch_lib.log_tangents_memory_format(r)
return (
r,
r.transpose(0, 1),
z.view(-1),
z.transpose(0, 1),
cont_inp * 2,
)
m = M()
m.to(memory_format=torch.channels_last)
m.train()
def dense_inps():
return (
torch.randn(2, 3, 5, 5, requires_grad=True).to(
memory_format=torch.channels_last
),
torch.randn(3, 2, 1, 1, requires_grad=True).to(
memory_format=torch.channels_last
),
torch.randn(3, 2, 1, 1, requires_grad=True),
)
ref_inps = dense_inps()
ref_outs = m(*ref_inps)
ref_outs[0].sum().backward()
ctx.reset_counters()
inps = dense_inps()
outs = torch.compile(m, backend="inductor", fullgraph=True)(*inps)
outs[0].sum().backward()
self.assertEqual(ctx.d[torch.channels_last], 1)
self.assertEqual(ctx.d[torch.contiguous_format], 0)
def test_grads_no_force_contiguous_subclass(self):
with GradsNoForceContiguousContextManager() as ctx:
class M(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.conv = torch.nn.Conv2d(3, 3, 3)
def forward(self, x, y):
r = self.conv(x)
r = torch.ops._test_aotdispatch_lib.log_tangents_memory_format(r)
return r, y + 1
m = M()
m.to(memory_format=torch.channels_last)
m.train()
def inps_fn():
return (
TwoTensor(
torch.randn(2, 3, 5, 5, requires_grad=True).to(
memory_format=torch.channels_last
),
torch.randn(2, 3, 5, 5, requires_grad=True).to(
memory_format=torch.channels_last
),
),
torch.randn(3, 2, requires_grad=True).clone(),
)
ref_outs = m(*inps_fn())
ref_outs[0].sum().backward()
ctx.reset_counters()
mc = M()
mc.to(memory_format=torch.channels_last)
mc.train()
outs = torch.compile(mc, backend="aot_eager", fullgraph=True)(*inps_fn())
outs[0].sum().backward()
self.assertEqual(ctx.d[torch.channels_last], 2)
self.assertEqual(ctx.d[torch.contiguous_format], 0)
def test_grads_no_force_contiguous_nested_subclass(self):
with GradsNoForceContiguousContextManager() as ctx:
class M(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.conv = torch.nn.Conv2d(3, 3, 3)
def forward(self, x):
r = self.conv(x)
r = torch.ops._test_aotdispatch_lib.log_tangents_memory_format(r)
return r
m = M()
m.to(memory_format=torch.channels_last)
m.train()
def inps_fn(x):
return (
TwoTensor(
TwoTensor(x.clone(), x.clone()), TwoTensor(x.clone(), x.clone())
),
)
x = torch.randn(2, 3, 5, 5, requires_grad=True).to(
memory_format=torch.channels_last
)
ref_inps = inps_fn(x)
ref_outs = m(*ref_inps)
ref_outs[0].sum().backward()
ctx.reset_counters()
mc = M()
mc.to(memory_format=torch.channels_last)
mc.train()
x = torch.randn(2, 3, 5, 5, requires_grad=True).to(
memory_format=torch.channels_last
)
inps = inps_fn(x)
outs = torch.compile(mc, backend="aot_eager", fullgraph=True)(*inps)
outs[0].sum().backward()
self.assertEqual(ctx.d[torch.channels_last], 4)
self.assertEqual(ctx.d[torch.contiguous_format], 0)
def test_grads_no_force_contiguous_nested_tensor_tangent(self):
        # NestedTensor setattr could fail with AttributeError for attr "_min_seqlen_tensor".
        # This test verifies that the case is handled.
def fn(x):
return x.clone()
a = torch.randn(2, 3, requires_grad=True, dtype=torch.float64)
b = torch.randn(3, 3, requires_grad=True, dtype=torch.float64)
c = torch.randn(4, 3, requires_grad=True, dtype=torch.float64)
nt = torch.nested.as_nested_tensor([a, b, c], layout=torch.jagged)
out = torch.compile(fn, backend="aot_eager", fullgraph=True)(nt)
out_buffer = out.values()
ga, gb, gc = torch.autograd.grad(out_buffer.sum(), (a, b, c))
def test_wrong_guess_tangent_type(self):
def fn(x):
return x.clone()
ref_x = TwoTensor(
torch.randn(2, 3, requires_grad=True), torch.randn(2, 3, requires_grad=True)
)
ref_y = fn(ref_x)
ref_y.backward(gradient=TwoTensor(torch.randn(2, 3), torch.randn(2, 3)))
fn_comp = torch.compile(fn, fullgraph=True)
x = TwoTensor(
torch.randn(2, 3, requires_grad=True), torch.randn(2, 3, requires_grad=True)
)
y = fn_comp(x)
y.backward(gradient=TwoTensor(torch.randn(2, 3), torch.randn(2, 3)))
x2 = TwoTensor(
torch.randn(2, 3, requires_grad=True), torch.randn(2, 3, requires_grad=True)
)
y2 = fn_comp(x2)
with self.assertRaisesRegex(
RuntimeError,
"""
During the backward, we encountered a tensor subclass where we guessed its
metadata incorrectly.
""", # noqa: F541
):
y2.backward(gradient=torch.randn(2, 3))
def test_tangent_type_coercion(self):
def fn(x):
return x.clone()
ref_y = fn(WrapperSubclass(torch.randn(2, 3, requires_grad=True)))
ref_y.sum().backward()
fn_comp = torch.compile(fn, fullgraph=True)
x = TwoTensor(
torch.randn(2, 3, requires_grad=True), torch.randn(2, 3, requires_grad=True)
)
y = fn_comp(x)
y.backward(gradient=TwoTensor(torch.randn(2, 3), torch.randn(2, 3)))
x2 = TwoTensor(
torch.randn(2, 3, requires_grad=True), torch.randn(2, 3, requires_grad=True)
)
y2 = fn_comp(x2)
# Test coercion WrapperSubclass -> TwoTensor
y2.backward(gradient=WrapperSubclass(torch.randn(2, 3)))
y3 = torch.compile(fn, fullgraph=True)(torch.randn(2, 3, requires_grad=True))
# Test coercion WrapperSubclass -> Tensor
y3.backward(gradient=WrapperSubclass(torch.randn(2, 3)))
@torch._inductor.config.patch({"freezing": True})
def test_inductor_freezing_with_subclasses(self):
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.w = TwoTensor(torch.randn(3, 4), torch.randn(3, 4))
self.wt = torch.randn(3, 4)
def forward(self, x):
return (
x.index_select(
dim=0, index=torch.tensor([0, 2, 1], dtype=torch.int64)
)
+ self.w
+ self.wt
)
m = M()
inp = torch.randn(3, 4)
with torch.no_grad():
torch.compile(m, fullgraph=True)(inp)
def test_rrelu(self):
def fn(x):
return torch.rrelu(x, training=True)
def fn_(x):
torch.rrelu_(x, training=True)
return x
x = torch.randn(4, 4)
torch.compile(fn, backend="inductor", fullgraph=True)(x)
torch.compile(fn_, backend="inductor", fullgraph=True)(x)
def test_layer_norm(self):
def fn(x):
return F.layer_norm(x, normalized_shape=(8,))
x = torch.randn(2, 4, 8)
eager = fn(x)
aot_eager = torch.compile(backend="aot_eager")(fn)(x)
self.assertEqual(eager, aot_eager, atol=0, rtol=0)
@unittest.skipIf(not torch.cuda.is_available(), "CUDA is unavailable")
def test_rms_norm(self):
# Only CUDA rms norm fails to be decomposed
def fn(x):
return F.rms_norm(x, normalized_shape=(8,))
x = torch.randn(2, 4, 8, device="cuda")
eager = fn(x)
aot_eager = torch.compile(backend="aot_eager")(fn)(x)
self.assertEqual(eager, aot_eager, atol=0, rtol=0)
def test_subclass_parameters(self):
class _M(torch.nn.Module):
def __init__(self):
super().__init__()
self.p = torch.nn.Parameter(
TwoTensor(
TwoTensor(torch.zeros(3, 4), torch.randn(3, 4)),
torch.ones(3, 4),
)
)
def forward(self, x):
return x + self.p
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.p1 = torch.nn.Parameter(torch.ones(3, 4))
self.p2 = torch.nn.Parameter(
TwoTensor(
torch.ones(3, 4),
TwoTensor(torch.randn(3, 4), torch.randn(3, 4)),
)
)
self._m = _M()
def forward(self, x):
return self._m(x) + x + 2 * self.p1 + self.p2
m = M()
ref_x = torch.randn(3, 4)
ref_out = m(ref_x)
ref_out.sum().backward()
m.zero_grad()
from torch._functorch._aot_autograd.subclass_parametrization import (
unwrap_tensor_subclass_parameters,
)
unwrap_tensor_subclass_parameters(m)
ref_x2 = ref_x.detach().clone()
ref_out2 = m(ref_x2)
self.assertEqual(ref_out2, ref_out)
ref_out2.sum().backward()
self.assertEqual(ref_x2.grad, ref_x.grad)
m.zero_grad()
x = ref_x.detach().clone()
comp_fn = torch.compile(m, backend="aot_eager", fullgraph=True)
out = comp_fn(x)
self.assertEqual(ref_out, out)
out.sum().backward()
self.assertEqual(ref_x.grad, x.grad)
def test_subclass_parameters_torture_case(self):
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.p1 = torch.nn.Parameter(torch.ones(3, 4))
self.p2 = torch.nn.Parameter(
TwoTensor(
TwoTensor(
torch.ones(3, 4),
TwoTensor(torch.randn(3, 4), torch.randn(3, 4)),
),
TwoTensor(
TwoTensor(torch.randn(3, 4), torch.randn(3, 4)),
TwoTensor(torch.ones(3, 4), torch.randn(3, 4)),
),
)
)
def forward(self, x):
return x + 2 * self.p1 + self.p2.a.b
m = M()
ref_x = torch.randn(3, 4)
ref_out = m(ref_x)
ref_out.sum().backward()
m.zero_grad()
from torch._functorch._aot_autograd.subclass_parametrization import (
unwrap_tensor_subclass_parameters,
)
unwrap_tensor_subclass_parameters(m)
ref_x2 = ref_x.detach().clone()
ref_out2 = m(ref_x2)
self.assertEqual(ref_out2, ref_out)
ref_out2.sum().backward()
self.assertEqual(ref_x2.grad, ref_x.grad)
m.zero_grad()
x = ref_x.detach().clone()
comp_fn = torch.compile(m, backend="aot_eager", fullgraph=True)
out = comp_fn(x)
self.assertEqual(ref_out, out)
out.sum().backward()
self.assertEqual(ref_x.grad, x.grad)
def test_rrelu_with_noise_mutation(self):
def fn_functional(x):
noise = torch.ones_like(x)
result, noise_out = torch.ops.aten.rrelu_with_noise_functional(
x, noise, 0.2, 0.8, True
)
return result, noise_out
def fn_mutation(x):
noise = torch.ones_like(x)
result = torch.ops.aten.rrelu_with_noise(x, noise, 0.2, 0.8, True)
return result, noise
def fn_inplace(x):
noise = torch.ones_like(x, requires_grad=False)
torch.ops.aten.rrelu_with_noise_(x, noise, 0.2, 0.8, True)
return x, noise
def _test_fn(fn, check_backward=True):
x = -torch.abs(torch.randn(4, 4, dtype=torch.bfloat16, requires_grad=True))
ref_y, ref_noise = fn(x)
self.assertTrue(torch.all(ref_noise < torch.ones_like(ref_noise)).item())
comp_y, comp_noise = torch.compile(fn, backend="inductor", fullgraph=True)(
x
)
if check_backward:
comp_y.sum().backward()
self.assertTrue(torch.all(comp_noise < torch.ones_like(comp_noise)).item())
_test_fn(fn_functional)
_test_fn(fn_mutation)
_test_fn(fn_inplace, check_backward=False)
@unittest.skipIf(not torch.cuda.is_available(), "CUDA is unavailable")
@parametrize("dynamic_shapes", [True, False])
@parametrize("test_subclasses", [True, False])
@parametrize("device", ["cuda", "cpu"])
@patch("torch._functorch.config.guess_tangent_strides_as_outputs", True)
def test_noncontig_nonmemformat_tangents(
self, dynamic_shapes, test_subclasses, device
):
B = 2
T = 4
E = 6
def fn(x):
x = x + 1
return x.transpose(1, 2)
def _inp_dense():
t = torch.randn(B, T, E, device=device, requires_grad=True)
if dynamic_shapes:
for i in range(t.ndim):
torch._dynamo.mark_dynamic(t, i)
return t
def _inp_sc():
return TwoTensor(_inp_dense(), _inp_dense())
_inp = _inp_dense if not test_subclasses else _inp_sc
comp_fn = torch.compile(fn, backend="aot_eager", fullgraph=True)
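        # Added note: _tg3 builds a tangent with the same shape as y but doubled
        # strides, i.e. no memory overlap yet not densely packed, matching the
        # "not-dense tangent" case below.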
def _tg3(y):
t = torch.randn(
2 * y.shape, dtype=y.dtype, layout=y.layout, device=y.device
)
return t.as_strided(y.shape, tuple(s * 2 for s in y.stride()))
TEST_CASES = [
(_inp, lambda y: torch.ones(y.shape, dtype=y.dtype, device=y.device)),
# Memory overlap, dense tangent
(
_inp,
lambda y: torch.tensor([1], dtype=y.dtype, device=y.device).as_strided(
y.shape, (0,) * y.ndim
),
),
# No memory overlap, not-dense tangent
(_inp, _tg3),
]
for inp_fn, tg_fn in TEST_CASES:
ref_x = inp_fn()
x = ref_x.detach().clone().requires_grad_()
ref_y = fn(ref_x)
y = comp_fn(x)
self.assertEqual(ref_y, y)
ref_tg = (
tg_fn(ref_y)
if not test_subclasses
else TwoTensor(tg_fn(ref_y), tg_fn(ref_y))
)
tg = ref_tg.clone()
ref_y.backward(ref_tg)
y.backward(tg)
self.assertEqual(ref_x.grad, x.grad)
@patch("torch._functorch.config.guess_tangent_strides_as_outputs", True)
def test_flex_attn_noncontiguous_tangents(self):
with GradsNoForceContiguousContextManager() as ctx:
E = 16 # embedding dim
H = 4 # number of heads
@torch.compile(backend="aot_eager", fullgraph=True)
def attn_fn(q, k, v):
y = flex_attention(query=q, key=k, value=v)
y = torch.ops._test_aotdispatch_lib.log_tangents_memory_format(y)
return y
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.c_attn = torch.nn.Linear(E, 3 * E)
def forward(self, x):
B, T, E = x.size()
q, k, v = self.c_attn(x).split(E, dim=2)
k = k.view(B, T, H, E // H).transpose(1, 2) # (B, nh, T, hs)
q = q.view(B, T, H, E // H).transpose(1, 2) # (B, nh, T, hs)
v = v.view(B, T, H, E // H).transpose(1, 2) # (B, nh, T, hs)
y = attn_fn(q, k, v)
return y.transpose(1, 2).contiguous().view(B, T, E)
m = M()
B = 1
T = 8
def _inp():
return torch.randn(B, T, E, requires_grad=True)
x = _inp()
y = m(x)
y.backward(torch.ones_like(y).contiguous())
self.assertEqual(1, len(ctx.tangent_strides))
self.assertEqual((128, 4, 16, 1), ctx.tangent_strides[0])
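            # Worked stride check (illustrative addition, not in the original
            # test): a contiguous tensor of shape (1, 8, 4, 4) has strides
            # (128, 16, 4, 1); transposing dims 1 and 2 yields shape (1, 4, 8, 4)
            # with strides (128, 4, 16, 1), which is the non-contiguous layout
            # the tangent above is expected to keep.
            self.assertEqual(
                (128, 4, 16, 1), torch.empty(1, 8, 4, 4).transpose(1, 2).stride()
            )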
def _test_pack_hooks(
self,
fn,
inp_fn,
hooks,
symbolic_tracing=True,
pre_compile_fn=None,
backend="inductor",
):
ctx = torch.autograd.graph.saved_tensors_hooks
torch._dynamo.reset()
with ExitStack() as stack:
# All hooks in eager to get ref
for hook, _ in hooks:
pack, unpack = hook
stack.enter_context(ctx(pack, unpack))
ref_x = inp_fn()
def _f(t):
if t.dtype.is_floating_point:
return t.detach().clone().requires_grad_()
return t
x = pytree.tree_map_only(torch.Tensor, _f, ref_x)
ref_y = fn(*ref_x)
ref_y.sum().backward()
if pre_compile_fn:
pre_compile_fn()
with ExitStack() as stack:
for hook, inline in hooks:
pack, unpack = hook
if inline:
if symbolic_tracing:
stack.enter_context(
ctx(
*saved_tensors_hooks_to_gm(
pack,
unpack,
"pack_hash",
"unpack_hash",
)
)
)
else:
stack.enter_context(
ctx(
*saved_tensors_hooks_to_gm(
pack, unpack, "pack_hash", "unpack_hash"
)
)
)
else:
stack.enter_context(ctx(pack, unpack))
y = torch.compile(fn, backend=backend, fullgraph=True)(*x)
y.sum().backward()
self.assertEqual(ref_y, y, atol=1e-2, rtol=1e-2)
ref_x_grad = pytree.tree_map_only(torch.Tensor, lambda t: t.grad, ref_x)
x_grad = pytree.tree_map_only(torch.Tensor, lambda t: t.grad, x)
self.assertEqual(ref_x_grad, x_grad, atol=1e-2, rtol=1e-2)
@unittest.skipIf(not torch.cuda.is_available(), "CUDA is unavailable")
@unittest.skipIf(not SM80OrLater, "bfloat16, float8")
@parametrize("saved_tensors_hooks_filtering_mode", ["donated", "no_static", "all"])
def test_saved_tensors_hooks_base(self, saved_tensors_hooks_filtering_mode):
with patch(
"torch._functorch.config.saved_tensors_hooks_filtering_mode",
saved_tensors_hooks_filtering_mode,
):
            # The y argument exercises saving of an int tensor, checking that the
            # filtering logic (e.g. the is_floating_point check) does not apply hooks to it.
class SAF(torch.autograd.Function):
@staticmethod
def forward(ctx, x, y):
ctx.save_for_backward(x, y)
return x
@staticmethod
def backward(ctx, gx):
(saved_x, saved_y) = ctx.saved_tensors
return gx + saved_x + saved_y, None
class AF(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
ctx.d1 = x.size(1)
return x
@staticmethod
def backward(ctx, gx):
(saved_x,) = ctx.saved_tensors
d1 = ctx.d1
return gx + saved_x * d1
def fn(x, y):
x = x.relu()
x = x + 1
x = x.relu()
x = 2 * x
x = AF.apply(x)
return x
def simple_fn(x, y):
x = x + 1
x = x.t()
x = x.relu()
x = x.t()
x = SAF.apply(x, y)
return x
device = torch.device("cuda:0")
def inp_fn():
x = torch.ones(2, 2, device=device, requires_grad=True)
torch._dynamo.mark_dynamic(x, 0)
torch._dynamo.mark_dynamic(x, 1)
y = torch.zeros(2, 2, device=device, dtype=torch.int64)
return x, y
def pack_dev_sym_cpu(x):
return x.dtype, x.device, x.size(1), x.cpu()
def unpack_dev_sym_cpu(packed):
dtype, device, dim1, x = packed
x = x.to(device=device)
return x.to(dtype)
def pack_tensor(x):
return x.device, x.cpu()
def unpack_tensor(packed):
device, t_cpu = packed
return t_cpu.to(device)
def pack_bf16(x):
return x.dtype, x.to(dtype=torch.bfloat16)
def unpack_bf16(packed):
dtype, x = packed
return x.to(dtype)
def pack_mul2(x):
return x.dtype, x * 2
def unpack_mul2(x):
dtype, x = x
x = x / 2
return x.to(dtype)
def pack_wrapper_sc(x):
return WrapperSubclass(x)
def unpack_wrapper_sc(x):
return x.a
def pack_wrapper_two_tensor(x):
return TwoTensor(x, x)
def unpack_wrapper_two_tensor(x):
return x.a + x.b
def pack_mul2_eager(x):
return x * 2
def unpack_mul2_eager(x):
return x / 2
def pack_cpu(x):
return x.to(device="cpu")
def unpack_cpu(x):
return x.to(device=device)
for test_fn in [simple_fn, fn]:
self._test_pack_hooks(
test_fn,
inp_fn,
[((pack_cpu, unpack_cpu), True)],
symbolic_tracing=False,
)
self._test_pack_hooks(
test_fn, inp_fn, [((pack_bf16, unpack_bf16), True)]
)
self._test_pack_hooks(
test_fn, inp_fn, [((pack_mul2, unpack_mul2), True)]
)
self._test_pack_hooks(
test_fn, inp_fn, [((pack_tensor, unpack_tensor), True)]
)
self._test_pack_hooks(
test_fn, inp_fn, [((pack_dev_sym_cpu, unpack_dev_sym_cpu), True)]
)
self._test_pack_hooks(
test_fn, inp_fn, [((pack_mul2_eager, unpack_mul2_eager), False)]
)
self._test_pack_hooks(
test_fn,
inp_fn,
[((pack_fp8, unpack_fp8), True)],
)
self._test_pack_hooks(
test_fn,
inp_fn,
[((pack_fp8_with_scale, unpack_fp8_with_scale), True)],
)
# Disable testing of Subclasses for now
# self._test_pack_hooks(test_fn, inp_fn, [(pack_wrapper_sc, unpack_wrapper_sc)])
# self._test_pack_hooks(
# test_fn, inp_fn, [(pack_wrapper_two_tensor, unpack_wrapper_two_tensor)]
# )
@unittest.skipIf(not torch.cuda.is_available(), "CUDA is unavailable")
@unittest.skipIf(not SM80OrLater, "bfloat16, float8")
def test_saved_tensors_hooks_params(self):
lib = torch.library.Library("_test_aotdispatch_lib", "FRAGMENT")
logged_shapes = []
logged_dtypes = []
lib.define("log(Tensor x) -> Tensor")
def log_impl(x):
logged_shapes.append(list(x.shape))
logged_dtypes.append(x.dtype)
return x.clone()
def log_meta(x):
return x.clone()
for backend in ["CPU", "CUDA"]:
lib.impl(
"log",
log_impl,
backend,
)
lib.impl("log", log_meta, "Meta")
def pack_fp8_with_scale_and_log(x):
torch.ops._test_aotdispatch_lib.log(x)
return _pack_fp8_with_scale_wrap(x)
def unpack_fp8_with_scale_and_log(packed):
return _unpack_fp8_with_scale_wrap(packed)
def m_inp_fn():
x = torch.ones(
2, 2, 2, device=device, dtype=torch.float64, requires_grad=True
)
torch._dynamo.mark_dynamic(x, 0)
torch._dynamo.mark_dynamic(x, 1)
return (x,)
class SAF0(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return x
@staticmethod
def backward(ctx, gx):
(saved_x,) = ctx.saved_tensors
return gx + saved_x
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(2, 2)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(2, 2)
def forward(self, x):
x = SAF0.apply(x)
x = x.to(dtype=torch.float32)
x = self.fc1(x)
x = self.relu(x)
x = self.fc2(x)
return x
def _reset_logged():
logged_shapes.clear()
logged_dtypes.clear()
device = torch.device("cuda:0")
m = M().to(device=device)
def _test_m():
self._test_pack_hooks(
m,
m_inp_fn,
[
(
(
pack_fp8_with_scale_and_log,
unpack_fp8_with_scale_and_log,
),
True,
)
],
pre_compile_fn=_reset_logged,
backend="aot_eager",
)
with patch(
"torch._functorch.config.saved_tensors_hooks_filtering_mode", "donated"
):
_reset_logged()
_test_m()
# Check that hooks were not applied to Parameters
# parameters excluded
self.assertFalse([2, 2] in logged_shapes)
self.assertTrue([2, 2, 2] in logged_shapes)
# input excluded
self.assertFalse(torch.float64 in logged_dtypes)
with patch(
"torch._functorch.config.saved_tensors_hooks_filtering_mode", "no_static"
):
_reset_logged()
_test_m()
# Check that hooks were not applied to Parameters
# parameters excluded
self.assertFalse([2, 2] in logged_shapes)
self.assertTrue([2, 2, 2] in logged_shapes)
self.assertTrue(torch.float64 in logged_dtypes)
with patch("torch._functorch.config.saved_tensors_hooks_filtering_mode", "all"):
_reset_logged()
_test_m()
# Check that hooks were applied to all saved tensors
self.assertTrue([2, 2] in logged_shapes)
self.assertTrue([2, 2, 2] in logged_shapes)
self.assertTrue(torch.float64 in logged_dtypes)
@unittest.skipIf(not torch.cuda.is_available(), "CUDA is unavailable")
@unittest.skipIf(not SM80OrLater, "bfloat16, float8")
@torch._functorch.config.patch(saved_tensors_hooks_filtering_mode="all")
def test_saved_tensors_hooks_recompile(self):
ctx = torch.autograd.graph.saved_tensors_hooks
def pack_bf16(x):
return x.to(dtype=torch.bfloat16)
def unpack_bf16(x):
return x.to(dtype=torch.float)
def pack_mul2(x):
return x * 2
def unpack_mul2(x):
return x / 2
def _test(hooks, inline, expected_compile_count):
class SAF(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return x
@staticmethod
def backward(ctx, gx):
(saved_x,) = ctx.saved_tensors
return gx + saved_x
class AF(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
ctx.d1 = x.size(1)
return x
@staticmethod
def backward(ctx, gx):
(saved_x,) = ctx.saved_tensors
d1 = ctx.d1
return gx + saved_x * d1
def fn(x):
x = x.relu()
x = x + 1
x = 2 * x
x = AF.apply(x)
return x
device = torch.device("cuda:0")
def inp_fn():
x = torch.ones(2, 3, device=device, requires_grad=True)
torch._dynamo.mark_dynamic(x, 0)
torch._dynamo.mark_dynamic(x, 1)
return x
from torch._dynamo.testing import CompileCounter
cnt = CompileCounter()
x = inp_fn()
y = torch.compile(fn, backend=cnt, fullgraph=True)(x)
y.sum().backward()
def _test_with_hooks(hooks):
with ExitStack() as stack:
pack, unpack = hooks
if inline:
stack.enter_context(
ctx(
*saved_tensors_hooks_to_gm(
pack, unpack, "pack_hash", "unpack_hash"
)
)
)
else:
stack.enter_context(ctx(pack, unpack))
x = inp_fn()
y = torch.compile(fn, backend=cnt, fullgraph=True)(x)
y.sum().backward()
_test_with_hooks(hooks[0])
_test_with_hooks(hooks[1])
self.assertEqual(cnt.frame_count, expected_compile_count)
_test(
((pack_bf16, unpack_bf16), (pack_mul2, unpack_mul2)),
inline=False,
expected_compile_count=1,
)
_test(
((pack_bf16, unpack_bf16), (pack_mul2, unpack_mul2)),
inline=True,
expected_compile_count=3,
)
@torch._functorch.config.patch(donated_buffer=True)
@torch._functorch.config.patch(saved_tensors_hooks_filtering_mode="no_static")
def test_saved_tensors_hooks_donated_buffers(self):
pack_gm, unpack_gm = saved_tensors_hooks_to_gm(
pack_fp8,
unpack_fp8,
"pack_hash",
"unpack_hash",
)
logger_name = "torch._functorch._aot_autograd.graph_compile"
class SAF(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return x
@staticmethod
def backward(ctx, gx):
(saved_x,) = ctx.saved_tensors
return gx + saved_x
def fn(x):
x0 = x
x = SAF.apply(x)
return x0, torch.nn.functional.relu(x)
inp = torch.rand([3, 3], requires_grad=True)
# 1. No donated buffers without hooks, as relu saves input which is also user output.
with self.assertLogs(logger_name, level="INFO") as captured:
out = torch.compile(fn, backend="aot_eager", fullgraph=True, dynamic=False)(
inp
)
out[1].sum().backward()
expected_msg = "bw_donated_idxs=[]"
FileCheck().check(expected_msg).run("\n".join(captured.output))
        # 2. With the hooks installed, hooks are applied to the saved tensors (per the
        # saved_tensors_hooks_filtering_mode patch on this test), and the results of
        # the hooks become donated buffers.
        # A standalone sketch of the saved_tensors_hooks mechanism follows this test.
inp = torch.rand([3, 3], requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack_gm, unpack_gm):
with self.assertLogs(logger_name, level="INFO") as captured:
out = torch.compile(
fn, backend="aot_eager", fullgraph=True, dynamic=False
)(inp)
out[1].sum().backward()
expected_msg = "bw_donated_idxs=[0, 1]"
FileCheck().check(expected_msg).run("\n".join(captured.output))
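# A minimal, self-contained sketch (added for illustration; not part of the
# upstream suite) of the torch.autograd.graph.saved_tensors_hooks mechanism the
# donated-buffer test above builds on: pack() runs when autograd saves a tensor
# for backward, and unpack() runs when the backward pass reads it back. It
# relies on this module's existing `import torch`.
def _saved_tensors_hooks_sketch():
    packed_shapes = []
    def pack(t):
        packed_shapes.append(tuple(t.shape))
        return t.to(torch.bfloat16)
    def unpack(t):
        return t.to(torch.float32)
    x = torch.rand(3, 3, requires_grad=True)
    with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
        y = torch.nn.functional.relu(x)
    y.sum().backward()
    return packed_shapes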
# Entries in here don't work and need to be fixed.
# Each one of these is a bug (or needs to be investigated).
aot_autograd_failures = {
# data-dependent control flow
xfail("cov"),
xfail("nn.functional.gaussian_nll_loss"),
xfail("tensor_split"),
xfail("corrcoef"),
xfail("quantile"),
xfail("nanquantile"),
skip("narrow"),
xfail("istft"),
xfail("linalg.eig"),
skip("as_strided_scatter"),
skip("as_strided", "partial_views"), # flaky
# Given input size: (s0xs1x2). Calculated output size: ...
skip("max_pool2d_with_indices_backward"),
# Misc
xfail("to_sparse"),
xfail("corrcoef"),
xfail("cov"),
xfail("chalf"), # RuntimeError: "sum_cpu" not implemented for 'ComplexHalf'
xfail("sparse.sampled_addmm"),
xfail("sparse.mm", "reduce"),
skip("nn.functional.binary_cross_entropy_with_logits"), # seems to fail sometimes?
skip("nn.functional.margin_ranking_loss"), # seems flaky
skip("linalg.lu_solve"), # flaky
decorate("matmul", decorator=unittest.skipIf(IS_ARM64, "flaky")),
decorate("__rmatmul__", decorator=unittest.skipIf(IS_ARM64, "flaky")),
# overrides atol=1e-4, rtol=1e-5 would do as well
decorate(
"svd_lowrank",
decorator=toleranceOverride({torch.float32: tol(atol=1e-04, rtol=1e-05)}),
),
decorate(
"linalg.householder_product",
decorator=unittest.skipIf(IS_MACOS and IS_X86, "flaky"),
),
decorate(
"linalg.pinv",
"singular",
# This delta is coming entirely from the clone() on tangents
# in AOTDispatcher to make them contiguous
decorator=toleranceOverride({torch.float32: tol(atol=1e-02, rtol=1e-02)}),
),
decorate(
"nn.functional.interpolate",
"bicubic",
decorator=toleranceOverride({torch.float32: tol(atol=1e-04, rtol=1e-05)}),
),
# conv2d sometimes nondeterministic in this config?
decorate("nn.functional.conv2d", decorator=unittest.skipIf(IS_ARM64, "flaky")),
}
if not TEST_MKL:
aot_autograd_failures.update(
{
decorate(
"matmul",
decorator=toleranceOverride(
{torch.float32: tol(atol=6e-05, rtol=4e-06)}
),
),
decorate(
"__rmatmul__",
decorator=toleranceOverride(
{torch.float32: tol(atol=6e-05, rtol=4e-06)}
),
),
}
)
symbolic_aot_autograd_failures = {
xfail("combinations", ""), # aten.masked_select.default
xfail(
"index_fill", ""
), # Cannot call sizes() on tensor with symbolic sizes/strides
xfail(
"linalg.lstsq", ""
), # aten.linalg_lstsq.default - couldn't find symbolic meta function/decomposition
xfail(
"linalg.lstsq", "grad_oriented"
), # aten.linalg_lstsq.default - couldn't find symbolic meta funct...
xfail(
"linalg.lu_solve", ""
), # aten.linalg_lu_solve.default - couldn't find symbolic meta function/deco...
skip(
"nn.functional.batch_norm", ""
), # '0 is not tracked with proxy for <torch.fx.experimental.proxy_te..
xfail(
"nn.functional.binary_cross_entropy", ""
), # aten.fill_.Scalar - couldn't find symbolic meta funct...
xfail(
"nn.functional.cross_entropy", ""
), # Cannot call sizes() on tensor with symbolic sizes/strides
xfail(
"nn.functional.ctc_loss", ""
), # aten._ctc_loss.Tensor - couldn't find symbolic meta function/deco...
xfail(
"nn.functional.fractional_max_pool3d", ""
), # rand() received an invalid combination of arguments - g...
xfail("trace", ""), # Cannot call sizes() on tensor with symbolic sizes/strides
decorate(
"linalg.householder_product",
decorator=unittest.skipIf(IS_MACOS and IS_X86, "flaky"),
),
}
def _test_aot_autograd_helper(
self,
device,
dtype,
op,
dynamic=False,
disable_functionalization=False,
):
if not op.supports_autograd:
self.skipTest("Op does not support autograd")
    # aot_autograd_check can check for data specialization by randomizing
    # the inputs. The ops listed below do not tolerate random inputs, so that
    # check is disabled for them.
cant_check_data_specialization = set(
{
"nn.functional.max_unpool1d",
"nn.functional.max_unpool2d",
"nn.functional.max_unpool3d",
}
)
try_check_data_specialization = op.name not in cant_check_data_specialization
sample_inputs_itr = op.sample_inputs(device, dtype, requires_grad=True)
for sample_input in sample_inputs_itr:
t_args = [sample_input.input] + list(sample_input.args)
t_kwargs = sample_input.kwargs
try:
aot_autograd_check(
op.op,
t_args,
t_kwargs,
dynamic,
self.assertRaisesRegex,
self.assertEqual,
check_gradients=True,
try_check_data_specialization=try_check_data_specialization,
skip_correctness_check=op.skip_correctness_check_compile_vs_eager,
disable_functionalization=disable_functionalization,
)
except DynamicOutputShapeException:
self.skipTest("Dynamic output shape operation in trace")
except GuardOnDataDependentSymNode:
# Carveout for getitem; I don't want to xfail the entire test
# because that will reject known to be good tests see
# https://github.com/pytorch/pytorch/issues/94705
if op.name == "__getitem__":
self.skipTest("Dynamic output shape operation in trace")
else:
raise
def _test_aot_autograd_module_helper(
self, device, dtype, training, module_info, *, dynamic=False
):
module_cls = module_info.module_cls
module_inputs = module_info.module_inputs_func(
module_info, device=device, dtype=dtype, requires_grad=True, training=training
)
for module_input in module_inputs:
if module_input.forward_input is None:
continue
args, kwargs = (
module_input.constructor_input.args,
module_input.constructor_input.kwargs,
)
m = module_cls(*args, **kwargs)
m.to(device).to(dtype)
m.train(training)
# Lazy modules need to see an input first to initialize params.
args, kwargs = (
module_input.forward_input.args,
module_input.forward_input.kwargs,
)
flat_args, args_spec = pytree.tree_flatten((args, kwargs))
        # PackedSequence is only used for RNNs. It might be possible to fake-ify it
        # if treated as a pytree, but torchdynamo does not support RNNs anyway.
if any(tuple(isinstance(flat_arg, PackedSequence) for flat_arg in flat_args)):
continue
if issubclass(module_info.module_cls, torch.nn.modules.lazy.LazyModuleMixin):
with torch.no_grad():
m(*args, **kwargs)
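        # Added note: each tensor in the flattened args is replaced with a sentinel
        # so that f() below can splice fresh tensors back into the same positions,
        # while non-tensor args stay inline in the spec.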
sentinel_val = -42
is_tensor_spec = [
sentinel_val if isinstance(arg, torch.Tensor) else arg for arg in flat_args
]
args = [arg for arg in flat_args if isinstance(arg, torch.Tensor)]
def f(params_buffers_args):
named_params, named_buffers, args = params_buffers_args
cur_flat_args = list(is_tensor_spec)
args = iter(args)
for idx, v in enumerate(cur_flat_args):
if v == sentinel_val:
cur_flat_args[idx] = next(args)
c_args, c_kwargs = pytree.tree_unflatten(cur_flat_args, args_spec)
params_and_buffers = {**named_params, **named_buffers}
return torch.func.functional_call(m, params_and_buffers, c_args, c_kwargs)
named_params = dict(m.named_parameters(remove_duplicate=False))
named_buffers = dict(m.named_buffers(remove_duplicate=False))
num_params_buffers = len(named_params) + len(named_buffers)
compiled_f = aot_function(
f, nop, num_params_buffers=num_params_buffers, dynamic=dynamic
)
params_buffers_args = [named_params, named_buffers, args]
_test_aot_autograd_forwards_backwards_helper(
f,
compiled_f,
params_buffers_args,
self.assertRaisesRegex,
self.assertEqual,
True,
)
|
TestAOTModuleSimplified
|
python
|
pytorch__pytorch
|
test/distributed/pipelining/test_schedule.py
|
{
"start": 9166,
"end": 15290
}
|
class ____(TestCase):
def setUp(self):
        # Define a list of test cases with varying num_local_stages, num_microbatches, and group_size.
        # Most cases satisfy num_microbatches % group_size == 0 and should succeed;
        # the final group intentionally does not.
self.test_cases = [
# small number of stages
(2, 2, 2),
(2, 4, 4),
(2, 8, 2),
(2, 8, 4),
(2, 8, 8),
(4, 4, 4),
(4, 8, 4),
(4, 8, 8),
# large microbatches
(4, 16, 4),
(4, 32, 4),
(4, 64, 4),
# large groups
(4, 16, 16),
(4, 32, 32),
(4, 128, 64),
# odd num pipeline stages
(3, 2, 2),
(3, 8, 2),
(3, 12, 4),
# odd group_sizes
(4, 6, 3),
(4, 10, 5),
            # num_microbatches not divisible by group_size
(2, 3, 4),
(2, 4, 4),
(2, 10, 4),
(2, 15, 4),
]
@parametrize(
"ScheduleClass",
[ScheduleInterleaved1F1B, ScheduleLoopedBFS],
)
def test_pipeline_order(self, ScheduleClass):
for num_local_stages, num_microbatches, group_size in self.test_cases:
with self.subTest(
num_local_stages=num_local_stages,
num_microbatches=num_microbatches,
group_size=group_size,
):
if num_microbatches % group_size != 0:
continue
logger.info(
"num_local_stages=%d num_microbatches=%d group_size=%d",
num_local_stages,
num_microbatches,
group_size,
)
num_stages = num_local_stages * group_size
stages = [
MockPipelineStage(group_size=group_size, num_stages=num_stages)
for i in range(num_local_stages)
]
schedule = ScheduleClass(stages, num_microbatches)
_formatted_pipeline_order = _format_pipeline_order(
schedule.pipeline_order
)
def stage_to_rank(stage):
return stage % group_size
comms_sch = _add_send_recv(
schedule.pipeline_order,
stage_to_rank=stage_to_rank,
num_stages=num_stages,
)
_simulate_comms_compute(
comms_sch,
stage_to_rank=stage_to_rank,
num_stages=num_stages,
)
@parametrize(
"ScheduleClass",
[ScheduleInterleaved1F1B, ScheduleInterleavedZeroBubble],
)
def test_pipeline_order_flex_and_zero_bubble(self, ScheduleClass):
for num_local_stages, num_microbatches, group_size in self.test_cases:
with self.subTest(
num_local_stages=num_local_stages,
num_microbatches=num_microbatches,
group_size=group_size,
):
warmups_ops_last_stage = (num_local_stages - 1) * (
num_microbatches // max(1, num_microbatches // group_size)
)
warmup_ops = warmups_ops_last_stage + 2 * (group_size - 1)
warmup_ops = min(warmup_ops, num_microbatches * num_local_stages)
num_stages = num_local_stages * group_size
stages = [
MockPipelineStage(group_size=group_size, num_stages=num_stages)
for i in range(num_local_stages)
]
schedule = ScheduleClass(stages, num_microbatches)
_format_pipeline_order(schedule.pipeline_order)
def stage_to_rank(stage):
return stage % group_size
comms_sch = _add_send_recv(
schedule.pipeline_order,
stage_to_rank=stage_to_rank,
num_stages=num_stages,
)
# print(_format_pipeline_order(comms_sch))
_simulate_comms_compute(
comms_sch,
stage_to_rank=stage_to_rank,
num_stages=num_stages,
)
@parametrize(
"ScheduleClass",
[ScheduleDualPipeV, ScheduleZBVZeroBubble],
)
def test_pipeline_order_for_v_schedules(self, ScheduleClass):
for num_local_stages, num_microbatches, group_size in self.test_cases:
with self.subTest(
num_local_stages=num_local_stages,
num_microbatches=num_microbatches,
group_size=group_size,
):
num_stages = num_local_stages * group_size
stages = [
MockPipelineStage(group_size=group_size, num_stages=num_stages)
for i in range(num_local_stages)
]
                # V schedules only support 2 stages per rank, so if num_local_stages
                # is not 2, ensure an error is thrown.
if num_local_stages != 2:
with self.assertRaises(ValueError):
ScheduleClass(
stages,
num_microbatches,
)
continue
# DualPipeV requires num_microbatches to be >= num_stages
if ScheduleClass == ScheduleDualPipeV and num_microbatches < num_stages:
with self.assertRaises(ValueError):
ScheduleClass(
stages,
num_microbatches,
)
continue
# Create schedule and validate it
schedule = ScheduleClass(stages, num_microbatches)
_validate_schedule(
schedule.pipeline_order, group_size, num_stages, num_microbatches
)
instantiate_parametrized_tests(TestSchedulePlan)
|
TestSchedulePlan
|
python
|
prabhupant__python-ds
|
data_structures/bst/min_max_value_in_bst.py
|
{
"start": 0,
"end": 588
}
|
class ____():
def __init__(self, val):
self.val = val
self.right = None
self.left = None
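# In a BST the minimum value lives at the leftmost node and the maximum at the
# rightmost node; both helpers below simply walk one branch until it ends.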
def min_value(root):
if not root:
return None
curr = root
while curr.left:
curr = curr.left
return curr.val
def max_value(root):
if not root:
return None
curr = root
while curr.right:
curr = curr.right
return curr.val
root = Node(5)
root.left = Node(3)
root.right = Node(7)
root.left.left = Node(1)
root.left.right = Node(4)
root.right.left = Node(6)
print(max_value(root))
print(min_value(root))
|
Node
|
python
|
apache__airflow
|
helm-tests/tests/helm_tests/airflow_aux/test_migrate_database_job.py
|
{
"start": 914,
"end": 16276
}
|
class ____:
"""Tests migrate DB job."""
def test_should_run_by_default(self):
docs = render_chart(show_only=["templates/jobs/migrate-database-job.yaml"])
assert docs[0]["kind"] == "Job"
assert jmespath.search("spec.template.spec.containers[0].name", docs[0]) == "run-airflow-migrations"
assert jmespath.search("spec.template.spec.securityContext.runAsUser", docs[0]) == 50000
@pytest.mark.parametrize(
("migrate_database_job_enabled", "created"),
[
(False, False),
(True, True),
],
)
def test_enable_migrate_database_job(self, migrate_database_job_enabled, created):
docs = render_chart(
values={
"migrateDatabaseJob": {"enabled": migrate_database_job_enabled},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert bool(docs) is created
def test_should_support_annotations(self):
docs = render_chart(
values={"migrateDatabaseJob": {"annotations": {"foo": "bar"}, "jobAnnotations": {"fiz": "fuz"}}},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
annotations = jmespath.search("spec.template.metadata.annotations", docs[0])
assert "foo" in annotations
assert annotations["foo"] == "bar"
job_annotations = jmespath.search("metadata.annotations", docs[0])
assert "fiz" in job_annotations
assert job_annotations["fiz"] == "fuz"
def test_should_add_component_specific_labels(self):
docs = render_chart(
values={
"migrateDatabaseJob": {
"labels": {"test_label": "test_label_value"},
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert "test_label" in jmespath.search("spec.template.metadata.labels", docs[0])
assert jmespath.search("spec.template.metadata.labels", docs[0])["test_label"] == "test_label_value"
def test_should_merge_common_labels_and_component_specific_labels(self):
docs = render_chart(
values={
"labels": {"test_common_label": "test_common_label_value"},
"migrateDatabaseJob": {
"labels": {"test_specific_label": "test_specific_label_value"},
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert "test_common_label" in jmespath.search("spec.template.metadata.labels", docs[0])
assert (
jmespath.search("spec.template.metadata.labels", docs[0])["test_common_label"]
== "test_common_label_value"
)
assert "test_specific_label" in jmespath.search("spec.template.metadata.labels", docs[0])
assert (
jmespath.search("spec.template.metadata.labels", docs[0])["test_specific_label"]
== "test_specific_label_value"
)
def test_should_create_valid_affinity_tolerations_and_node_selector(self):
docs = render_chart(
values={
"migrateDatabaseJob": {
"affinity": {
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{"key": "foo", "operator": "In", "values": ["true"]},
]
}
]
}
}
},
"tolerations": [
{"key": "dynamic-pods", "operator": "Equal", "value": "true", "effect": "NoSchedule"}
],
"nodeSelector": {"diskType": "ssd"},
}
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("kind", docs[0]) == "Job"
assert (
jmespath.search(
"spec.template.spec.affinity.nodeAffinity."
"requiredDuringSchedulingIgnoredDuringExecution."
"nodeSelectorTerms[0]."
"matchExpressions[0]."
"key",
docs[0],
)
== "foo"
)
assert (
jmespath.search(
"spec.template.spec.nodeSelector.diskType",
docs[0],
)
== "ssd"
)
assert (
jmespath.search(
"spec.template.spec.tolerations[0].key",
docs[0],
)
== "dynamic-pods"
)
def test_scheduler_name(self):
docs = render_chart(
values={"schedulerName": "airflow-scheduler"},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert (
jmespath.search(
"spec.template.spec.schedulerName",
docs[0],
)
== "airflow-scheduler"
)
@pytest.mark.parametrize(
("use_default_image", "expected_image"),
[
(True, "apache/airflow:2.1.0"),
(False, "apache/airflow:user-image"),
],
)
def test_should_use_correct_image(self, use_default_image, expected_image):
docs = render_chart(
values={
"defaultAirflowRepository": "apache/airflow",
"defaultAirflowTag": "2.1.0",
"images": {
"airflow": {
"repository": "apache/airflow",
"tag": "user-image",
},
"useDefaultImageForMigration": use_default_image,
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert expected_image == jmespath.search("spec.template.spec.containers[0].image", docs[0])
def test_should_add_extra_containers(self):
docs = render_chart(
values={
"migrateDatabaseJob": {
"extraContainers": [
{"name": "{{ .Chart.Name }}", "image": "test-registry/test-repo:test-tag"}
],
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.containers[-1]", docs[0]) == {
"name": "airflow",
"image": "test-registry/test-repo:test-tag",
}
def test_should_add_extra_init_containers(self):
docs = render_chart(
values={
"migrateDatabaseJob": {
"extraInitContainers": [
{"name": "{{ .Chart.Name }}", "image": "test-registry/test-repo:test-tag"}
],
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.initContainers[0]", docs[0]) == {
"name": "airflow",
"image": "test-registry/test-repo:test-tag",
}
def test_should_template_extra_containers(self):
docs = render_chart(
values={
"migrateDatabaseJob": {
"extraContainers": [{"name": "{{ .Release.Name }}-test-container"}],
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.containers[-1]", docs[0]) == {
"name": "release-name-test-container"
}
def test_set_resources(self):
docs = render_chart(
values={
"migrateDatabaseJob": {
"resources": {
"requests": {
"cpu": "1000mi",
"memory": "512Mi",
},
"limits": {
"cpu": "1000mi",
"memory": "512Mi",
},
},
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.containers[0].resources", docs[0]) == {
"requests": {
"cpu": "1000mi",
"memory": "512Mi",
},
"limits": {
"cpu": "1000mi",
"memory": "512Mi",
},
}
def test_should_disable_default_helm_hooks(self):
docs = render_chart(
values={"migrateDatabaseJob": {"useHelmHooks": False}},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
annotations = jmespath.search("metadata.annotations", docs[0])
assert annotations is None
def test_should_set_correct_helm_hooks_weight(self):
docs = render_chart(
show_only=[
"templates/jobs/migrate-database-job.yaml",
],
)
annotations = jmespath.search("metadata.annotations", docs[0])
assert annotations["helm.sh/hook-weight"] == "1"
def test_should_add_extra_volumes(self):
docs = render_chart(
values={
"migrateDatabaseJob": {
"extraVolumes": [{"name": "myvolume-{{ .Chart.Name }}", "emptyDir": {}}],
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.volumes[-1]", docs[0]) == {
"name": "myvolume-airflow",
"emptyDir": {},
}
def test_should_add_extra_volume_mounts(self):
docs = render_chart(
values={
"migrateDatabaseJob": {
"extraVolumeMounts": [{"name": "foobar-{{ .Chart.Name }}", "mountPath": "foo/bar"}],
},
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.containers[0].volumeMounts[-1]", docs[0]) == {
"name": "foobar-airflow",
"mountPath": "foo/bar",
}
def test_should_add_global_volume_and_global_volume_mount(self):
docs = render_chart(
values={
"volumes": [{"name": "myvolume", "emptyDir": {}}],
"volumeMounts": [{"name": "foobar", "mountPath": "foo/bar"}],
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.volumes[-1]", docs[0]) == {
"name": "myvolume",
"emptyDir": {},
}
assert jmespath.search("spec.template.spec.containers[0].volumeMounts[-1]", docs[0]) == {
"name": "foobar",
"mountPath": "foo/bar",
}
def test_job_ttl_after_finished(self):
docs = render_chart(
values={"migrateDatabaseJob": {"ttlSecondsAfterFinished": 1}},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
ttl = jmespath.search("spec.ttlSecondsAfterFinished", docs[0])
assert ttl == 1
def test_job_ttl_after_finished_zero(self):
docs = render_chart(
values={"migrateDatabaseJob": {"ttlSecondsAfterFinished": 0}},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
ttl = jmespath.search("spec.ttlSecondsAfterFinished", docs[0])
assert ttl == 0
def test_job_ttl_after_finished_nil(self):
docs = render_chart(
values={"migrateDatabaseJob": {"ttlSecondsAfterFinished": None}},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
spec = jmespath.search("spec", docs[0])
assert "ttlSecondsAfterFinished" not in spec
@pytest.mark.parametrize(
("airflow_version", "expected_arg"),
[
("1.10.14", "airflow upgradedb"),
("2.0.2", "airflow db upgrade"),
("2.7.1", "airflow db migrate"),
],
)
def test_default_command_and_args_airflow_version(self, airflow_version, expected_arg):
docs = render_chart(
values={
"airflowVersion": airflow_version,
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.containers[0].command", docs[0]) is None
assert [
"bash",
"-c",
f"exec \\\n{expected_arg}",
] == jmespath.search("spec.template.spec.containers[0].args", docs[0])
@pytest.mark.parametrize("command", [None, ["custom", "command"]])
@pytest.mark.parametrize("args", [None, ["custom", "args"]])
def test_command_and_args_overrides(self, command, args):
docs = render_chart(
values={"migrateDatabaseJob": {"command": command, "args": args}},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert command == jmespath.search("spec.template.spec.containers[0].command", docs[0])
assert args == jmespath.search("spec.template.spec.containers[0].args", docs[0])
def test_command_and_args_overrides_are_templated(self):
docs = render_chart(
values={
"migrateDatabaseJob": {"command": ["{{ .Release.Name }}"], "args": ["{{ .Release.Service }}"]}
},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert jmespath.search("spec.template.spec.containers[0].command", docs[0]) == ["release-name"]
assert jmespath.search("spec.template.spec.containers[0].args", docs[0]) == ["Helm"]
def test_no_airflow_local_settings(self):
docs = render_chart(
values={"airflowLocalSettings": None}, show_only=["templates/jobs/migrate-database-job.yaml"]
)
volume_mounts = jmespath.search("spec.template.spec.containers[0].volumeMounts", docs[0])
assert "airflow_local_settings.py" not in str(volume_mounts)
def test_airflow_local_settings(self):
docs = render_chart(
values={"airflowLocalSettings": "# Well hello!"},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert {
"name": "config",
"mountPath": "/opt/airflow/config/airflow_local_settings.py",
"subPath": "airflow_local_settings.py",
"readOnly": True,
} in jmespath.search("spec.template.spec.containers[0].volumeMounts", docs[0])
@pytest.mark.parametrize(
"restart_policy",
[
"OnFailure",
"Never",
],
)
def test_restart_policy(self, restart_policy):
docs = render_chart(
values={"migrateDatabaseJob": {"restartPolicy": restart_policy}},
show_only=["templates/jobs/migrate-database-job.yaml"],
)
assert restart_policy == jmespath.search("spec.template.spec.restartPolicy", docs[0])
|
TestMigrateDatabaseJob
|
python
|
getsentry__sentry
|
tests/sentry/integrations/github/tasks/test_codecov_account_unlink.py
|
{
"start": 535,
"end": 4386
}
|
class ____(IntegrationTestCase):
provider = GitHubIntegrationProvider
def setUp(self):
super().setUp()
self.integration = self.create_integration(
organization=self.organization,
provider=IntegrationProviderSlug.GITHUB.value,
name="test-org",
external_id="123456",
metadata={"account_id": "789"},
status=ObjectStatus.DISABLED,
)
@patch("sentry.integrations.github.tasks.codecov_account_unlink.CodecovApiClient")
def test_codecov_account_unlink_success(self, mock_codecov_client_class):
mock_client = MagicMock()
mock_response = MagicMock()
mock_response.raise_for_status.return_value = None
mock_client.post.return_value = mock_response
mock_codecov_client_class.return_value = mock_client
codecov_account_unlink(
integration_id=self.integration.id,
organization_ids=[self.organization.id],
)
mock_codecov_client_class.assert_called_once_with(
git_provider_org="test-org", git_provider=IntegrationProviderSlug.GITHUB.value
)
expected_request_data = {
"sentry_org_ids": [str(self.organization.id)],
}
mock_client.post.assert_called_once_with(
endpoint=account_unlink_endpoint,
json=expected_request_data,
)
mock_response.raise_for_status.assert_called_once()
def test_codecov_account_unlink_missing_integration(self):
with patch("sentry.integrations.github.tasks.codecov_account_unlink.logger") as mock_logger:
codecov_account_unlink(
integration_id=99999, # Non-existent integration
organization_ids=[self.organization.id],
)
mock_logger.warning.assert_called_once_with(
"codecov.account_unlink.missing_integration", extra={"integration_id": 99999}
)
@patch("sentry.integrations.github.tasks.codecov_account_unlink.CodecovApiClient")
def test_codecov_account_unlink_configuration_error(self, mock_codecov_client_class):
mock_codecov_client_class.side_effect = ConfigurationError("Bad config")
with patch("sentry.integrations.github.tasks.codecov_account_unlink.logger") as mock_logger:
codecov_account_unlink(
integration_id=self.integration.id,
organization_ids=[self.organization.id],
)
mock_logger.exception.assert_called_once_with(
"codecov.account_unlink.configuration_error",
extra={
"github_org": "test-org",
"integration_id": self.integration.id,
},
)
@patch("sentry.integrations.github.tasks.codecov_account_unlink.CodecovApiClient")
def test_codecov_account_unlink_api_error(self, mock_codecov_client_class):
mock_client = MagicMock()
mock_response = MagicMock()
mock_response.raise_for_status.side_effect = Exception("API Error")
mock_client.post.return_value = mock_response
mock_codecov_client_class.return_value = mock_client
with patch("sentry.integrations.github.tasks.codecov_account_unlink.logger") as mock_logger:
codecov_account_unlink(
integration_id=self.integration.id,
organization_ids=[self.organization.id],
)
mock_logger.exception.assert_called_once_with(
"codecov.account_unlink.unexpected_error",
extra={
"github_org": "test-org",
"integration_id": self.integration.id,
"error": "API Error",
"error_type": "Exception",
},
)
|
CodecovAccountUnlinkTestCase
|
python
|
getsentry__sentry
|
tests/sentry/monitors/endpoints/test_organization_monitor_details.py
|
{
"start": 158,
"end": 301
}
|
class ____(BaseMonitorDetailsTest):
endpoint = "sentry-api-0-organization-monitor-details"
__test__ = True
|
OrganizationMonitorDetailsTest
|
python
|
tensorflow__tensorflow
|
tensorflow/python/kernel_tests/array_ops/matrix_band_part_op_test.py
|
{
"start": 2501,
"end": 3173
}
|
class ____(test_lib.TestCase):
pass # Filled in below
def _GetMatrixBandPartGradTest(dtype_, batch_shape_, shape_):
@test_util.run_v1_only("b/120545219")
def Test(self):
shape = batch_shape_ + shape_
x = constant_op.constant(np.random.rand(*shape), dtype=dtype_)
with self.session(use_gpu=False):
for lower in -1, 0, 1, shape_[-2] - 1:
for upper in -1, 0, 1, shape_[-1] - 1:
y = array_ops.matrix_band_part(x, lower, upper)
error = gradient_checker.compute_gradient_error(
x, x.get_shape().as_list(), y, y.get_shape().as_list())
self.assertLess(error, 1e-4)
return Test
|
MatrixBandPartGradTest
|
python
|
automl__auto-sklearn
|
test/test_util/test_dependencies.py
|
{
"start": 276,
"end": 3257
}
|
class ____(unittest.TestCase):
def test_existing_package(self, getDistributionMock):
requirement = "package"
distribution_mock = unittest.mock.Mock()
getDistributionMock.return_value = distribution_mock
distribution_mock.version = "1.0.0"
verify_packages(requirement)
getDistributionMock.assert_called_once_with("package")
def test_missing_package(self, getDistributionMock):
requirement = "package"
getDistributionMock.side_effect = pkg_resources.DistributionNotFound()
self.assertRaisesRegex(
MissingPackageError,
"Mandatory package 'package' not found",
verify_packages,
requirement,
)
@patch("importlib.import_module")
def test_package_can_only_be_imported(self, import_mock, getDistributionMock):
getDistributionMock.side_effect = pkg_resources.DistributionNotFound()
package = unittest.mock.Mock()
package.__version__ = np.__version__
import_mock.return_value = package
verify_packages("numpy")
def test_correct_package_versions(self, getDistributionMock):
requirement = "package==0.1.2\n" "package>0.1\n" "package>=0.1"
moduleMock = Mock()
moduleMock.version = "0.1.2"
getDistributionMock.return_value = moduleMock
verify_packages(requirement)
getDistributionMock.assert_called_with("package")
self.assertEqual(3, len(getDistributionMock.call_args_list))
def test_wrong_package_version(self, getDistributionMock):
requirement = "package>0.1.2"
moduleMock = Mock()
moduleMock.version = "0.1.2"
getDistributionMock.return_value = moduleMock
self.assertRaisesRegex(
IncorrectPackageVersionError,
re.escape(
"found 'package' version 0.1.2 but requires package version >0.1.2"
),
verify_packages,
requirement,
)
def test_outdated_requirement(self, getDistributionMock):
requirement = "package>=0.1"
moduleMock = Mock()
moduleMock.version = "0.0.9"
getDistributionMock.return_value = moduleMock
self.assertRaisesRegex(
IncorrectPackageVersionError,
re.escape(
"found 'package' version 0.0.9 but requires package version >=0.1"
),
verify_packages,
requirement,
)
def test_too_fresh_requirement(self, getDistributionMock):
requirement = "package==0.1.2"
moduleMock = Mock()
moduleMock.version = "0.1.3"
getDistributionMock.return_value = moduleMock
self.assertRaisesRegex(
IncorrectPackageVersionError,
re.escape(
"found 'package' version 0.1.3 but requires package version ==0.1.2"
),
verify_packages,
requirement,
)
|
VerifyPackagesTests
|
python
|
getsentry__sentry
|
tests/sentry/workflow_engine/endpoints/serializers/test_data_condition_serializer.py
|
{
"start": 363,
"end": 1708
}
|
class ____(TestCase):
def setUp(self) -> None:
super().setUp()
self.condition_group = self.create_data_condition_group(
organization_id=self.organization.id,
logic_type=DataConditionGroup.Type.ANY,
)
def test_serializer_simple(self) -> None:
condition = self.create_data_condition(
condition_group=self.condition_group,
type=Condition.GREATER,
comparison=100,
condition_result=DetectorPriorityLevel.HIGH,
)
result = serialize(condition)
assert result == {
"id": str(condition.id),
"type": "gt",
"comparison": 100,
"conditionResult": DetectorPriorityLevel.HIGH,
}
def test_complex_comparison(self) -> None:
condition = self.create_data_condition(
condition_group=self.condition_group,
type=Condition.GREATER,
comparison={"count": 100, "count_time": 60},
condition_result=DetectorPriorityLevel.HIGH,
)
result = serialize(condition)
assert result == {
"id": str(condition.id),
"type": "gt",
"comparison": {"count": 100, "countTime": 60},
"conditionResult": DetectorPriorityLevel.HIGH,
}
|
TestDataConditionSerializer
|
python
|
pytransitions__transitions
|
tests/test_core.py
|
{
"start": 779,
"end": 53036
}
|
class ____(TestCase):
def setUp(self):
self.stuff = Stuff()
self.machine_cls = Machine
def tearDown(self):
pass
def test_init_machine_with_hella_arguments(self):
states = [
State('State1'),
'State2',
{
'name': 'State3',
'on_enter': 'hello_world'
}
]
transitions = [
{'trigger': 'advance',
'source': 'State2',
'dest': 'State3'
}
]
s = Stuff()
m = s.machine_cls(model=s, states=states, transitions=transitions, initial='State2')
s.advance()
self.assertEqual(s.message, 'Hello World!')
def test_listify(self):
self.assertEqual(listify(4), [4])
self.assertEqual(listify(None), [])
self.assertEqual(listify((4, 5)), (4, 5))
self.assertEqual(listify([1, 3]), [1, 3])
class Foo:
pass
obj = Foo()
proxy = weakref.proxy(obj)
del obj
self.assertEqual(listify(proxy), [proxy])
def test_weakproxy_model(self):
d = DummyModel()
pr = weakref.proxy(d)
self.machine_cls(pr, states=['A', 'B'], transitions=[['go', 'A', 'B']], initial='A')
pr.go()
self.assertTrue(pr.is_B())
def test_property_initial(self):
states = ['A', 'B', 'C', 'D']
# Define with list of dictionaries
transitions = [
{'trigger': 'walk', 'source': 'A', 'dest': 'B'},
{'trigger': 'run', 'source': 'B', 'dest': 'C'},
{'trigger': 'sprint', 'source': 'C', 'dest': 'D'}
]
m = self.stuff.machine_cls(states=states, transitions=transitions, initial='A')
self.assertEqual(m.initial, 'A')
m = self.stuff.machine_cls(states=states, transitions=transitions, initial='C')
self.assertEqual(m.initial, 'C')
m = self.stuff.machine_cls(states=states, transitions=transitions)
self.assertEqual(m.initial, 'initial')
def test_transition_definitions(self):
states = ['A', 'B', 'C', 'D']
# Define with list of dictionaries
transitions = [
{'trigger': 'walk', 'source': 'A', 'dest': 'B'},
{'trigger': 'run', 'source': 'B', 'dest': 'C'},
{'trigger': 'sprint', 'source': 'C', 'dest': 'D'}
] # type: Sequence[TransitionConfig]
m = Machine(states=states, transitions=transitions, initial='A')
m.walk()
self.assertEqual(m.state, 'B')
# Define with list of lists
transitions = [
['walk', 'A', 'B'],
['run', 'B', 'C'],
['sprint', 'C', 'D']
]
m = Machine(states=states, transitions=transitions, initial='A')
m.to_C()
m.sprint()
self.assertEqual(m.state, 'D')
def test_add_states(self):
s = self.stuff
s.machine.add_state('X')
s.machine.add_state('Y')
s.machine.add_state('Z')
event = s.machine.events['to_{0}'.format(s.state)]
self.assertEqual(1, len(event.transitions['X']))
def test_transitioning(self):
s = self.stuff
s.machine.add_transition('advance', 'A', 'B')
s.machine.add_transition('advance', 'B', 'C')
s.machine.add_transition('advance', 'C', 'D')
s.advance()
self.assertEqual(s.state, 'B')
self.assertFalse(s.is_A())
self.assertTrue(s.is_B())
s.advance()
self.assertEqual(s.state, 'C')
def test_pass_state_instances_instead_of_names(self):
state_A = State('A')
state_B = State('B')
states = [state_A, state_B]
m = Machine(states=states, initial=state_A)
assert m.state == 'A'
m.add_transition('advance', state_A, state_B)
m.advance()
assert m.state == 'B'
state_B2 = State('B', on_enter='this_passes')
with self.assertRaises(ValueError):
m.add_transition('advance2', state_A, state_B2)
m2 = Machine(states=states, initial=state_A.name)
assert m.initial == m2.initial
with self.assertRaises(ValueError):
Machine(states=states, initial=State('A'))
def test_conditions(self):
s = self.stuff
s.machine.add_transition('advance', 'A', 'B', conditions='this_passes')
s.machine.add_transition('advance', 'B', 'C', unless=['this_fails'])
s.machine.add_transition('advance', 'C', 'D', unless=['this_fails',
'this_passes'])
s.advance()
self.assertEqual(s.state, 'B')
s.advance()
self.assertEqual(s.state, 'C')
s.advance()
self.assertEqual(s.state, 'C')
def test_uncallable_callbacks(self):
s = self.stuff
s.machine.add_transition('advance', 'A', 'B', conditions=['property_that_fails', 'is_false'])
# make sure parameters passed by trigger events can be handled
s.machine.add_transition('advance', 'A', 'C', before=['property_that_fails', 'is_false'])
s.advance(level='MaximumSpeed')
self.assertTrue(s.is_C())
def test_conditions_with_partial(self):
def check(result):
return result
s = self.stuff
s.machine.add_transition('advance', 'A', 'B',
conditions=partial(check, True))
s.machine.add_transition('advance', 'B', 'C',
unless=[partial(check, False)])
s.machine.add_transition('advance', 'C', 'D',
unless=[partial(check, False), partial(check, True)])
s.advance()
self.assertEqual(s.state, 'B')
s.advance()
self.assertEqual(s.state, 'C')
s.advance()
self.assertEqual(s.state, 'C')
def test_multiple_add_transitions_from_state(self):
s = self.stuff
s.machine.add_transition(
'advance', 'A', 'B', conditions=['this_fails'])
s.machine.add_transition('advance', 'A', 'C')
s.advance()
self.assertEqual(s.state, 'C')
def test_use_machine_as_model(self):
states = ['A', 'B', 'C', 'D']
m = Machine(states=states, initial='A')
m.add_transition('move', 'A', 'B')
m.add_transition('move_to_C', 'B', 'C')
m.move()
self.assertEqual(m.state, 'B')
def test_state_change_listeners(self):
s = self.stuff
s.machine.add_transition('advance', 'A', 'B')
s.machine.add_transition('reverse', 'B', 'A')
s.machine.on_enter_B('hello_world')
s.machine.on_exit_B('goodbye')
s.advance()
self.assertEqual(s.state, 'B')
self.assertEqual(s.message, 'Hello World!')
s.reverse()
self.assertEqual(s.state, 'A')
self.assertTrue(s.message is not None and s.message.startswith('So long'))
def test_before_after_callback_addition(self):
m = Machine(Stuff(), states=['A', 'B', 'C'], initial='A')
m.add_transition('move', 'A', 'B')
trans = m.events['move'].transitions['A'][0]
trans.add_callback('after', 'increase_level')
m.model.move()
self.assertEqual(m.model.level, 2)
def test_before_after_transition_listeners(self):
m = Machine(Stuff(), states=['A', 'B', 'C'], initial='A')
m.add_transition('move', 'A', 'B')
m.add_transition('move', 'B', 'C')
m.before_move('increase_level')
m.model.move()
self.assertEqual(m.model.level, 2)
m.model.move()
self.assertEqual(m.model.level, 3)
def test_prepare(self):
m = Machine(Stuff(), states=['A', 'B', 'C'], initial='A')
m.add_transition('move', 'A', 'B', prepare='increase_level')
m.add_transition('move', 'B', 'C', prepare='increase_level')
m.add_transition('move', 'C', 'A', prepare='increase_level', conditions='this_fails')
m.add_transition('dont_move', 'A', 'C', prepare='increase_level')
m.prepare_move('increase_level')
m.model.move()
self.assertEqual(m.model.state, 'B')
self.assertEqual(m.model.level, 3)
m.model.move()
self.assertEqual(m.model.state, 'C')
self.assertEqual(m.model.level, 5)
# State does not advance, but increase_level still runs
m.model.move()
self.assertEqual(m.model.state, 'C')
self.assertEqual(m.model.level, 7)
# An invalid transition shouldn't execute the callback
try:
m.model.dont_move()
except MachineError as e:
self.assertTrue("Can't trigger event" in str(e))
self.assertEqual(m.model.state, 'C')
self.assertEqual(m.model.level, 7)
def test_state_model_change_listeners(self):
s = self.stuff
s.machine.add_transition('go_e', 'A', 'E')
s.machine.add_transition('go_f', 'E', 'F')
s.machine.on_enter_F('hello_F')
s.go_e()
self.assertEqual(s.state, 'E')
self.assertEqual(s.message, 'I am E!')
s.go_f()
self.assertEqual(s.state, 'F')
self.assertEqual(s.exit_message, 'E go home...')
self.assertIn('I am F!', s.message or "")
self.assertIn('Hello F!', s.message or "")
def test_inheritance(self):
states = ['A', 'B', 'C', 'D', 'E']
s = InheritedStuff(states=states, initial='A')
s.add_transition('advance', 'A', 'B', conditions='this_passes')
s.add_transition('advance', 'B', 'C')
s.add_transition('advance', 'C', 'D')
s.advance()
self.assertEqual(s.state, 'B')
self.assertFalse(s.is_A())
self.assertTrue(s.is_B())
s.advance()
self.assertEqual(s.state, 'C')
class NewMachine(Machine):
def __init__(self, *args, **kwargs):
super(NewMachine, self).__init__(*args, **kwargs)
n = NewMachine(states=states, transitions=[['advance', 'A', 'B']], initial='A')
self.assertTrue(n.is_A())
n.advance()
self.assertTrue(n.is_B())
with self.assertRaises(ValueError):
NewMachine(state=['A', 'B'])
def test_send_event_data_callbacks(self):
states = ['A', 'B', 'C', 'D', 'E']
s = Stuff()
        # First, pass positional and keyword args directly to the callback
m = Machine(model=s, states=states, initial='A', send_event=False,
auto_transitions=True)
m.add_transition(
trigger='advance', source='A', dest='B', before='set_message')
s.advance(message='Hallo. My name is Inigo Montoya.')
self.assertTrue(s.message is not None and s.message.startswith('Hallo.'))
s.to_A()
s.advance('Test as positional argument')
self.assertTrue(s.message is not None and s.message.startswith('Test as'))
# Now wrap arguments in an EventData instance
m.send_event = True
m.add_transition(
trigger='advance', source='B', dest='C', before='extract_message')
s.advance(message='You killed my father. Prepare to die.')
self.assertTrue(s.message is not None and s.message.startswith('You'))
def test_send_event_data_conditions(self):
states = ['A', 'B', 'C', 'D']
s = Stuff()
        # First, pass positional and keyword args directly to the condition
m = Machine(model=s, states=states, initial='A', send_event=False)
m.add_transition(
trigger='advance', source='A', dest='B',
conditions='this_fails_by_default')
s.advance(boolean=True)
self.assertEqual(s.state, 'B')
# Now wrap arguments in an EventData instance
m.send_event = True
m.add_transition(
trigger='advance', source='B', dest='C',
conditions='extract_boolean')
s.advance(boolean=False)
self.assertEqual(s.state, 'B')
def test_auto_transitions(self):
states = ['A', {'name': 'B'}, State(name='C')] # type: Sequence[StateConfig]
m = Machine(states=states, initial='A', auto_transitions=True)
m.to_B()
self.assertEqual(m.state, 'B')
m.to_C()
self.assertEqual(m.state, 'C')
m.to_A()
self.assertEqual(m.state, 'A')
# Should fail if auto transitions is off...
m = Machine(states=states, initial='A', auto_transitions=False)
with self.assertRaises(AttributeError):
m.to_C()
def test_ordered_transitions(self):
states = ['beginning', 'middle', 'end']
m = Machine(states=states)
m.add_ordered_transitions()
self.assertEqual(m.state, 'initial')
m.next_state()
self.assertEqual(m.state, 'beginning')
m.next_state()
m.next_state()
self.assertEqual(m.state, 'end')
m.next_state()
self.assertEqual(m.state, 'initial')
        # Exclude the initial state from the loop
m = Machine(states=states)
m.add_ordered_transitions(loop_includes_initial=False)
m.to_end()
m.next_state()
self.assertEqual(m.state, 'beginning')
# Do not loop transitions
m = Machine(states=states)
m.add_ordered_transitions(loop=False)
m.to_end()
with self.assertRaises(MachineError):
m.next_state()
# Test user-determined sequence and trigger name
m = Machine(states=states, initial='beginning')
m.add_ordered_transitions(['end', 'beginning'], trigger='advance')
m.advance()
self.assertEqual(m.state, 'end')
m.advance()
self.assertEqual(m.state, 'beginning')
# Via init argument
m = Machine(states=states, initial='beginning', ordered_transitions=True)
m.next_state()
self.assertEqual(m.state, 'middle')
# Alter initial state
m = Machine(states=states, initial='middle', ordered_transitions=True)
m.next_state()
self.assertEqual(m.state, 'end')
m.next_state()
self.assertEqual(m.state, 'beginning')
# Partial state machine without the initial state
m = Machine(states=states, initial='beginning')
m.add_ordered_transitions(['middle', 'end'])
self.assertEqual(m.state, 'beginning')
with self.assertRaises(MachineError):
m.next_state()
m.to_middle()
for s in ('end', 'middle', 'end'):
m.next_state()
self.assertEqual(m.state, s)
def test_ordered_transition_error(self):
m = Machine(states=['A'], initial='A')
with self.assertRaises(ValueError):
m.add_ordered_transitions()
m.add_state('B')
m.add_ordered_transitions()
m.add_state('C')
with self.assertRaises(ValueError):
m.add_ordered_transitions(['C'])
def test_ignore_invalid_triggers(self):
a_state = State('A')
transitions = [['a_to_b', 'A', 'B']]
# Exception is triggered by default
b_state = State('B')
m1 = Machine(states=[a_state, b_state], transitions=transitions,
initial='B')
with self.assertRaises(MachineError):
m1.a_to_b()
# Set default value on machine level
m2 = Machine(states=[a_state, b_state], transitions=transitions,
initial='B', ignore_invalid_triggers=True)
m2.a_to_b()
# Exception is suppressed, so this passes
b_state = State('B', ignore_invalid_triggers=True)
m3 = Machine(states=[a_state, b_state], transitions=transitions,
initial='B')
m3.a_to_b()
# Set for some states but not others
new_states = ['C', 'D']
m1.add_states(new_states, ignore_invalid_triggers=True)
m1.to_D()
m1.a_to_b() # passes because exception suppressed for D
m1.to_B()
with self.assertRaises(MachineError):
m1.a_to_b()
# State value overrides machine behaviour
m3 = Machine(states=[a_state, b_state], transitions=transitions,
initial='B', ignore_invalid_triggers=False)
m3.a_to_b()
def test_string_callbacks(self):
m = Machine(states=['A', 'B'],
before_state_change='before_state_change',
after_state_change='after_state_change', send_event=True,
initial='A', auto_transitions=True)
m.before_state_change = MagicMock()
m.after_state_change = MagicMock()
m.to_B()
self.assertTrue(m.before_state_change[0].called)
self.assertTrue(m.after_state_change[0].called)
# after_state_change should have been called with EventData
event_data = m.after_state_change[0].call_args[0][0]
self.assertIsInstance(event_data, EventData)
self.assertTrue(event_data.result)
def test_function_callbacks(self):
before_state_change = MagicMock()
after_state_change = MagicMock()
m = Machine(states=['A', 'B'],
before_state_change=before_state_change,
after_state_change=after_state_change, send_event=True,
initial='A', auto_transitions=True)
self.assertEqual(before_state_change, m.before_state_change[0])
self.assertEqual(after_state_change, m.after_state_change[0])
m.to_B()
self.assertTrue(before_state_change.called)
self.assertTrue(after_state_change.called)
def test_state_callbacks(self):
class Model:
def on_enter_A(self):
pass
def on_exit_A(self):
pass
def on_enter_B(self):
pass
def on_exit_B(self):
pass
states = [State(name='A', on_enter='on_enter_A', on_exit='on_exit_A'),
State(name='B', on_enter='on_enter_B', on_exit='on_exit_B')]
machine = Machine(Model(), states=states)
state_a = machine.get_state('A')
state_b = machine.get_state('B')
self.assertEqual(len(state_a.on_enter), 1)
self.assertEqual(len(state_a.on_exit), 1)
self.assertEqual(len(state_b.on_enter), 1)
self.assertEqual(len(state_b.on_exit), 1)
def test_state_callable_callbacks(self):
class Model:
def __init__(self):
self.exit_A_called = False
self.exit_B_called = False
def on_enter_A(self, event):
pass
def on_enter_B(self, event):
pass
states = [State(name='A', on_enter='on_enter_A', on_exit='tests.test_core.on_exit_A'),
State(name='B', on_enter='on_enter_B', on_exit=on_exit_B),
State(name='C', on_enter='tests.test_core.AAAA')]
model = Model()
machine = Machine(model, states=states, send_event=True, initial='A')
state_a = machine.get_state('A')
state_b = machine.get_state('B')
self.assertEqual(len(state_a.on_enter), 1)
self.assertEqual(len(state_a.on_exit), 1)
self.assertEqual(len(state_b.on_enter), 1)
self.assertEqual(len(state_b.on_exit), 1)
model.to_B()
self.assertTrue(model.exit_A_called)
model.to_A()
self.assertTrue(model.exit_B_called)
with self.assertRaises(AttributeError):
model.to_C()
def test_pickle(self):
import sys
if sys.version_info < (3, 4):
import dill as pickle
else:
import pickle
states = ['A', 'B', 'C', 'D']
# Define with list of dictionaries
transitions = [
{'trigger': 'walk', 'source': 'A', 'dest': 'B'},
{'trigger': 'run', 'source': 'B', 'dest': 'C'},
{'trigger': 'sprint', 'source': 'C', 'dest': 'D'}
] # type: Sequence[TransitionConfigDict]
m = Machine(states=states, transitions=transitions, initial='A')
m.walk()
dump = pickle.dumps(m)
self.assertIsNotNone(dump)
m2 = pickle.loads(dump)
self.assertEqual(m.state, m2.state)
m2.run()
def test_pickle_model(self):
import sys
if sys.version_info < (3, 4):
import dill as pickle
else:
import pickle
self.stuff.to_B()
dump = pickle.dumps(self.stuff)
self.assertIsNotNone(dump)
model2 = pickle.loads(dump)
self.assertEqual(self.stuff.state, model2.state)
model2.to_F()
def test_queued(self):
states = ['A', 'B', 'C', 'D']
# Define with list of dictionaries
def change_state(machine):
self.assertEqual(machine.state, 'A')
if machine.has_queue:
machine.run(machine=machine)
self.assertEqual(machine.state, 'A')
else:
with self.assertRaises(MachineError):
machine.run(machine=machine)
transitions = [
{'trigger': 'walk', 'source': 'A', 'dest': 'B', 'before': change_state},
{'trigger': 'run', 'source': 'B', 'dest': 'C'},
{'trigger': 'sprint', 'source': 'C', 'dest': 'D'}
] # type: Sequence[TransitionConfig]
m = Machine(states=states, transitions=transitions, initial='A')
m.walk(machine=m)
self.assertEqual(m.state, 'B')
m = Machine(states=states, transitions=transitions, initial='A', queued=True)
m.walk(machine=m)
self.assertEqual(m.state, 'C')
def test_queued_errors(self):
def before_change(machine):
if machine.has_queue:
machine.to_A(machine)
machine._queued = False
def after_change(machine):
machine.to_C(machine)
states = ['A', 'B', 'C']
transitions = [{
'trigger': 'do', 'source': '*', 'dest': 'C',
'before': partial(self.stuff.this_raises, ValueError)
}] # type: Sequence[TransitionConfig]
m = Machine(states=states, transitions=transitions, queued=True,
before_state_change=before_change, after_state_change=after_change)
with self.assertRaises(MachineError):
m.to_B(machine=m)
with self.assertRaises(ValueError):
m.do(machine=m)
def test_queued_remove(self):
m = self.machine_cls(model=None, states=['A', 'B', 'C'], initial='A', queued=True)
assert_equal = self.assertEqual
class BaseModel:
def on_enter_A(self):
pass
def on_enter_B(self):
pass
def on_enter_C(self):
pass
class SubModel(BaseModel):
def __init__(self):
self.inner = BaseModel()
def on_enter_A(self):
self.to_B()
self.inner.to_B()
def on_enter_B(self):
self.to_C()
self.inner.to_C()
# queue should contain to_B(), inner.to_B(), to_C(), inner.to_C()
assert_equal(4, len(m._transition_queue))
m.remove_model(self)
                # since to_B() is currently being executed it should still be queued; to_C() should be gone
assert_equal(3, len(m._transition_queue))
def on_enter_C(self):
raise RuntimeError("Event was not cancelled")
model = SubModel()
m.add_model([model, model.inner])
model.to_A()
# test whether models can be removed outside event queue
m.remove_model(model.inner)
self.assertTrue(model.inner.is_C())
def test___getattr___and_identify_callback(self):
m = self.machine_cls(Stuff(), states=['A', 'B', 'C'], initial='A')
m.add_transition('move', 'A', 'B')
m.add_transition('move', 'B', 'C')
callback = m.__getattr__('before_move')
self.assertTrue(callable(callback))
with self.assertRaises(AttributeError):
m.__getattr__('before_no_such_transition')
with self.assertRaises(AttributeError):
m.__getattr__('before_no_such_transition')
with self.assertRaises(AttributeError):
m.__getattr__('__no_such_method__')
with self.assertRaises(AttributeError):
m.__getattr__('')
type, target = m._identify_callback('on_exit_foobar')
self.assertEqual(type, 'on_exit')
self.assertEqual(target, 'foobar')
type, target = m._identify_callback('on_exitfoobar')
self.assertEqual(type, None)
self.assertEqual(target, None)
type, target = m._identify_callback('notacallback_foobar')
self.assertEqual(type, None)
self.assertEqual(target, None)
type, target = m._identify_callback('totallyinvalid')
self.assertEqual(type, None)
self.assertEqual(target, None)
type, target = m._identify_callback('before__foobar')
self.assertEqual(type, 'before')
self.assertEqual(target, '_foobar')
type, target = m._identify_callback('before__this__user__likes__underscores___')
self.assertEqual(type, 'before')
self.assertEqual(target, '_this__user__likes__underscores___')
type, target = m._identify_callback('before_stuff')
self.assertEqual(type, 'before')
self.assertEqual(target, 'stuff')
type, target = m._identify_callback('before_trailing_underscore_')
self.assertEqual(type, 'before')
self.assertEqual(target, 'trailing_underscore_')
type, target = m._identify_callback('before_')
self.assertIs(type, None)
self.assertIs(target, None)
type, target = m._identify_callback('__')
self.assertIs(type, None)
self.assertIs(target, None)
type, target = m._identify_callback('')
self.assertIs(type, None)
self.assertIs(target, None)
def test_state_and_transition_with_underscore(self):
m = Machine(Stuff(), states=['_A_', '_B_', '_C_'], initial='_A_')
m.add_transition('_move_', '_A_', '_B_', prepare='increase_level')
m.add_transition('_after_', '_B_', '_C_', prepare='increase_level')
m.add_transition('_on_exit_', '_C_', '_A_', prepare='increase_level', conditions='this_fails')
m.model._move_()
self.assertEqual(m.model.state, '_B_')
self.assertEqual(m.model.level, 2)
m.model._after_()
self.assertEqual(m.model.state, '_C_')
self.assertEqual(m.model.level, 3)
# State does not advance, but increase_level still runs
m.model._on_exit_()
self.assertEqual(m.model.state, '_C_')
self.assertEqual(m.model.level, 4)
def test_callback_identification(self):
m = Machine(Stuff(), states=['A', 'B', 'C', 'D', 'E', 'F'], initial='A')
m.add_transition('transition', 'A', 'B', before='increase_level')
m.add_transition('after', 'B', 'C', before='increase_level')
m.add_transition('on_exit_A', 'C', 'D', before='increase_level', conditions='this_fails')
m.add_transition('check', 'C', 'E', before='increase_level')
m.add_transition('prepare', 'E', 'F', before='increase_level')
m.add_transition('before', 'F', 'A', before='increase_level')
m.before_transition('increase_level')
m.before_after('increase_level')
m.before_on_exit_A('increase_level')
m.after_check('increase_level')
m.before_prepare('increase_level')
m.before_before('increase_level')
m.model.transition()
self.assertEqual(m.model.state, 'B')
self.assertEqual(m.model.level, 3)
m.model.after()
self.assertEqual(m.model.state, 'C')
self.assertEqual(m.model.level, 5)
m.model.on_exit_A()
self.assertEqual(m.model.state, 'C')
self.assertEqual(m.model.level, 5)
m.model.check()
self.assertEqual(m.model.state, 'E')
self.assertEqual(m.model.level, 7)
m.model.prepare()
self.assertEqual(m.model.state, 'F')
self.assertEqual(m.model.level, 9)
m.model.before()
self.assertEqual(m.model.state, 'A')
self.assertEqual(m.model.level, 11)
# An invalid transition shouldn't execute the callback
with self.assertRaises(MachineError):
m.model.on_exit_A()
def test_process_trigger(self):
m = Machine(states=['raw', 'processed'], initial='raw')
m.add_transition('process', 'raw', 'processed')
m.process()
self.assertEqual(m.state, 'processed')
def test_multiple_models(self):
s1, s2 = Stuff(), Stuff()
states = ['A', 'B', 'C']
m = Machine(model=[s1, s2], states=states,
initial=states[0])
self.assertEqual(len(m.models), 2)
self.assertTrue(isinstance(m.model, list) and len(m.model) == 2)
m.add_transition('advance', 'A', 'B')
s1.advance()
self.assertEqual(s1.state, 'B')
self.assertEqual(s2.state, 'A')
m = Machine(model=s1, states=states,
initial=states[0])
# for backwards compatibility model should return a model instance
# rather than a list
self.assertNotIsInstance(m.model, list)
def test_dispatch(self):
s1, s2 = Stuff(), Stuff()
states = ['A', 'B', 'C']
m = Machine(model=s1, states=states, ignore_invalid_triggers=True,
initial=states[0], transitions=[['go', 'A', 'B'], ['go', 'B', 'C']])
m.add_model(s2, initial='B')
assert m.dispatch('go')
self.assertEqual(s1.state, 'B')
self.assertEqual(s2.state, 'C')
def test_dispatch_with_error(self):
s1, s2 = Stuff(), Stuff()
states = ['A', 'B', 'C']
m = Machine(model=s1, states=states, ignore_invalid_triggers=True,
initial=states[0], transitions=[['go', 'B', 'C']])
m.add_model(s2, initial='B')
assert not m.dispatch('go')
self.assertEqual(s1.state, 'A')
self.assertEqual(s2.state, 'C')
def test_remove_model(self):
m = self.machine_cls()
self.assertIn(m, m.models)
m.remove_model(m)
self.assertNotIn(m, m.models)
def test_string_trigger(self):
def return_value(value):
return value
class Model:
def trigger(self, value):
return value
self.stuff.machine.add_transition('do', '*', 'C')
self.stuff.trigger('do')
self.assertTrue(self.stuff.is_C())
self.stuff.machine.add_transition('maybe', 'C', 'A', conditions=return_value)
self.assertFalse(self.stuff.trigger('maybe', value=False))
self.assertTrue(self.stuff.trigger('maybe', value=True))
self.assertTrue(self.stuff.is_A())
with self.assertRaises(AttributeError):
self.stuff.trigger('not_available')
with self.assertRaises(MachineError):
self.stuff.trigger('maybe')
model = Model()
m = Machine(model=model)
self.assertEqual(model.trigger(5), 5)
self.stuff.machine.add_transition('do_raise_keyerror', '*', 'C',
before=partial(self.stuff.this_raises, KeyError))
with self.assertRaises(KeyError):
self.stuff.trigger('do_raise_keyerror')
self.stuff.machine.get_model_state(self.stuff).ignore_invalid_triggers = True
self.stuff.trigger('should_not_raise_anything')
self.stuff.trigger('to_A')
self.assertTrue(self.stuff.is_A())
self.stuff.machine.ignore_invalid_triggers = True
self.stuff.trigger('should_not_raise_anything')
def test_get_triggers(self):
states = ['A', 'B', 'C']
transitions = [['a2b', 'A', 'B'],
['a2c', 'A', 'C'],
['c2b', 'C', 'B']]
machine = Machine(states=states, transitions=transitions, initial='A', auto_transitions=False)
self.assertEqual(len(machine.get_triggers('A')), 2)
self.assertEqual(len(machine.get_triggers('B')), 0)
self.assertEqual(len(machine.get_triggers('C')), 1)
        # the stuff machine should have auto 'to_<state>' transitions to every state
m = self.stuff.machine
self.assertEqual(len(m.get_triggers('B')), len(m.states))
trigger_name = m.get_triggers('B')
trigger_state = m.get_triggers(m.states['B'])
self.assertEqual(trigger_name, trigger_state)
def test_skip_override(self):
local_mock = MagicMock()
class Model(object):
def go(self):
local_mock()
model = Model()
transitions = [['go', 'A', 'B'], ['advance', 'A', 'B']]
m = self.stuff.machine_cls(model=model, states=['A', 'B'], transitions=transitions, initial='A')
model.go()
self.assertEqual(model.state, 'A')
self.assertTrue(local_mock.called)
model.advance()
self.assertEqual(model.state, 'B')
model.to_A()
model.trigger('go')
self.assertEqual(model.state, 'B')
    @skipIf(sys.version_info < (3, ),
            "String-checking disabled on PY-2 because repr output differs")
def test_repr(self):
def a_condition(event_data):
self.assertRegex(
str(event_data.transition.conditions),
r"\[<Condition\(<function TestTransitions.test_repr.<locals>"
r".a_condition at [^>]+>\)@\d+>\]")
return True
# No transition has been assigned to EventData yet
def check_prepare_repr(event_data):
self.assertRegex(
str(event_data),
r"<EventData\(<Event\('do_strcheck'\)@\d+>, "
r"<State\('A'\)@\d+>, "
r"None\)@\d+>")
def check_before_repr(event_data):
self.assertRegex(
str(event_data),
r"<EventData\(<Event\('do_strcheck'\)@\d+>, "
r"<State\('A'\)@\d+>, "
r"<Transition\('A', 'B'\)@\d+>\)@\d+>")
m.checked = True
m = Machine(states=['A', 'B'],
prepare_event=check_prepare_repr,
before_state_change=check_before_repr, send_event=True,
initial='A')
m.add_transition('do_strcheck', 'A', 'B', conditions=a_condition)
self.assertTrue(m.do_strcheck())
self.assertIn('checked', vars(m))
def test_machine_prepare(self):
global_mock = MagicMock()
local_mock = MagicMock()
def global_callback():
global_mock()
def local_callback():
local_mock()
def always_fails():
return False
transitions = [
{'trigger': 'go', 'source': 'A', 'dest': 'B', 'conditions': always_fails, 'prepare': local_callback},
{'trigger': 'go', 'source': 'A', 'dest': 'B', 'conditions': always_fails, 'prepare': local_callback},
{'trigger': 'go', 'source': 'A', 'dest': 'B', 'conditions': always_fails, 'prepare': local_callback},
{'trigger': 'go', 'source': 'A', 'dest': 'B', 'conditions': always_fails, 'prepare': local_callback},
{'trigger': 'go', 'source': 'A', 'dest': 'B', 'prepare': local_callback},
] # type: Sequence[TransitionConfig]
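        # 'prepare' callbacks run for every candidate transition, even when its
        # conditions fail, while 'prepare_event' fires only once per trigger call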
m = Machine(states=['A', 'B'], transitions=transitions,
prepare_event=global_callback, initial='A')
m.go()
self.assertEqual(global_mock.call_count, 1)
self.assertEqual(local_mock.call_count, len(transitions))
def test_machine_finalize(self):
finalize_mock = MagicMock()
def always_fails(event_data):
return False
transitions = [
{'trigger': 'go', 'source': 'A', 'dest': 'B'},
{'trigger': 'planA', 'source': 'B', 'dest': 'A', 'conditions': always_fails},
{'trigger': 'planB', 'source': 'B', 'dest': 'A',
'conditions': partial(self.stuff.this_raises, RuntimeError)}
]
m = self.stuff.machine_cls(states=['A', 'B'], transitions=transitions,
finalize_event=finalize_mock, initial='A', send_event=True)
m.go()
self.assertEqual(finalize_mock.call_count, 1)
m.planA()
event_data = finalize_mock.call_args[0][0]
self.assertIsInstance(event_data, EventData)
self.assertEqual(finalize_mock.call_count, 2)
self.assertFalse(event_data.result)
with self.assertRaises(RuntimeError):
m.planB()
m.finalize_event.append(partial(self.stuff.this_raises, ValueError))
# ValueError in finalize should be suppressed
# but mock should have been called anyway
with self.assertRaises(RuntimeError):
m.planB()
self.assertEqual(4, finalize_mock.call_count)
def test_machine_finalize_exception(self):
def finalize_callback(event):
self.assertIsInstance(event.error, ZeroDivisionError)
m = self.stuff.machine_cls(states=['A', 'B'], send_event=True, initial='A',
before_state_change=partial(self.stuff.this_raises, ZeroDivisionError),
finalize_event=finalize_callback)
with self.assertRaises(ZeroDivisionError):
m.to_B()
def test_prep_ordered_arg(self):
self.assertTrue(len(_prep_ordered_arg(3, None)) == 3)
self.assertTrue(all(a is None for a in _prep_ordered_arg(3, None)))
with self.assertRaises(ValueError):
# deliberately passing wrong arguments
_prep_ordered_arg(3, [None, None]) # type: ignore
def test_ordered_transition_callback(self):
class Model:
def __init__(self):
self.flag = False
def make_true(self):
self.flag = True
model = Model()
states = ['beginning', 'middle', 'end']
transits = [None, None, 'make_true']
m = Machine(model, states, initial='beginning')
m.add_ordered_transitions(before=transits)
model.next_state()
self.assertFalse(model.flag)
model.next_state()
model.next_state()
self.assertTrue(model.flag)
def test_ordered_transition_condition(self):
class Model:
def __init__(self):
self.blocker = False
def check_blocker(self):
return self.blocker
model = Model()
states = ['beginning', 'middle', 'end']
m = Machine(model, states, initial='beginning')
m.add_ordered_transitions(conditions=[None, None, 'check_blocker'])
model.to_end()
self.assertFalse(model.next_state())
model.blocker = True
self.assertTrue(model.next_state())
def test_get_transitions(self):
states = ['A', 'B', 'C', 'D']
m = self.machine_cls(states=states, initial='A', auto_transitions=False)
m.add_transition('go', ['A', 'B', 'C'], 'D')
m.add_transition('run', 'A', 'D')
self.assertEqual(
{(t.source, t.dest) for t in m.get_transitions('go')},
{('A', 'D'), ('B', 'D'), ('C', 'D')})
self.assertEqual(
[(t.source, t.dest)
for t in m.get_transitions(source='A', dest='D')],
[('A', 'D'), ('A', 'D')])
self.assertEqual(
sorted([(t.source, t.dest)
for t in m.get_transitions(dest='D')]),
[('A', 'D'), ('A', 'D'), ('B', 'D'), ('C', 'D')])
self.assertEqual(
[(t.source, t.dest)
for t in m.get_transitions(source=m.states['A'], dest=m.states['D'])],
[('A', 'D'), ('A', 'D')])
self.assertEqual(
sorted([(t.source, t.dest)
for t in m.get_transitions(dest=m.states['D'])]),
[('A', 'D'), ('A', 'D'), ('B', 'D'), ('C', 'D')])
def test_remove_transition(self):
self.stuff.machine.add_transition('go', ['A', 'B', 'C'], 'D')
self.stuff.machine.add_transition('walk', 'A', 'B')
self.stuff.go()
self.assertEqual(self.stuff.state, 'D')
self.stuff.to_A()
self.stuff.machine.remove_transition('go', source='A')
with self.assertRaises(MachineError):
self.stuff.go()
self.stuff.machine.add_transition('go', 'A', 'D')
self.stuff.walk()
self.stuff.go()
self.assertEqual(self.stuff.state, 'D')
self.stuff.to_C()
self.stuff.machine.remove_transition('go', dest='D')
self.assertFalse(hasattr(self.stuff, 'go'))
def test_remove_transition_state(self):
self.stuff.machine.add_transition('go', ['A', 'B', 'C'], 'D')
self.stuff.machine.add_transition('walk', 'A', 'B')
self.stuff.go()
self.assertEqual(self.stuff.state, 'D')
self.stuff.to_A()
self.stuff.machine.remove_transition('go', source=self.stuff.machine.states['A'])
with self.assertRaises(MachineError):
self.stuff.go()
self.stuff.machine.add_transition('go', 'A', 'D')
self.stuff.walk()
self.stuff.go()
self.assertEqual(self.stuff.state, 'D')
self.stuff.to_C()
self.stuff.machine.remove_transition('go', dest=self.stuff.machine.states['D'])
self.assertFalse(hasattr(self.stuff, 'go'))
def test_reflexive_transition(self):
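        # dest '=' adds reflexive transitions: the current state is exited and
        # re-entered, so 'after' callbacks run but the state name stays the same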
self.stuff.machine.add_transition('reflex', ['A', 'B'], '=', after='increase_level')
self.assertEqual(self.stuff.state, 'A')
self.stuff.reflex()
self.assertEqual(self.stuff.state, 'A')
self.assertEqual(self.stuff.level, 2)
self.stuff.to_B()
self.assertEqual(self.stuff.state, 'B')
self.stuff.reflex()
self.assertEqual(self.stuff.state, 'B')
self.assertEqual(self.stuff.level, 3)
self.stuff.to_C()
with self.assertRaises(MachineError):
self.stuff.reflex()
self.assertEqual(self.stuff.level, 3)
def test_internal_transition(self):
m = Machine(Stuff(), states=['A', 'B'], initial='A')
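        # dest=None makes 'move' an internal transition: 'prepare' still runs,
        # but the state is never exited or re-entered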
m.add_transition('move', 'A', None, prepare='increase_level')
m.model.move()
self.assertEqual(m.model.state, 'A')
self.assertEqual(m.model.level, 2)
def test_dynamic_model_state_attribute(self):
class Model:
def __init__(self):
self.status = None
self.state = 'some_value'
m = self.machine_cls(Model(), states=['A', 'B'], initial='A', model_attribute='status')
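        # with model_attribute='status' the machine tracks state on model.status
        # and leaves the unrelated model.state attribute untouched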
self.assertEqual(m.model.status, 'A')
self.assertEqual(m.model.state, 'some_value')
m.add_transition('move', 'A', 'B')
m.model.move()
self.assertEqual(m.model.status, 'B')
self.assertEqual(m.model.state, 'some_value')
def test_multiple_machines_per_model(self):
class Model:
def __init__(self):
self.car_state = None
self.driver_state = None
instance = Model()
machine_a = Machine(instance, states=['A', 'B'], initial='A', model_attribute='car_state')
machine_a.add_transition('accelerate_car', 'A', 'B')
machine_b = Machine(instance, states=['A', 'B'], initial='B', model_attribute='driver_state')
machine_b.add_transition('driving', 'B', 'A')
assert instance.car_state == 'A'
assert instance.driver_state == 'B'
assert instance.is_car_state_A()
assert instance.is_driver_state_B()
instance.accelerate_car()
assert instance.car_state == 'B'
assert instance.driver_state == 'B'
assert not instance.is_car_state_A()
assert instance.is_car_state_B()
instance.driving()
assert instance.driver_state == 'A'
assert instance.car_state == 'B'
assert instance.is_driver_state_A()
assert not instance.is_driver_state_B()
assert instance.to_driver_state_B()
assert instance.driver_state == 'B'
def test_initial_not_registered(self):
m1 = self.machine_cls(states=['A', 'B'], initial=self.machine_cls.state_cls('C'))
self.assertTrue(m1.is_C())
self.assertTrue('C' in m1.states)
def test_trigger_name_cannot_be_equal_to_model_attribute(self):
m = self.machine_cls(states=['A', 'B'])
with self.assertRaises(ValueError):
m.add_transition(m.model_attribute, "A", "B")
def test_new_state_in_enter_callback(self):
machine = self.machine_cls(states=['A', 'B'], initial='A')
def on_enter_B():
state = self.machine_cls.state_cls(name='C')
machine.add_state(state)
machine.to_C()
machine.on_enter_B(on_enter_B)
machine.to_B()
def test_on_exception_callback(self):
mock = MagicMock()
def on_exception(event_data):
self.assertIsInstance(event_data.error, (ValueError, MachineError))
mock()
m = self.machine_cls(states=['A', 'B'], initial='A', transitions=[['go', 'A', 'B']], send_event=True,
after_state_change=partial(self.stuff.this_raises, ValueError))
with self.assertRaises(ValueError):
m.to_B()
self.assertTrue(m.is_B())
with self.assertRaises(MachineError):
m.go()
m.on_exception.append(on_exception)
m.to_B()
m.go()
self.assertTrue(mock.called)
self.assertEqual(2, mock.call_count)
def test_may_transition(self):
states = ['A', 'B', 'C']
d = DummyModel()
m = Machine(model=d, states=states, initial='A', auto_transitions=False)
m.add_transition('walk', 'A', 'B')
m.add_transition('stop', 'B', 'C')
m.add_transition('wait', 'B', None)
assert d.may_walk()
assert d.may_trigger("walk")
assert not d.may_stop()
assert not d.may_trigger("stop")
assert not d.may_wait()
assert not d.may_trigger("wait")
d.walk()
assert not d.may_walk()
assert not d.may_trigger("walk")
assert d.may_stop()
assert d.may_trigger("stop")
assert d.may_wait()
assert d.may_trigger("wait")
def test_may_transition_for_autogenerated_triggers(self):
states = ['A', 'B', 'C']
m = Machine(states=states, initial='A')
assert m.may_to_A()
assert m.may_trigger("to_A")
m.to_A()
        assert m.may_to_B()
assert m.may_trigger("to_B")
m.to_B()
assert m.may_to_C()
assert m.may_trigger("to_C")
m.to_C()
def test_may_transition_with_conditions(self):
states = ['A', 'B', 'C']
d = DummyModel()
m = Machine(model=d, states=states, initial='A', auto_transitions=False)
m.add_transition('walk', 'A', 'B', conditions=[lambda: False])
m.add_transition('stop', 'B', 'C')
m.add_transition('run', 'A', 'C')
assert not d.may_walk()
assert not d.may_trigger("walk")
assert not d.may_stop()
assert not d.may_trigger("stop")
assert d.may_run()
assert d.may_trigger("run")
d.run()
assert not d.may_run()
assert not d.may_trigger("run")
def test_machine_may_transitions(self):
states = ['A', 'B', 'C']
m = self.machine_cls(states=states, initial='A', auto_transitions=False)
m.add_transition('walk', 'A', 'B', conditions=[lambda: False])
m.add_transition('stop', 'B', 'C')
m.add_transition('run', 'A', 'C')
m.add_transition('reset', 'C', 'A')
assert not m.may_walk()
assert not m.may_trigger("walk")
assert not m.may_stop()
assert not m.may_trigger("stop")
assert m.may_run()
assert m.may_trigger("run")
m.run()
assert not m.may_run()
assert not m.may_trigger("run")
assert not m.may_stop()
assert not m.may_trigger("stop")
assert not m.may_walk()
assert not m.may_trigger("walk")
def test_may_transition_with_invalid_state(self):
states = ['A', 'B', 'C']
d = DummyModel()
m = self.machine_cls(model=d, states=states, initial='A', auto_transitions=False)
m.add_transition('walk', 'A', 'UNKNOWN')
assert not d.may_walk()
assert not d.may_trigger("walk")
def test_may_transition_with_exception(self):
stuff = Stuff(machine_cls=self.machine_cls, extra_kwargs={"send_event": True})
stuff.machine.add_transition(trigger="raises", source="A", dest="B", prepare=partial(stuff.this_raises, RuntimeError("Prepare Exception")))
stuff.machine.add_transition(trigger="raises", source="B", dest="C", conditions=partial(stuff.this_raises, ValueError("Condition Exception")))
stuff.machine.add_transition(trigger="works", source="A", dest="B")
def process_exception(event_data):
assert event_data.error is not None
assert event_data.transition is not None
assert event_data.event.name == "raises"
assert event_data.machine == stuff.machine
with self.assertRaises(RuntimeError):
stuff.may_raises()
assert stuff.is_A()
assert stuff.may_works()
assert stuff.works()
with self.assertRaises(ValueError):
stuff.may_raises()
with self.assertRaises(ValueError):
stuff.may_trigger("raises")
assert stuff.is_B()
stuff.machine.on_exception.append(process_exception)
assert not stuff.may_raises()
assert not stuff.may_trigger("raises")
assert stuff.to_A()
assert not stuff.may_raises()
assert not stuff.may_trigger("raises")
def test_on_final(self):
final_mock = MagicMock()
machine = self.machine_cls(states=['A', {'name': 'B', 'final': True}], on_final=final_mock, initial='A')
self.assertFalse(final_mock.called)
machine.to_B()
self.assertTrue(final_mock.called)
machine.to_A()
self.assertEqual(1, final_mock.call_count)
machine.to_B()
self.assertEqual(2, final_mock.call_count)
def test_custom_transition(self):
class MyTransition(self.machine_cls.transition_cls): # type: ignore
def __init__(self, source, dest, conditions=None, unless=None, before=None,
after=None, prepare=None, my_int=None, my_none=None, my_str=None, my_dict=None):
super(MyTransition, self).__init__(source, dest, conditions, unless, before, after, prepare)
self.my_int = my_int
self.my_none = my_none
self.my_str = my_str
self.my_dict = my_dict
class MyMachine(self.machine_cls): # type: ignore
transition_cls = MyTransition
a_transition = {
"trigger": "go", "source": "B", "dest": "A",
"my_int": 42, "my_str": "foo", "my_dict": {"bar": "baz"}
}
transitions = [
["go", "A", "B"],
a_transition
]
m = MyMachine(states=["A", "B"], transitions=transitions, initial="A")
m.add_transition("reset", "*", "A",
my_int=23, my_str="foo2", my_none=None, my_dict={"baz": "bar"})
assert m.go()
trans = m.get_transitions("go", "B") # type: List[MyTransition]
assert len(trans) == 1
assert trans[0].my_str == a_transition["my_str"]
assert trans[0].my_int == a_transition["my_int"]
assert trans[0].my_dict == a_transition["my_dict"]
assert trans[0].my_none is None
trans = m.get_transitions("reset", "A")
assert len(trans) == 1
assert trans[0].my_str == "foo2"
assert trans[0].my_int == 23
assert trans[0].my_dict == {"baz": "bar"}
assert trans[0].my_none is None
|
TestTransitions
|
python
|
lazyprogrammer__machine_learning_examples
|
rl/approx_control.py
|
{
"start": 1576,
"end": 4287
}
|
class ____:
def __init__(self, grid):
# fit the featurizer to data
samples = gather_samples(grid)
# self.featurizer = Nystroem()
self.featurizer = RBFSampler()
self.featurizer.fit(samples)
dims = self.featurizer.n_components
# initialize linear model weights
self.w = np.zeros(dims)
def predict(self, s, a):
sa = merge_state_action(s, a)
x = self.featurizer.transform([sa])[0]
return x @ self.w
def predict_all_actions(self, s):
return [self.predict(s, a) for a in ALL_POSSIBLE_ACTIONS]
def grad(self, s, a):
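        # for a linear model q(s, a) = w @ x(s, a), the gradient w.r.t. w is the feature vector itself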
sa = merge_state_action(s, a)
x = self.featurizer.transform([sa])[0]
return x
if __name__ == '__main__':
# use the standard grid again (0 for every step) so that we can compare
# to iterative policy evaluation
# grid = standard_grid()
grid = negative_grid(step_cost=-0.1)
# print rewards
print("rewards:")
print_values(grid.rewards, grid)
model = Model(grid)
reward_per_episode = []
state_visit_count = {}
# repeat until convergence
n_episodes = 20000
for it in range(n_episodes):
if (it + 1) % 100 == 0:
print(it + 1)
s = grid.reset()
state_visit_count[s] = state_visit_count.get(s, 0) + 1
episode_reward = 0
while not grid.game_over():
a = epsilon_greedy(model, s)
r = grid.move(a)
s2 = grid.current_state()
state_visit_count[s2] = state_visit_count.get(s2, 0) + 1
# get the target
if grid.game_over():
target = r
else:
values = model.predict_all_actions(s2)
target = r + GAMMA * np.max(values)
# update the model
g = model.grad(s, a)
err = target - model.predict(s, a)
model.w += ALPHA * err * g
# accumulate reward
episode_reward += r
# update state
s = s2
reward_per_episode.append(episode_reward)
plt.plot(reward_per_episode)
plt.title("Reward per episode")
plt.show()
# obtain V* and pi*
V = {}
greedy_policy = {}
states = grid.all_states()
for s in states:
if s in grid.actions:
values = model.predict_all_actions(s)
V[s] = np.max(values)
greedy_policy[s] = ALL_POSSIBLE_ACTIONS[np.argmax(values)]
else:
# terminal state or state we can't otherwise get to
V[s] = 0
print("values:")
print_values(V, grid)
print("policy:")
print_policy(greedy_policy, grid)
print("state_visit_count:")
state_sample_count_arr = np.zeros((grid.rows, grid.cols))
for i in range(grid.rows):
for j in range(grid.cols):
if (i, j) in state_visit_count:
state_sample_count_arr[i,j] = state_visit_count[(i, j)]
df = pd.DataFrame(state_sample_count_arr)
print(df)
|
Model
|
python
|
Pylons__pyramid
|
tests/test_scripts/dummy.py
|
{
"start": 1816,
"end": 1998
}
|
class ____:
def __init__(self, **attrs):
self.__request_attrs__ = attrs
def view(context, request): # pragma: no cover
pass
@implementer(IMultiView)
|
DummyView
|
python
|
kamyu104__LeetCode-Solutions
|
Python/sum-of-perfect-square-ancestors.py
|
{
"start": 604,
"end": 2263
}
|
class ____(object):
def sumOfAncestors(self, n, edges, nums):
"""
:type n: int
:type edges: List[List[int]]
:type nums: List[int]
:rtype: int
"""
def prime_factors(x):
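            # returns the squarefree part of x (product of primes with odd exponent);
            # SPF is assumed to be a precomputed smallest-prime-factor sieve defined elsewhere in the file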
result = 1
while x != 1:
if result%SPF[x] == 0:
result //= SPF[x]
else:
result *= SPF[x]
x //= SPF[x]
return result
def iter_dfs():
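            # iterative DFS with an explicit stack: step 1 visits a node and counts matching
            # ancestors, step 2 walks its children, step 3 undoes the count when backtracking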
result = 0
stk = [(1, (0, -1))]
while stk:
step, args = stk.pop()
if step == 1:
u, p = args
x = prime_factors(nums[u])
result += cnt[x]
cnt[x] += 1
stk.append((3, (x,)))
stk.append((2, (u, p, 0)))
elif step == 2:
u, p, i = args
if i == len(adj[u]):
continue
stk.append((2, (u, p, i+1)))
v = adj[u][i]
if v == p:
continue
stk.append((1, (v, u)))
elif step == 3:
x = args[0]
cnt[x] -= 1
return result
adj = [[] for _ in xrange(n)]
for u, v in edges:
adj[u].append(v)
adj[v].append(u)
cnt = collections.defaultdict(int)
return iter_dfs()
# Time: precompute: O(r)
# runtime: O(nlogx)
# Space: O(r + n)
import collections
# number theory, dfs, freq table
|
Solution
|
python
|
apache__airflow
|
providers/microsoft/mssql/tests/unit/microsoft/mssql/hooks/test_mssql.py
|
{
"start": 3873,
"end": 10205
}
|
class ____:
def setup_method(self):
MsSqlHook._resolve_target_fields = True
def teardown_method(self, method):
MsSqlHook._resolve_target_fields = conf.getboolean(
"core", "dbapihook_resolve_target_fields", fallback=False
)
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_conn")
@mock.patch("airflow.providers.common.sql.hooks.sql.DbApiHook.get_connection")
def test_get_conn_should_return_connection(self, get_connection, mssql_get_conn, mssql_connections):
get_connection.return_value = mssql_connections["default"]
mssql_get_conn.return_value = mock.Mock()
hook = MsSqlHook()
conn = hook.get_conn()
assert mssql_get_conn.return_value == conn
mssql_get_conn.assert_called_once()
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_conn")
@mock.patch("airflow.providers.common.sql.hooks.sql.DbApiHook.get_connection")
def test_set_autocommit_should_invoke_autocommit(self, get_connection, mssql_get_conn, mssql_connections):
get_connection.return_value = mssql_connections["default"]
mssql_get_conn.return_value = mock.Mock()
autocommit_value = mock.Mock()
hook = MsSqlHook()
conn = hook.get_conn()
hook.set_autocommit(conn, autocommit_value)
mssql_get_conn.assert_called_once()
mssql_get_conn.return_value.autocommit.assert_called_once_with(autocommit_value)
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_conn")
@mock.patch("airflow.providers.common.sql.hooks.sql.DbApiHook.get_connection")
def test_get_autocommit_should_return_autocommit_state(
self, get_connection, mssql_get_conn, mssql_connections
):
get_connection.return_value = mssql_connections["default"]
mssql_get_conn.return_value = mock.Mock()
mssql_get_conn.return_value.autocommit_state = "autocommit_state"
hook = MsSqlHook()
conn = hook.get_conn()
mssql_get_conn.assert_called_once()
assert hook.get_autocommit(conn) == "autocommit_state"
@pytest.mark.parametrize(("conn_id", "exp_uri"), URI_TEST_CASES)
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_connection")
def test_get_uri_driver_rewrite(self, get_connection, mssql_connections, conn_id, exp_uri):
get_connection.return_value = mssql_connections[conn_id]
hook = MsSqlHook()
res_uri = hook.get_uri()
get_connection.assert_called()
assert res_uri == exp_uri
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_connection")
def test_sqlalchemy_scheme_is_default(self, get_connection, mssql_connections):
get_connection.return_value = mssql_connections["default"]
hook = MsSqlHook()
assert hook.sqlalchemy_scheme == hook.DEFAULT_SQLALCHEMY_SCHEME
def test_sqlalchemy_scheme_is_from_hook(self):
hook = MsSqlHook(sqlalchemy_scheme="mssql+mytestdriver")
assert hook.sqlalchemy_scheme == "mssql+mytestdriver"
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_connection")
def test_sqlalchemy_scheme_is_from_conn_extra(self, get_connection, mssql_connections):
get_connection.return_value = mssql_connections["alt_1"]
hook = MsSqlHook()
scheme = hook.sqlalchemy_scheme
get_connection.assert_called()
assert scheme == mssql_connections["alt_1"].extra_dejson["SQlalchemy_Scheme"]
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_connection")
def test_get_sqlalchemy_engine(self, get_connection, mssql_connections):
get_connection.return_value = mssql_connections["default"]
hook = MsSqlHook()
hook.get_sqlalchemy_engine()
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_connection")
@mock.patch(
"airflow.providers.microsoft.mssql.dialects.mssql.MsSqlDialect.get_target_fields",
get_target_fields,
)
@mock.patch(
"airflow.providers.microsoft.mssql.dialects.mssql.MsSqlDialect.get_primary_keys",
get_primary_keys,
)
def test_generate_insert_sql(self, get_connection):
PYMSSQL_CONN = Connection(
conn_type="mssql", host="ip", schema="share", login="username", password="password", port=8081
)
get_connection.return_value = PYMSSQL_CONN
hook = MsSqlHook(escape_word_format="[{}]")
sql = hook._generate_insert_sql(
table="YAMMER_GROUPS_ACTIVITY_DETAIL",
values=[
"2024-07-17",
"daa5b44c-80d6-4e22-85b5-a94e04cf7206",
"no-reply@microsoft.com",
"2024-07-17",
0,
0.0,
"MICROSOFT FABRIC (FREE)+MICROSOFT 365 E5",
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
"PT0S",
"PT0S",
"PT0S",
0,
0,
0,
"Yes",
0,
0,
"APACHE",
0.0,
0,
"Yes",
1,
"2024-07-17T00:00:00+00:00",
],
replace=True,
)
assert sql == load_file_from_resources(dirname(__file__), "..", "resources", "replace.sql")
def test_dialect_name(self):
hook = MsSqlHook()
assert hook.dialect_name == "mssql"
def test_dialect(self):
hook = MsSqlHook()
assert isinstance(hook.dialect, MsSqlDialect)
def test_reserved_words(self):
hook = MsSqlHook()
assert hook.reserved_words == sqlalchemy.dialects.mssql.base.RESERVED_WORDS
@pytest.mark.db_test
@mock.patch("airflow.providers.microsoft.mssql.hooks.mssql.MsSqlHook.get_connection")
def test_get_extra(self, get_connection, mssql_connections):
get_connection.return_value = mssql_connections["alt_2"]
hook = MsSqlHook()
assert hook.get_connection().extra
|
TestMsSqlHook
|
python
|
h5py__h5py
|
h5py/tests/test_dataset_getitem.py
|
{
"start": 11058,
"end": 15759
}
|
class ____(TestCase):
def setUp(self):
TestCase.setUp(self)
self.data = np.arange(13).astype('f')
self.dset = self.f.create_dataset('x', data=self.data)
def test_ndim(self):
""" Verify number of dimensions """
self.assertEqual(self.dset.ndim, 1)
def test_shape(self):
""" Verify shape """
self.assertEqual(self.dset.shape, (13,))
def test_ellipsis(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[...])
def test_tuple(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[()])
def test_slice_simple(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[0:4])
def test_slice_zerosize(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[4:4])
def test_slice_strides(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[1:7:3])
def test_slice_negindexes(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[-8:-2:3])
def test_slice_stop_less_than_start(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[7:5])
def test_slice_outofrange(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[100:400:3])
def test_slice_backwards(self):
""" we disallow negative steps """
with self.assertRaises(ValueError):
self.dset[::-1]
def test_slice_zerostride(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[::0])
def test_index_simple(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[3])
def test_index_neg(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[-4])
# FIXME: NumPy permits this... it adds a new axis in front
def test_index_none(self):
with self.assertRaises(TypeError):
self.dset[None]
def test_index_illegal(self):
""" Illegal slicing argument """
with self.assertRaises(TypeError):
self.dset[{}]
def test_index_outofrange(self):
with self.assertRaises(IndexError):
self.dset[100]
def test_indexlist_simple(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[[1,2,5]])
def test_indexlist_numpyarray(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[np.array([1, 2, 5])])
def test_indexlist_single_index_ellipsis(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[[0], ...])
def test_indexlist_numpyarray_single_index_ellipsis(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[np.array([0]), ...])
def test_indexlist_numpyarray_ellipsis(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[np.array([1, 2, 5]), ...])
def test_indexlist_empty(self):
self.assertNumpyBehavior(self.dset, self.data, np.s_[[]])
def test_indexlist_outofrange(self):
with self.assertRaises(IndexError):
self.dset[[100]]
def test_indexlist_nonmonotonic(self):
""" we require index list values to be strictly increasing """
with self.assertRaises(TypeError):
self.dset[[1,3,2]]
def test_indexlist_monotonic_negative(self):
# This should work: indices are logically increasing
self.assertNumpyBehavior(self.dset, self.data, np.s_[[0, 2, -2]])
with self.assertRaises(TypeError):
self.dset[[-2, -3]]
def test_indexlist_repeated(self):
""" we forbid repeated index values """
with self.assertRaises(TypeError):
self.dset[[1,1,2]]
def test_mask_true(self):
self.assertNumpyBehavior(
self.dset,
self.data,
np.s_[self.data > -100],
# Fast reader doesn't work with boolean masks
skip_fast_reader=True,
)
def test_mask_false(self):
self.assertNumpyBehavior(
self.dset,
self.data,
np.s_[self.data > 100],
# Fast reader doesn't work with boolean masks
skip_fast_reader=True,
)
def test_mask_partial(self):
self.assertNumpyBehavior(
self.dset,
self.data,
np.s_[self.data > 5],
# Fast reader doesn't work with boolean masks
skip_fast_reader=True,
)
def test_mask_wrongsize(self):
""" we require the boolean mask shape to match exactly """
with self.assertRaises(TypeError):
self.dset[np.ones((2,), dtype='bool')]
def test_fieldnames(self):
""" field name -> ValueError (no fields) """
with self.assertRaises(ValueError):
self.dset['field']
|
Test1DFloat
|
python
|
astropy__astropy
|
astropy/utils/data_info.py
|
{
"start": 17827,
"end": 26329
}
|
class ____(DataInfo):
"""Base info class for anything that can be a column in an astropy Table.
There are at least two classes that inherit from this:
ColumnInfo: for native astropy Column / MaskedColumn objects
MixinInfo: for mixin column objects
Note that this class is defined here so that mixins can use it
without importing the table package.
"""
attr_names = DataInfo.attr_names | {"parent_table", "indices"}
_attrs_no_copy = {"parent_table", "indices"}
# Context for serialization. This can be set temporarily via
# ``serialize_context_as(context)`` context manager to allow downstream
# code to understand the context in which a column is being serialized.
# Typical values are 'fits', 'hdf5', 'parquet', 'ecsv', 'yaml'. Objects
# like Time or SkyCoord will have different default serialization
# representations depending on context.
_serialize_context = None
__slots__ = ["_copy_indices", "_format_funcs"]
@property
def parent_table(self):
value = self._attrs.get("parent_table")
if callable(value):
value = value()
return value
@parent_table.setter
def parent_table(self, parent_table):
if parent_table is None:
self._attrs.pop("parent_table", None)
else:
parent_table = weakref.ref(parent_table)
self._attrs["parent_table"] = parent_table
def __init__(self, bound=False):
super().__init__(bound=bound)
# If bound to a data object instance then add a _format_funcs dict
# for caching functions for print formatting.
if bound:
self._format_funcs = {}
def __set__(self, instance, value):
# For Table columns do not set `info` when the instance is a scalar.
try:
if not instance.shape:
return
except AttributeError:
pass
super().__set__(instance, value)
def iter_str_vals(self):
"""
This is a mixin-safe version of Column.iter_str_vals.
"""
col = self._parent
if self.parent_table is None:
from astropy.table.column import FORMATTER as formatter
else:
formatter = self.parent_table.formatter
_pformat_col_iter = formatter._pformat_col_iter
yield from _pformat_col_iter(col, -1, False, False, {})
@property
def indices(self):
# Implementation note: the auto-generation as an InfoAttribute cannot
# be used here, since on access, one should not just return the
# default (empty list is this case), but set _attrs['indices'] so that
# if the list is appended to, it is registered here.
return self._attrs.setdefault("indices", [])
@indices.setter
def indices(self, indices):
self._attrs["indices"] = indices
def adjust_indices(self, index, value, col_len):
"""
Adjust info indices after column modification.
Parameters
----------
index : slice, int, list, or ndarray
Element(s) of column to modify. This parameter can
be a single row number, a list of row numbers, an
ndarray of row numbers, a boolean ndarray (a mask),
or a column slice.
value : int, list, or ndarray
New value(s) to insert
col_len : int
Length of the column
"""
if not self.indices:
return
if isinstance(index, slice):
# run through each key in slice
t = index.indices(col_len)
keys = list(range(*t))
elif isinstance(index, np.ndarray) and index.dtype.kind == "b":
# boolean mask
keys = np.where(index)[0]
else: # single int
keys = [index]
value = np.atleast_1d(value) # turn array(x) into array([x])
if value.size == 1:
# repeat single value
value = list(value) * len(keys)
for key, val in zip(keys, value):
for col_index in self.indices:
col_index.replace(key, self.name, val)
def slice_indices(self, col_slice, item, col_len):
"""
Given a sliced object, modify its indices
to correctly represent the slice.
Parameters
----------
col_slice : `~astropy.table.Column` or mixin
Sliced object. If not a column, it must be a valid mixin, see
https://docs.astropy.org/en/stable/table/mixin_columns.html
item : slice, list, or ndarray
Slice used to create col_slice
col_len : int
Length of original object
"""
from astropy.table.sorted_array import SortedArray
if not getattr(self, "_copy_indices", True):
# Necessary because MaskedArray will perform a shallow copy
col_slice.info.indices = []
return col_slice
elif isinstance(item, slice):
col_slice.info.indices = [x[item] for x in self.indices]
elif self.indices:
if isinstance(item, np.ndarray) and item.dtype.kind == "b":
# boolean mask
item = np.where(item)[0]
# Empirical testing suggests that recreating a BST/RBT index is
# more effective than relabelling when less than ~60% of
# the total number of rows are involved, and is in general
# more effective for SortedArray.
small = len(item) <= 0.6 * col_len
col_slice.info.indices = []
for index in self.indices:
if small or isinstance(index, SortedArray):
new_index = index.get_slice(col_slice, item)
else:
new_index = deepcopy(index)
new_index.replace_rows(item)
col_slice.info.indices.append(new_index)
return col_slice
@staticmethod
def merge_cols_attributes(cols, metadata_conflicts, name, attrs):
"""
Utility method to merge and validate the attributes ``attrs`` for the
input table columns ``cols``.
Note that ``dtype`` and ``shape`` attributes are handled specially.
These should not be passed in ``attrs`` but will always be in the
returned dict of merged attributes.
Parameters
----------
cols : list
List of input Table column objects
metadata_conflicts : str ('warn'|'error'|'silent')
How to handle metadata conflicts
name : str or None
Output column name
attrs : list
List of attribute names to be merged
Returns
-------
attrs : dict
Of merged attributes.
"""
from astropy.table.np_utils import TableMergeError
def warn_str_func(key, left, right):
out = (
f"In merged column '{name}' the '{key}' attribute does not match "
f"({left} != {right}). Using {right} for merged output"
)
return out
def getattrs(col):
return {
attr: getattr(col.info, attr)
for attr in attrs
if getattr(col.info, attr, None) is not None
}
out = getattrs(cols[0])
for col in cols[1:]:
out = metadata.merge(
out,
getattrs(col),
metadata_conflicts=metadata_conflicts,
warn_str_func=warn_str_func,
)
# Output dtype is the superset of all dtypes in in_cols
out["dtype"] = metadata.common_dtype(cols)
# Make sure all input shapes are the same
uniq_shapes = {col.shape[1:] for col in cols}
if len(uniq_shapes) != 1:
raise TableMergeError("columns have different shapes")
out["shape"] = uniq_shapes.pop()
# "Merged" output name is the supplied name
if name is not None:
out["name"] = str(name)
return out
def get_sortable_arrays(self):
"""
Return a list of arrays which can be lexically sorted to represent
the order of the parent column.
The base method raises NotImplementedError and must be overridden.
Returns
-------
arrays : list of ndarray
"""
raise NotImplementedError(f"column {self.name} is not sortable")
|
BaseColumnInfo
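The index-adjustment logic above funnels three different `index` forms (slice, boolean mask, single int) into a flat list of row numbers. A minimal standalone sketch of just that normalization step, using only numpy and the stdlib (the helper name is illustrative):
import numpy as np

def normalize_keys(index, col_len):
    # slice -> explicit row numbers via slice.indices()
    if isinstance(index, slice):
        return list(range(*index.indices(col_len)))
    # boolean mask -> positions of the True entries
    if isinstance(index, np.ndarray) and index.dtype.kind == "b":
        return np.where(index)[0].tolist()
    # anything else is treated as a single row number
    return [index]

print(normalize_keys(slice(1, 7, 2), col_len=10))        # [1, 3, 5]
print(normalize_keys(np.array([True, False, True]), 3))  # [0, 2]
print(normalize_keys(4, 10))                             # [4]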
|
python
|
microsoft__pyright
|
packages/pyright-internal/src/tests/samples/memberAccess4.py
|
{
"start": 863,
"end": 996
}
|
class ____(Mixin2):
pass
A2.do_stuff()
# This should generate an error because B2 doesn't
# match the protocol.
B2.do_stuff()
|
B2
|
python
|
anthropics__anthropic-sdk-python
|
src/anthropic/types/beta/beta_server_tool_caller.py
|
{
"start": 197,
"end": 299
}
|
class ____(BaseModel):
tool_id: str
type: Literal["code_execution_20250825"]
|
BetaServerToolCaller
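A small sketch of how such a pydantic model behaves at runtime; the class name here is illustrative and pydantic v2 is assumed to be installed:
from typing import Literal
from pydantic import BaseModel, ValidationError

class ServerToolCallerSketch(BaseModel):
    tool_id: str
    type: Literal["code_execution_20250825"]

print(ServerToolCallerSketch(tool_id="srvtool_123", type="code_execution_20250825"))
try:
    # any other value for `type` fails validation
    ServerToolCallerSketch(tool_id="srvtool_123", type="something_else")
except ValidationError as exc:
    print(exc.errors()[0]["type"])  # e.g. 'literal_error'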
|
python
|
langchain-ai__langchain
|
libs/core/langchain_core/runnables/graph.py
|
{
"start": 2271,
"end": 3107
}
|
class ____(NamedTuple):
"""Node in a graph."""
id: str
"""The unique identifier of the node."""
name: str
"""The name of the node."""
data: type[BaseModel] | RunnableType | None
"""The data of the node."""
metadata: dict[str, Any] | None
"""Optional metadata for the node. """
def copy(
self,
*,
id: str | None = None,
name: str | None = None,
) -> Node:
"""Return a copy of the node with optional new id and name.
Args:
id: The new node id.
name: The new node name.
Returns:
A copy of the node with the new id and name.
"""
return Node(
id=id or self.id,
name=name or self.name,
data=self.data,
metadata=self.metadata,
)
|
Node
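The `copy` method above is the usual copy-with-overrides pattern on a NamedTuple. A simplified, self-contained sketch (fields trimmed, class name illustrative):
from typing import NamedTuple, Optional

class NodeSketch(NamedTuple):
    id: str
    name: str

    def copy(self, *, id: Optional[str] = None, name: Optional[str] = None) -> "NodeSketch":
        # `or` falls back to the current value when no override is supplied
        return NodeSketch(id=id or self.id, name=name or self.name)

n = NodeSketch(id="1", name="start")
print(n.copy(name="renamed"))  # NodeSketch(id='1', name='renamed')
NamedTuple already offers `_replace` for this; an explicit `copy` keeps the override points part of the documented public API.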
|
python
|
PrefectHQ__prefect
|
src/prefect/server/database/orm_models.py
|
{
"start": 11117,
"end": 11697
}
|
class ____(Base):
key: Mapped[str]
latest_id: Mapped[uuid.UUID]
task_run_id: Mapped[Optional[uuid.UUID]]
flow_run_id: Mapped[Optional[uuid.UUID]]
type: Mapped[Optional[str]]
data: Mapped[Optional[Any]] = mapped_column(sa_JSON)
description: Mapped[Optional[str]]
metadata_: Mapped[Optional[dict[str, str]]] = mapped_column(sa_JSON)
__table_args__: Any = (
sa.UniqueConstraint("key"),
sa.Index(
"ix_artifact_collection__key_latest_id",
"key",
"latest_id",
),
)
|
ArtifactCollection
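A standalone sketch of the same SQLAlchemy 2.0 mapping style (unique constraint plus composite index). Prefect's real `Base`, with its UUID primary key and timestamp columns, is not shown above, so a minimal base and an integer primary key are assumed here:
import uuid
from typing import Any, Optional

import sqlalchemy as sa
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class ArtifactCollectionSketch(Base):
    __tablename__ = "artifact_collection_sketch"
    id: Mapped[int] = mapped_column(primary_key=True)  # assumed; the real model inherits its id column
    key: Mapped[str]
    latest_id: Mapped[uuid.UUID] = mapped_column(sa.Uuid)
    data: Mapped[Optional[Any]] = mapped_column(sa.JSON)
    __table_args__: Any = (
        sa.UniqueConstraint("key"),
        sa.Index("ix_artifact_collection_sketch__key_latest_id", "key", "latest_id"),
    )

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)  # emits CREATE TABLE plus the constraint/index DDL
print(sa.inspect(engine).get_indexes("artifact_collection_sketch"))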
|
python
|
langchain-ai__langchain
|
libs/core/langchain_core/messages/content.py
|
{
"start": 16500,
"end": 17784
}
|
class ____(TypedDict):
"""Video data.
!!! note "Factory function"
`create_video_block` may also be used as a factory to create a
`VideoContentBlock`. Benefits include:
* Automatic ID generation (when not provided)
* Required arguments strictly validated at creation time
"""
type: Literal["video"]
"""Type of the content block. Used for discrimination."""
id: NotRequired[str]
"""Content block identifier.
Either:
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`)
"""
file_id: NotRequired[str]
"""ID of the video file, e.g., from a file storage system."""
mime_type: NotRequired[str]
"""MIME type of the video. Required for base64.
[Examples from IANA](https://www.iana.org/assignments/media-types/media-types.xhtml#video)
"""
index: NotRequired[int | str]
"""Index of block in aggregate response. Used during streaming."""
url: NotRequired[str]
"""URL of the video."""
base64: NotRequired[str]
"""Data as a base64 string."""
extras: NotRequired[dict[str, Any]]
"""Provider-specific metadata. This shouldn't be used for the video data itself."""
|
VideoContentBlock
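TypedDicts like the one above are plain dicts at runtime, and `NotRequired` keys may simply be omitted. A trimmed sketch (assumes Python 3.11+ for `typing.NotRequired`; use `typing_extensions` on older versions):
from typing import Literal, NotRequired, TypedDict

class VideoBlockSketch(TypedDict):
    type: Literal["video"]
    url: NotRequired[str]
    base64: NotRequired[str]

# only the required discriminator key plus whichever optional keys apply
block: VideoBlockSketch = {"type": "video", "url": "https://example.com/clip.mp4"}
print(block["type"], block.get("base64", "<no inline data>"))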
|
python
|
keras-team__keras
|
keras/src/models/model_test.py
|
{
"start": 5449,
"end": 44649
}
|
class ____(testing.TestCase):
def test_functional_rerouting(self):
model = _get_model()
self.assertIsInstance(model, Functional)
def test_json_serialization(self):
model = _get_model()
json_string = model.to_json()
new_model = model_from_json(json_string)
self.assertEqual(json_string, new_model.to_json())
def test_tuple_input_model_subclass(self):
# https://github.com/keras-team/keras/issues/324
class MultiInputModel(Model):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.dense1 = layers.Dense(4)
def call(self, inputs):
a, b = inputs
r = self.dense1(a)
return layers.concatenate([r, b])
model = MultiInputModel()
x1 = np.random.rand(3, 3)
x2 = np.random.rand(3, 2)
out = model((x1, x2))
self.assertEqual(out.shape, (3, 6))
def test_reviving_functional_from_config_custom_layer(self):
class CustomDense(layers.Layer):
def __init__(self, units, **kwargs):
super().__init__(**kwargs)
self.dense = layers.Dense(units)
def call(self, x):
return self.dense(x)
inputs = layers.Input((4,))
outputs = CustomDense(10)(inputs)
model = Model(inputs, outputs)
config = model.get_config()
new_model = Model.from_config(
config, custom_objects={"CustomDense": CustomDense}
)
self.assertIsInstance(new_model, Functional)
def test_reviving_functional_from_config_custom_model(self):
class CustomModel(Model):
def __init__(self, *args, param=1, **kwargs):
super().__init__(*args, **kwargs)
self.param = param
def get_config(self):
base_config = super().get_config()
config = {"param": self.param}
return base_config | config
inputs = layers.Input((3,))
outputs = layers.Dense(5)(inputs)
model = CustomModel(inputs=inputs, outputs=outputs, param=3)
new_model = CustomModel.from_config(model.get_config())
self.assertEqual(new_model.param, 3)
@parameterized.named_parameters(
("single_output_1", _get_model_single_output),
("single_output_2", _get_model_single_output),
("single_output_3", _get_model_single_output),
("single_output_4", _get_model_single_output),
("single_list_output_1", _get_model_single_output_list),
("single_list_output_2", _get_model_single_output_list),
("single_list_output_3", _get_model_single_output_list),
("single_list_output_4", _get_model_single_output_list),
)
def test_functional_pickling(self, model_fn):
model = model_fn()
self.assertIsInstance(model, Functional)
model.compile()
x = np.random.rand(8, 3)
reloaded_pickle = pickle.loads(pickle.dumps(model))
pred_reloaded = reloaded_pickle.predict(x)
pred = model.predict(x)
self.assertAllClose(np.array(pred_reloaded), np.array(pred))
@parameterized.named_parameters(
("single_output_1", _get_model_single_output, None),
("single_output_2", _get_model_single_output, "list"),
("single_output_3", _get_model_single_output, "dict"),
("single_output_4", _get_model_single_output, "dict_list"),
("single_list_output_1", _get_model_single_output_list, None),
("single_list_output_2", _get_model_single_output_list, "list"),
("single_list_output_3", _get_model_single_output_list, "dict"),
("single_list_output_4", _get_model_single_output_list, "dict_list"),
("single_dict_output_1", _get_model_single_output_dict, None),
("single_dict_output_2", _get_model_single_output_dict, "list"),
("single_dict_output_3", _get_model_single_output_dict, "dict"),
("single_dict_output_4", _get_model_single_output_dict, "dict_list"),
)
def test_functional_single_output(self, model_fn, loss_type):
model = model_fn()
self.assertIsInstance(model, Functional)
loss = "mean_squared_error"
if loss_type == "list":
loss = [loss]
elif loss_type == "dict":
loss = {"output_a": loss}
elif loss_type == "dict_list":
loss = {"output_a": [loss]}
model.compile(
optimizer="sgd",
loss=loss,
metrics={
"output_a": ["mean_squared_error", "mean_absolute_error"],
},
weighted_metrics={
"output_a": "mean_squared_error",
},
)
# Fit the model to make sure compile_metrics are built
x = np.random.rand(8, 3)
y = np.random.rand(8, 1)
hist = model.fit(
x,
y,
batch_size=2,
epochs=1,
verbose=0,
)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"mean_absolute_error",
"mean_squared_error",
"weighted_mean_squared_error",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_list_outputs_list_losses(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss=["mean_squared_error", "binary_crossentropy"],
metrics=[
"mean_squared_error",
["mean_squared_error", "accuracy"],
],
loss_weights=[0.1, 2],
)
# Fit the model to make sure compile_metrics are built
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"output_a_loss",
"output_a_mean_squared_error",
"output_b_accuracy",
"output_b_loss",
"output_b_mean_squared_error",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_list_outputs_list_losses_abbr(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss=["mse", "bce"],
metrics=[
["bce", "mse", "mae"],
["mse", "acc"],
],
loss_weights=[0.1, 2],
)
# Fit the model to make sure compile_metrics are built
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"output_a_loss",
"output_a_bce",
"output_a_mae",
"output_a_mse",
"output_b_acc",
"output_b_loss",
"output_b_mse",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_list_outputs_nested_list_losses(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss=["mean_squared_error", ["binary_crossentropy"]],
metrics=[
"mean_squared_error",
["mean_squared_error", "accuracy"],
],
loss_weights=[0.1, 2],
)
# Fit the model to make sure compile_metrics are built
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"output_a_loss",
"output_a_mean_squared_error",
"output_b_accuracy",
"output_b_loss",
"output_b_mean_squared_error",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_dict_outputs_dict_losses(self):
model = _get_model_multi_outputs_dict()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_b": ["binary_crossentropy"],
},
metrics={
"output_a": ["mean_squared_error"],
"output_b": ["mean_squared_error", "accuracy"],
},
weighted_metrics={
"output_a": ["mean_squared_error"],
"output_b": ["mean_squared_error", "accuracy"],
},
)
# Check dict outputs.
outputs = model.predict(x)
self.assertIsInstance(outputs, dict)
self.assertEqual(outputs["output_a"].shape, (8, 1))
self.assertEqual(outputs["output_b"].shape, (8, 1))
# Fit the model to make sure compile_metrics are built
hist = model.fit(
x,
{"output_a": y1, "output_b": y2},
batch_size=2,
epochs=1,
verbose=0,
)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"output_a_loss",
"output_a_mean_squared_error",
"output_a_weighted_mean_squared_error",
"output_b_accuracy",
"output_b_loss",
"output_b_mean_squared_error",
"output_b_weighted_accuracy",
"output_b_weighted_mean_squared_error",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_dict_outputs_dict_losses_with_undefined_loss(self):
model = _get_model_multi_outputs_dict()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_b": ["binary_crossentropy"],
},
metrics={
"output_b": ["mean_squared_error", "accuracy"],
},
weighted_metrics={
"output_b": ["mean_squared_error", "accuracy"],
},
)
# Check dict outputs.
outputs = model.predict(x)
self.assertIsInstance(outputs, dict)
self.assertEqual(outputs["output_a"].shape, (8, 1))
self.assertEqual(outputs["output_b"].shape, (8, 1))
# Fit the model to make sure compile_metrics are built
hist = model.fit(
x,
{"output_a": y1, "output_b": y2},
batch_size=2,
epochs=1,
verbose=0,
)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"output_b_accuracy",
"output_b_mean_squared_error",
"output_b_weighted_accuracy",
"output_b_weighted_mean_squared_error",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_list_outputs_dict_losses_metrics(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_b": "binary_crossentropy",
},
metrics={
"output_a": ["mean_squared_error"],
"output_b": ["mean_squared_error", "accuracy"],
},
weighted_metrics={
"output_a": ["mean_squared_error"],
"output_b": ["mean_squared_error", "accuracy"],
},
)
# Check list outputs.
outputs = model.predict(x)
self.assertIsInstance(outputs, list)
self.assertEqual(outputs[0].shape, (8, 1))
self.assertEqual(outputs[1].shape, (8, 1))
# Fit the model to make sure compile_metrics are built
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"output_a_loss",
"output_a_mean_squared_error",
"output_a_weighted_mean_squared_error",
"output_b_accuracy",
"output_b_loss",
"output_b_mean_squared_error",
"output_b_weighted_accuracy",
"output_b_weighted_mean_squared_error",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_list_outputs_dict_losses_metrics_uniq_weighted(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_b": "binary_crossentropy",
},
metrics={
"output_a": ["mean_squared_error"],
"output_b": ["mean_squared_error"],
},
weighted_metrics={
"output_a": ["mean_squared_error"],
"output_b": ["accuracy"],
},
)
# Fit the model to make sure compile_metrics are built
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
# `output_b_accuracy` doesn't have `weighted_` in metric name.
# When a metric is only in weighted metrics, it skips `weighted_`
# prefix. This behavior matches `tf.keras`.
ref_keys = sorted(
[
"loss",
"output_a_loss",
"output_a_mean_squared_error",
"output_a_weighted_mean_squared_error",
"output_b_accuracy",
"output_b_loss",
"output_b_mean_squared_error",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_list_outputs_dict_losses_partial_metrics(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_b": "binary_crossentropy",
},
metrics={
"output_b": ["mean_squared_error", "accuracy"],
},
)
# Fit the model to make sure compile_metrics are built
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"output_a_loss",
"output_b_accuracy",
"output_b_loss",
"output_b_mean_squared_error",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_dict_outputs_with_single_tensor(self):
model = _get_model_multi_outputs_dict_with_single_tensor()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
# `model` has 2 outputs, but there is actually only 1 output tensor.
self.assertLen(model.outputs, 2)
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_b": "binary_crossentropy",
},
)
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(["loss", "output_a_loss", "output_b_loss"])
self.assertListEqual(hist_keys, ref_keys)
def test_functional_list_outputs_with_custom_compute_loss(self):
model = _get_model_with_custom_compute_loss()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
# `model` has 1 output, but in `compute_loss` it is separated into 2.
self.assertLen(model.outputs, 1)
model.compile(
optimizer="sgd", loss=["mean_squared_error", "binary_crossentropy"]
)
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
["binary_crossentropy_loss", "loss", "mean_squared_error_loss"]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_list_outputs_dict_losses_invalid_keys(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_c": "binary_crossentropy",
},
)
# Fit the model to make sure compile_metrics are built
with self.assertRaisesRegex(
ValueError,
"Expected keys",
):
model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
def test_functional_list_outputs_dict_losses_no_output_names(self):
model = _get_model_multi_outputs_list_no_output_names()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={"output_a": "mean_squared_error"},
)
# Fit the model to make sure compile_metrics are built
with self.assertRaisesRegex(
ValueError,
"Expected keys",
):
model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
def test_functional_list_outputs_dict_metrics_invalid_keys(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_b": "binary_crossentropy",
},
metrics={
"output_c": ["mean_squared_error", "accuracy"],
},
)
# Fit the model to make sure compile_metrics are built
with self.assertRaisesRegex(
ValueError,
"In the dict argument `metrics`, "
"key 'output_c' does not correspond to any model output",
):
model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
def test_functional_dict_outputs_dict_losses_invalid_keys(self):
model = _get_model_multi_outputs_dict()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_c": "binary_crossentropy",
},
)
# Fit the model to make sure compile_metrics are built
with self.assertRaisesRegex(
KeyError,
"in the `loss` argument, can't be found "
"in either the model's output",
):
model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
def test_functional_dict_outputs_dict_metrics_invalid_keys(self):
model = _get_model_multi_outputs_dict()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss={
"output_a": "mean_squared_error",
"output_b": "binary_crossentropy",
},
metrics={
"output_c": ["mean_squared_error", "accuracy"],
},
)
# Fit the model to make sure compile_metrics are built
with self.assertRaisesRegex(
ValueError,
"In the dict argument `metrics`, "
"key 'output_c' does not correspond to any model output",
):
model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
def test_functional_list_outputs_invalid_nested_list_losses(self):
model = _get_model_multi_outputs_list()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
model.compile(
optimizer="sgd",
loss=[
"mean_squared_error",
["mean_squared_error", "binary_crossentropy"],
],
)
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(["loss", "output_a_loss", "output_b_loss"])
self.assertListEqual(hist_keys, ref_keys)
@parameterized.named_parameters(
("int8", "int8"),
("float8", "float8"),
)
def test_quantize(self, mode):
model = _get_model()
x1 = np.random.rand(2, 3)
x2 = np.random.rand(2, 3)
model.quantize(mode)
_ = model((x1, x2))
for layer in model._flatten_layers():
if isinstance(layer, (layers.Dense, layers.EinsumDense)):
self.assertEqual(
layer.dtype_policy.name, f"{mode}_from_float32"
)
self.assertEqual(layer.dtype_policy.quantization_mode, mode)
if mode == "int8":
self.assertLen(model.variables, 6)
if backend.backend() == "torch":
self.assertLen(list(model.named_parameters()), 6)
elif mode == "float8":
self.assertLen(model.variables, 16)
if backend.backend() == "torch":
self.assertLen(list(model.named_parameters()), 16)
@parameterized.named_parameters(
("int8", "int8"),
("float8", "float8"),
)
def test_quantize_unbuilt(self, mode):
class MyModel(Model):
def __init__(self):
super().__init__()
self.dense1 = layers.Dense(32, activation="relu")
self.dense2 = layers.Dense(5, activation="softmax")
self.dropout = layers.Dropout(0.5)
def call(self, inputs, training=False):
x = self.dense1(inputs)
x = self.dropout(x, training=training)
return self.dense2(x)
model = MyModel()
with self.assertRaisesRegex(
ValueError, "Cannot quantize a layer that isn't yet built."
):
model.quantize(mode)
x = np.random.rand(2, 3)
_ = model(x)
model.quantize(mode)
def test_quantize_invalid_args(self):
model = _get_model()
with self.assertRaisesRegex(
ValueError, "Invalid quantization mode. Expected one of"
):
model.quantize("abc")
with self.assertRaisesRegex(
ValueError, "Unrecognized keyword arguments"
):
model.quantize("int8", unrecognized_kwargs=None)
with self.assertRaisesRegex(ValueError, "Invalid quantization mode"):
model.quantize("int7")
@parameterized.named_parameters(
("int8", "int8"),
("float8", "float8"),
)
def test_quantize_nested_model(self, mode):
class NestedLayer(layers.Layer):
def __init__(self, units):
super().__init__()
self.dense = layers.Dense(units)
def call(self, x):
x = self.dense(x)
return x
class DoubleNestedLayer(layers.Layer):
def __init__(self, units):
super().__init__()
self.nested_dense1 = NestedLayer(units)
self.nested_dense2 = NestedLayer(units)
self.dense = layers.Dense(units)
def call(self, x):
x = self.nested_dense1(x)
x = self.nested_dense2(x)
x = self.dense(x)
return x
inputs = layers.Input([3])
outputs = DoubleNestedLayer(8)(inputs)
model = Model(inputs, outputs)
model.quantize(mode)
if mode == "int8":
kernel_count = 0
for weight in model.weights:
if weight.name == "kernel":
kernel_count += 1
self.assertEqual(
backend.standardize_dtype(weight.dtype), "int8"
)
self.assertEqual(kernel_count, 3)
if mode == "float8":
# kernel + bias + scale * 3 + amax_history * 3 == 8
self.assertEqual(len(model.weights), 3 * 8)
def test_get_state_tree(self):
model = _get_model_single_output()
model.compile(loss="mse", optimizer="adam")
state_tree = model.get_state_tree()
self.assertAllClose(
state_tree["trainable_variables"]["output_a"]["kernel"],
_get_variable_value_by_path(
model.trainable_variables, "output_a/kernel"
),
)
self.assertAllClose(
state_tree["trainable_variables"]["output_a"]["bias"],
_get_variable_value_by_path(
model.trainable_variables, "output_a/bias"
),
)
self.assertEqual(
state_tree["non_trainable_variables"],
{},
)
self.assertEqual(
state_tree["metrics_variables"]["loss"]["count"],
_get_variable_value_by_path(model.metrics_variables, "loss/count"),
)
self.assertEqual(
state_tree["metrics_variables"]["loss"]["total"],
_get_variable_value_by_path(model.metrics_variables, "loss/total"),
)
self.assertEqual(
state_tree["optimizer_variables"]["adam"]["iteration"],
_get_variable_value_by_path(
model.optimizer.variables, "adam/iteration"
),
)
self.assertEqual(
state_tree["optimizer_variables"]["adam"]["learning_rate"],
_get_variable_value_by_path(
model.optimizer.variables, "adam/learning_rate"
),
)
# Test with numpy
state_tree = model.get_state_tree(value_format="numpy_array")
self.assertIsInstance(
state_tree["trainable_variables"]["output_a"]["kernel"], np.ndarray
)
def test_set_state_tree(self):
variables = {
"optimizer_variables": {
"adam": {
"iteration": 0,
"learning_rate": 0.00001,
}
},
"trainable_variables": {
"output_a": {
"bias": [0.5],
"kernel": [[0.6], [0.7], [1.8]],
}
},
}
model = _get_model_single_output()
model.compile(optimizer="adam")
model.set_state_tree(variables)
self.assertEqual(
variables["optimizer_variables"]["adam"]["iteration"],
_get_variable_value_by_path(
model.optimizer.variables, "adam/iteration"
),
)
self.assertEqual(
variables["optimizer_variables"]["adam"]["learning_rate"],
_get_variable_value_by_path(
model.optimizer.variables, "adam/learning_rate"
),
)
self.assertAllClose(
variables["trainable_variables"]["output_a"]["bias"],
_get_variable_value_by_path(
model.trainable_variables, "output_a/bias"
),
)
self.assertAllClose(
variables["trainable_variables"]["output_a"]["kernel"],
_get_variable_value_by_path(
model.trainable_variables, "output_a/kernel"
),
)
def test_get_state_tree_with_duplicate_path(self):
model = _get_model_with_duplicate_variable_path()
with self.assertRaisesRegex(
ValueError,
"The following variable path is found twice in the model",
):
model.get_state_tree()
def test_layers_setter(self):
model = Model()
with self.assertRaisesRegex(
AttributeError, "`Model.layers` attribute is reserved"
):
model.layers = [layers.Dense(4)]
def get_struct_loss(self, structure):
def loss_fn(y_true, y_pred):
tree.assert_same_structure(structure, y_true)
tree.assert_same_structure(structure, y_pred)
tree.map_structure(
lambda spec, tensor: self.assertEqual(spec.ndim, tensor.ndim),
structure,
y_true,
)
tree.map_structure(
lambda spec, tensor: self.assertEqual(spec.ndim, tensor.ndim),
structure,
y_pred,
)
flat_y_pred = tree.flatten(y_pred)
flat_y_true = tree.flatten(y_true)
diff = 0
for y_p, y_t in zip(flat_y_pred, flat_y_true):
diff += losses.mean_absolute_error(y_t, y_p)
return diff
return loss_fn
@parameterized.product(
_type=[tuple, list], other_type=[list, tuple], weighted=[False, True]
)
def test_functional_struct_outputs_struct_losses(
self, _type, other_type, weighted
):
model = _get_model_multi_outputs_struct_list_like(_type)
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.rand(8, 1)
y = _type([y1, y2])
loss = other_type(
[
self.get_struct_loss(model.output),
_type(
[
self.get_struct_loss(model.output[0]),
self.get_struct_loss(model.output[1]),
]
),
]
)
if weighted:
loss_weights = tree.map_structure(lambda _: np.random.rand(), loss)
else:
loss_weights = None
model.compile(
optimizer="sgd",
loss=loss,
loss_weights=loss_weights,
)
if _type is other_type:
with self.assertRaisesRegex(
ValueError, f"[Ee]xpected.*{_type.__name__}"
):
model.fit(x, y, batch_size=2, epochs=1, verbose=0)
else:
# Check dict outputs.
outputs = model.predict(x)
self.assertIsInstance(outputs, _type)
# Fit the model to make sure compile_metrics are built
hist = model.fit(
x,
y,
batch_size=2,
epochs=1,
verbose=0,
)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"y1_loss",
"y2_loss",
"y1_y2_loss",
]
)
self.assertListEqual(hist_keys, ref_keys)
@parameterized.named_parameters(("weighted", True), ("not_weighted", False))
def test_functional_struct_outputs_dict_struct_losses(self, weighted):
model = _get_model_multi_outputs_struct_dict()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.rand(8, 1)
y = {"a": y1, "b": y2}
loss = [
self.get_struct_loss(model.output),
{
"a": self.get_struct_loss(model.output["a"]),
"b": self.get_struct_loss(model.output["a"]),
},
]
if weighted:
loss_weights = tree.map_structure(lambda _: np.random.rand(), loss)
else:
loss_weights = None
model.compile(
optimizer="sgd",
loss=loss,
loss_weights=loss_weights,
)
# Check dict outputs.
outputs = model.predict(x)
self.assertIsInstance(outputs, dict)
# Fit the model to make sure compile_metrics are built
hist = model.fit(
x,
y,
batch_size=2,
epochs=1,
verbose=0,
)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"a_loss",
"b_loss",
"a_b_loss",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_struct_outputs_namedtuple_struct_losses(self):
model, Y = _get_model_multi_outputs_struct_namedtuple()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.rand(8, 1)
y = Y(y1, y2)
model.compile(
optimizer="sgd",
loss=[
self.get_struct_loss(model.output),
Y(
self.get_struct_loss(model.output.y1),
self.get_struct_loss(model.output.y2),
),
],
)
# Check dict outputs.
outputs = model.predict(x)
self.assertIsInstance(outputs, tuple)
# Fit the model to make sure compile_metrics are built
hist = model.fit(
x,
y,
batch_size=2,
epochs=1,
verbose=0,
)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"loss",
"y1_loss",
"y2_loss",
"y1_y2_loss",
]
)
self.assertListEqual(hist_keys, ref_keys)
def test_functional_deeply_nested_outputs_struct_losses(self):
model = _get_model_multi_outputs_struct()
self.assertIsInstance(model, Functional)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.rand(8, 1)
y3 = np.random.rand(8, 1)
y = {
"a": (y1, y2),
"b": {"b1": y1, "b2": y2},
"c": {"c1": (y1, y2), "c2": y2},
"d": y3,
}
model.compile(
optimizer="sgd",
loss={
"a": [
self.get_struct_loss(model.output["a"]),
(None, self.get_struct_loss(model.output["a"][1])),
],
"b": [
self.get_struct_loss(model.output["b"]),
{"b1": self.get_struct_loss(model.output["b"]["b1"])},
],
"c": [
self.get_struct_loss(model.output["c"]),
{"c1": self.get_struct_loss(model.output["c"]["c1"])},
],
"d": self.get_struct_loss(model.output["d"]),
},
)
# Check dict outputs.
outputs = model.predict(x)
self.assertIsInstance(outputs, dict)
# Fit the model to make sure compile_metrics are built
hist = model.fit(
x,
y,
batch_size=2,
epochs=1,
verbose=0,
)
hist_keys = sorted(hist.history.keys())
ref_keys = sorted(
[
"a/y2_loss",
"a_loss",
"b/b1_loss",
"b_loss",
"c/c1_loss",
"c_loss",
"d_loss",
"loss",
]
)
self.assertListEqual(hist_keys, ref_keys)
@parameterized.named_parameters(
("optional_none", True), ("optional_tensor", False)
)
def test_functional_optional_inputs(self, is_optional_none):
model = _get_model_optional_inputs()
x = np.ones((2, 2))
o = None if is_optional_none else np.ones((2, 2))
y_true = np.ones((2, 2))
model.compile(loss="mse", optimizer="adam")
model.fit(x={"x": x, "o": o}, y=y_true)
model.evaluate(x={"x": x, "o": o}, y=y_true)
model.predict(x={"x": x, "o": o})
@parameterized.named_parameters(
("optional_none", True), ("optional_tensor", False)
)
def test_functional_optional_inputs_generator(self, is_optional_none):
model = _get_model_optional_inputs()
x = np.ones((2, 2))
o = None if is_optional_none else np.ones((2, 2))
y_true = np.ones((2, 2))
def data_generator(with_y=True):
for _ in range(4):
yield ({"x": x, "o": o},) + ((y_true,) if with_y else ())
model.compile(loss="mse", optimizer="adam")
model.fit(data_generator())
model.evaluate(data_generator())
model.predict(data_generator(with_y=False))
def test_export_error(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = _get_model()
# Bad format
with self.assertRaisesRegex(ValueError, "Unrecognized format="):
model.export(temp_filepath, format="bad_format")
# Bad backend
if backend.backend() not in ("tensorflow", "jax", "torch"):
with self.assertRaisesRegex(
NotImplementedError,
(
r"`export_saved_model` only currently supports the "
r"tensorflow, jax and torch backends."
),
):
model.export(temp_filepath, format="tf_saved_model")
|
ModelTest
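A minimal sketch of the pattern these tests exercise: a functional model with named outputs compiled with per-output dict losses, where the history keys pick up the output-name prefix. Assumes Keras 3 and NumPy are installed:
import numpy as np
import keras
from keras import layers

inputs = keras.Input(shape=(3,))
output_a = layers.Dense(1, name="output_a")(inputs)
output_b = layers.Dense(1, activation="sigmoid", name="output_b")(inputs)
model = keras.Model(inputs, [output_a, output_b])
model.compile(
    optimizer="sgd",
    loss={"output_a": "mean_squared_error", "output_b": "binary_crossentropy"},
    metrics={"output_b": ["accuracy"]},
)
x = np.random.rand(8, 3)
y1 = np.random.rand(8, 1)
y2 = np.random.randint(0, 2, (8, 1))
hist = model.fit(x, (y1, y2), batch_size=2, epochs=1, verbose=0)
print(sorted(hist.history.keys()))
# e.g. ['loss', 'output_a_loss', 'output_b_accuracy', 'output_b_loss']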
|
python
|
kubernetes-client__python
|
kubernetes/base/dynamic/client.py
|
{
"start": 2172,
"end": 14416
}
|
class ____(object):
""" A kubernetes client that dynamically discovers and interacts with
the kubernetes API
"""
def __init__(self, client, cache_file=None, discoverer=None):
# Setting default here to delay evaluation of LazyDiscoverer class
# until constructor is called
discoverer = discoverer or LazyDiscoverer
self.client = client
self.configuration = client.configuration
self.__discoverer = discoverer(self, cache_file)
@property
def resources(self):
return self.__discoverer
@property
def version(self):
return self.__discoverer.version
def ensure_namespace(self, resource, namespace, body):
namespace = namespace or body.get('metadata', {}).get('namespace')
if not namespace:
raise ValueError("Namespace is required for {}.{}".format(resource.group_version, resource.kind))
return namespace
def serialize_body(self, body):
"""Serialize body to raw dict so apiserver can handle it
:param body: kubernetes resource body; currently supported types: Union[Dict, ResourceInstance]
"""
# This should match any `ResourceInstance` instances
if callable(getattr(body, 'to_dict', None)):
return body.to_dict()
return body or {}
def get(self, resource, name=None, namespace=None, **kwargs):
path = resource.path(name=name, namespace=namespace)
return self.request('get', path, **kwargs)
def create(self, resource, body=None, namespace=None, **kwargs):
body = self.serialize_body(body)
if resource.namespaced:
namespace = self.ensure_namespace(resource, namespace, body)
path = resource.path(namespace=namespace)
return self.request('post', path, body=body, **kwargs)
def delete(self, resource, name=None, namespace=None, body=None, label_selector=None, field_selector=None, **kwargs):
if not (name or label_selector or field_selector):
raise ValueError("At least one of name|label_selector|field_selector is required")
if resource.namespaced and not (label_selector or field_selector or namespace):
raise ValueError("At least one of namespace|label_selector|field_selector is required")
path = resource.path(name=name, namespace=namespace)
return self.request('delete', path, body=body, label_selector=label_selector, field_selector=field_selector, **kwargs)
def replace(self, resource, body=None, name=None, namespace=None, **kwargs):
body = self.serialize_body(body)
name = name or body.get('metadata', {}).get('name')
if not name:
raise ValueError("name is required to replace {}.{}".format(resource.group_version, resource.kind))
if resource.namespaced:
namespace = self.ensure_namespace(resource, namespace, body)
path = resource.path(name=name, namespace=namespace)
return self.request('put', path, body=body, **kwargs)
def patch(self, resource, body=None, name=None, namespace=None, **kwargs):
body = self.serialize_body(body)
name = name or body.get('metadata', {}).get('name')
if not name:
raise ValueError("name is required to patch {}.{}".format(resource.group_version, resource.kind))
if resource.namespaced:
namespace = self.ensure_namespace(resource, namespace, body)
content_type = kwargs.pop('content_type', 'application/strategic-merge-patch+json')
path = resource.path(name=name, namespace=namespace)
return self.request('patch', path, body=body, content_type=content_type, **kwargs)
def server_side_apply(self, resource, body=None, name=None, namespace=None, force_conflicts=None, **kwargs):
body = self.serialize_body(body)
name = name or body.get('metadata', {}).get('name')
if not name:
raise ValueError("name is required to patch {}.{}".format(resource.group_version, resource.kind))
if resource.namespaced:
namespace = self.ensure_namespace(resource, namespace, body)
# force content type to 'application/apply-patch+yaml'
kwargs.update({'content_type': 'application/apply-patch+yaml'})
path = resource.path(name=name, namespace=namespace)
return self.request('patch', path, body=body, force_conflicts=force_conflicts, **kwargs)
def watch(self, resource, namespace=None, name=None, label_selector=None, field_selector=None, resource_version=None, timeout=None, watcher=None, allow_watch_bookmarks=None):
"""
Stream events for a resource from the Kubernetes API
:param resource: The API resource object that will be used to query the API
:param namespace: The namespace to query
:param name: The name of the resource instance to query
:param label_selector: The label selector with which to filter results
:param field_selector: The field selector with which to filter results
:param resource_version: The version with which to filter results. Only events with
a resource_version greater than this value will be returned
:param timeout: The amount of time in seconds to wait before terminating the stream
:param watcher: The Watcher object that will be used to stream the resource
:param allow_watch_bookmarks: Ask the API server to send BOOKMARK events
:return: Event object with these keys:
'type': The type of event such as "ADDED", "DELETED", etc.
'raw_object': a dict representing the watched object.
'object': A ResourceInstance wrapping raw_object.
Example:
client = DynamicClient(k8s_client)
watcher = watch.Watch()
v1_pods = client.resources.get(api_version='v1', kind='Pod')
for e in v1_pods.watch(resource_version=0, namespace='default', timeout=5, watcher=watcher):
print(e['type'])
print(e['object'].metadata)
# If you want to gracefully stop the stream watcher
watcher.stop()
"""
if not watcher: watcher = watch.Watch()
# Use field selector to query for named instance so the watch parameter is handled properly.
if name:
field_selector = f"metadata.name={name}"
for event in watcher.stream(
resource.get,
namespace=namespace,
field_selector=field_selector,
label_selector=label_selector,
resource_version=resource_version,
serialize=False,
timeout_seconds=timeout,
allow_watch_bookmarks=allow_watch_bookmarks,
):
event['object'] = ResourceInstance(resource, event['object'])
yield event
@meta_request
def request(self, method, path, body=None, **params):
if not path.startswith('/'):
path = '/' + path
path_params = params.get('path_params', {})
query_params = params.get('query_params', [])
if params.get('pretty') is not None:
query_params.append(('pretty', params['pretty']))
if params.get('_continue') is not None:
query_params.append(('continue', params['_continue']))
if params.get('include_uninitialized') is not None:
query_params.append(('includeUninitialized', params['include_uninitialized']))
if params.get('field_selector') is not None:
query_params.append(('fieldSelector', params['field_selector']))
if params.get('label_selector') is not None:
query_params.append(('labelSelector', params['label_selector']))
if params.get('limit') is not None:
query_params.append(('limit', params['limit']))
if params.get('resource_version') is not None:
query_params.append(('resourceVersion', params['resource_version']))
if params.get('timeout_seconds') is not None:
query_params.append(('timeoutSeconds', params['timeout_seconds']))
if params.get('watch') is not None:
query_params.append(('watch', params['watch']))
if params.get('grace_period_seconds') is not None:
query_params.append(('gracePeriodSeconds', params['grace_period_seconds']))
if params.get('propagation_policy') is not None:
query_params.append(('propagationPolicy', params['propagation_policy']))
if params.get('orphan_dependents') is not None:
query_params.append(('orphanDependents', params['orphan_dependents']))
if params.get('dry_run') is not None:
query_params.append(('dryRun', params['dry_run']))
if params.get('field_manager') is not None:
query_params.append(('fieldManager', params['field_manager']))
if params.get('force_conflicts') is not None:
query_params.append(('force', params['force_conflicts']))
if params.get('allow_watch_bookmarks') is not None:
query_params.append(('allowWatchBookmarks', params['allow_watch_bookmarks']))
header_params = params.get('header_params', {})
form_params = []
local_var_files = {}
# Checking Accept header.
new_header_params = dict((key.lower(), value) for key, value in header_params.items())
if 'accept' not in new_header_params:
header_params['Accept'] = self.client.select_header_accept([
'application/json',
'application/yaml',
])
# HTTP header `Content-Type`
if params.get('content_type'):
header_params['Content-Type'] = params['content_type']
else:
header_params['Content-Type'] = self.client.select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
api_response = self.client.call_api(
path,
method.upper(),
path_params,
query_params,
header_params,
body=body,
post_params=form_params,
async_req=params.get('async_req'),
files=local_var_files,
auth_settings=auth_settings,
_preload_content=False,
_return_http_data_only=params.get('_return_http_data_only', True),
_request_timeout=params.get('_request_timeout')
)
if params.get('async_req'):
return api_response.get()
else:
return api_response
def validate(self, definition, version=None, strict=False):
"""validate checks a kubernetes resource definition
Args:
definition (dict): resource definition
version (str): version of kubernetes to validate against
strict (bool): whether unexpected additional properties should be considered errors
Returns:
warnings (list), errors (list): warnings are missing validations, errors are validation failures
"""
if not HAS_KUBERNETES_VALIDATE:
raise KubernetesValidateMissing()
errors = list()
warnings = list()
try:
if version is None:
try:
version = self.version['kubernetes']['gitVersion']
except KeyError:
version = kubernetes_validate.latest_version()
kubernetes_validate.validate(definition, version, strict)
except kubernetes_validate.utils.ValidationError as e:
errors.append("resource definition validation error at %s: %s" % ('.'.join([str(item) for item in e.path]), e.message)) # noqa: B306
except VersionNotSupportedError:
errors.append("Kubernetes version %s is not supported by kubernetes-validate" % version)
except kubernetes_validate.utils.SchemaNotFoundError as e:
warnings.append("Could not find schema for object kind %s with API version %s in Kubernetes version %s (possibly Custom Resource?)" %
(e.kind, e.api_version, e.version))
return warnings, errors
|
DynamicClient
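A short usage sketch for the dynamic client; it assumes a reachable cluster and a local kubeconfig, since nothing here is mocked:
from kubernetes import client, config, dynamic

config.load_kube_config()                      # read credentials from the local kubeconfig
dyn = dynamic.DynamicClient(client.ApiClient())
v1_pods = dyn.resources.get(api_version="v1", kind="Pod")
for pod in v1_pods.get(namespace="default").items:
    print(pod.metadata.name)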
|
python
|
PrefectHQ__prefect
|
src/prefect/server/models/task_workers.py
|
{
"start": 418,
"end": 3343
}
|
class ____:
def __init__(self) -> None:
self.workers: dict[WorkerId, Set[TaskKey]] = {}
self.task_keys: Dict[TaskKey, Set[WorkerId]] = defaultdict(set)
self.worker_timestamps: Dict[WorkerId, float] = {}
async def observe_worker(
self,
task_keys: List[TaskKey],
worker_id: WorkerId,
) -> None:
self.workers[worker_id] = self.workers.get(worker_id, set()) | set(task_keys)
self.worker_timestamps[worker_id] = time.monotonic()
for task_key in task_keys:
self.task_keys[task_key].add(worker_id)
async def forget_worker(
self,
worker_id: WorkerId,
) -> None:
if worker_id in self.workers:
task_keys = self.workers.pop(worker_id)
for task_key in task_keys:
self.task_keys[task_key].discard(worker_id)
if not self.task_keys[task_key]:
del self.task_keys[task_key]
self.worker_timestamps.pop(worker_id, None)
async def get_workers_for_task_keys(
self,
task_keys: List[TaskKey],
) -> List[TaskWorkerResponse]:
if not task_keys:
return await self.get_all_workers()
active_workers = set().union(*(self.task_keys[key] for key in task_keys))
return [self._create_worker_response(worker_id) for worker_id in active_workers]
async def get_all_workers(self) -> List[TaskWorkerResponse]:
return [
self._create_worker_response(worker_id)
for worker_id in self.worker_timestamps.keys()
]
def _create_worker_response(self, worker_id: WorkerId) -> TaskWorkerResponse:
timestamp = time.monotonic() - self.worker_timestamps[worker_id]
return TaskWorkerResponse(
identifier=worker_id,
task_keys=list(self.workers.get(worker_id, set())),
timestamp=now("UTC") - datetime.timedelta(seconds=timestamp),
)
def reset(self) -> None:
"""Testing utility to reset the state of the task worker tracker"""
self.workers.clear()
self.task_keys.clear()
self.worker_timestamps.clear()
# Global instance of the task worker tracker
task_worker_tracker: InMemoryTaskWorkerTracker = InMemoryTaskWorkerTracker()
# Main utilities to be used in the API layer
async def observe_worker(
task_keys: List[TaskKey],
worker_id: WorkerId,
) -> None:
await task_worker_tracker.observe_worker(task_keys, worker_id)
async def forget_worker(
worker_id: WorkerId,
) -> None:
await task_worker_tracker.forget_worker(worker_id)
async def get_workers_for_task_keys(
task_keys: List[TaskKey],
) -> List[TaskWorkerResponse]:
return await task_worker_tracker.get_workers_for_task_keys(task_keys)
async def get_all_workers() -> List[TaskWorkerResponse]:
return await task_worker_tracker.get_all_workers()
|
InMemoryTaskWorkerTracker
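The tracker is a forward/reverse index: worker -> task keys and task key -> workers. A standalone sketch of that pattern without Prefect types (names illustrative):
import asyncio
from collections import defaultdict

class TrackerSketch:
    def __init__(self) -> None:
        self.workers: dict[str, set[str]] = {}
        self.task_keys: dict[str, set[str]] = defaultdict(set)

    async def observe_worker(self, task_keys: list[str], worker_id: str) -> None:
        # merge the new keys into the forward index, then update the reverse index
        self.workers[worker_id] = self.workers.get(worker_id, set()) | set(task_keys)
        for key in task_keys:
            self.task_keys[key].add(worker_id)

    async def workers_for(self, task_key: str) -> set[str]:
        return self.task_keys[task_key]

async def main() -> None:
    tracker = TrackerSketch()
    await tracker.observe_worker(["my_flow.my_task"], worker_id="worker-1")
    await tracker.observe_worker(["my_flow.my_task"], worker_id="worker-2")
    print(await tracker.workers_for("my_flow.my_task"))  # e.g. {'worker-1', 'worker-2'}

asyncio.run(main())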
|
python
|
airbytehq__airbyte
|
airbyte-integrations/connectors/destination-databend/destination_databend/writer.py
|
{
"start": 2773,
"end": 4174
}
|
class ____(DatabendWriter):
"""
Data writer using the SQL writing strategy. Data is buffered in memory
and flushed using INSERT INTO SQL statement.
"""
flush_interval = 1000
def __init__(self, client: DatabendClient) -> None:
"""
:param client: Databend SDK connection class with established connection
to the databse.
"""
super().__init__(client)
def _flush(self) -> None:
"""
Intermediate data flush that's triggered during the
buffering operation. Writes data stored in memory via SQL commands.
The Databend connector inserts into the table using a stage.
"""
cursor = self.cursor
# id, written_at, data
for table, data in self._buffer.items():
cursor.execute(
f"INSERT INTO _airbyte_raw_{table} (_airbyte_ab_id,_airbyte_emitted_at,_airbyte_data) VALUES (%, %, %)",
list(chain.from_iterable(data)),
)
self._buffer.clear()
self._values = 0
def flush(self) -> None:
"""
Final data flush after all data has been written to memory.
"""
self._flush()
def create_databend_wirter(client: DatabendClient, logger: logging.Logger) -> DatabendWriter:
logger.info("Using the SQL writing strategy")
writer = DatabendSQLWriter(client)
return writer
|
DatabendSQLWriter
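A generic illustration of the same buffer-and-flush strategy using sqlite3 from the standard library instead of a Databend connection (the raw-table naming convention is copied from the writer above):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE _airbyte_raw_users "
    "(_airbyte_ab_id TEXT, _airbyte_emitted_at INTEGER, _airbyte_data TEXT)"
)
buffer = {
    "users": [
        ("id-1", 1700000000, '{"name": "a"}'),
        ("id-2", 1700000001, '{"name": "b"}'),
    ]
}
for table, rows in buffer.items():  # flush everything buffered so far
    conn.executemany(f"INSERT INTO _airbyte_raw_{table} VALUES (?, ?, ?)", rows)
buffer.clear()
print(conn.execute("SELECT COUNT(*) FROM _airbyte_raw_users").fetchone()[0])  # 2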
|
python
|
Unity-Technologies__ml-agents
|
utils/make_readme_table.py
|
{
"start": 1213,
"end": 7095
}
|
class ____(NamedTuple):
release_tag: str
csharp_version: str
python_verion: str
release_date: str
is_verified: bool = False
@property
def loose_version(self) -> LooseVersion:
return LooseVersion(self.python_verion)
@property
def is_develop(self) -> bool:
return self.release_tag == "develop"
@property
def release_datetime(self) -> datetime:
if self.is_develop:
return datetime.today()
return datetime.strptime(self.release_date, "%B %d, %Y")
@property
def elapsed_days(self) -> int:
"""
Days since this version was released.
:return:
"""
return (datetime.today() - self.release_datetime).days
@property
def display_name(self) -> str:
"""
Clean up the tag name for display, e.g. "release_1" -> "Release 1"
:return:
"""
if self.is_verified:
return f"Verified Package {self.csharp_version}"
elif self.is_develop:
return "develop (unstable)"
else:
return self.release_tag.replace("_", " ").title()
@property
def source_link(self):
if self.is_verified:
return f"https://github.com/Unity-Technologies/ml-agents/tree/com.unity.ml-agents_{self.csharp_version}"
else:
return f"https://github.com/Unity-Technologies/ml-agents/tree/{self.release_tag}"
@property
def download_link(self):
if self.is_verified:
tag = f"com.unity.ml-agents_{self.csharp_version}"
else:
tag = self.release_tag
return f"https://github.com/Unity-Technologies/ml-agents/archive/{tag}.zip"
@property
def doc_link(self):
if self.is_verified:
return "https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Readme.md"
if self.csharp_version == "develop":
return (
"https://github.com/Unity-Technologies/ml-agents/tree/"
"develop/com.unity.ml-agents/Documentation~/index.md"
)
# Prioritize Unity Package documentation over web docs
try:
StrictVersion(self.csharp_version).version
return "https://docs.unity3d.com/Packages/com.unity.ml-agents@latest"
except ValueError:
return "https://unity-technologies.github.io/ml-agents/ (DEPRECATED)"
@property
def package_link(self):
try:
v = StrictVersion(self.csharp_version).version
return f"https://docs.unity3d.com/Packages/com.unity.ml-agents@{v[0]}.{v[1]}/manual/index.html"
except ValueError:
return "--"
@property
def pypi_link(self):
return f"https://pypi.org/project/mlagents/{self.python_verion}/"
versions = [
ReleaseInfo("develop", "develop", "develop", "--"),
ReleaseInfo("release_1", "1.0.0", "0.16.0", "April 30, 2020"),
ReleaseInfo("release_2", "1.0.2", "0.16.1", "May 20, 2020"),
ReleaseInfo("release_3", "1.1.0", "0.17.0", "June 10, 2020"),
ReleaseInfo("release_4", "1.2.0", "0.18.0", "July 15, 2020"),
ReleaseInfo("release_5", "1.2.1", "0.18.1", "July 31, 2020"),
ReleaseInfo("release_6", "1.3.0", "0.19.0", "August 12, 2020"),
ReleaseInfo("release_7", "1.4.0", "0.20.0", "September 16, 2020"),
ReleaseInfo("release_8", "1.5.0", "0.21.0", "October 14, 2020"),
ReleaseInfo("release_9", "1.5.0", "0.21.1", "November 4, 2020"),
ReleaseInfo("release_10", "1.6.0", "0.22.0", "November 18, 2020"),
ReleaseInfo("release_11", "1.7.0", "0.23.0", "December 21, 2020"),
ReleaseInfo("release_12", "1.7.2", "0.23.0", "December 22, 2020"),
ReleaseInfo("release_13", "1.8.0", "0.24.0", "February 17, 2021"),
ReleaseInfo("release_14", "1.8.1", "0.24.1", "March 5, 2021"),
ReleaseInfo("release_15", "1.9.0", "0.25.0", "March 17, 2021"),
ReleaseInfo("release_16", "1.9.1", "0.25.1", "April 13, 2021"),
ReleaseInfo("release_17", "2.0.0", "0.26.0", "April 22, 2021"),
ReleaseInfo("release_18", "2.1.0", "0.27.0", "June 9, 2021"),
ReleaseInfo("release_19", "2.2.1", "0.28.0", "January 14, 2022"),
ReleaseInfo("release_20", "2.3.0", "0.30.0", "November 21, 2022"),
ReleaseInfo("release_21", "3.0.0-exp.1", "1.0.0", "October 9, 2023"),
ReleaseInfo("release_22", "3.0.0", "1.1.0", "October 5, 2024"),
ReleaseInfo("release_23", "4.0.0", "1.1.0", "August 15, 2025"),
# Verified releases
# ReleaseInfo("", "1.0.8", "0.16.1", "May 26, 2021", is_verified=True),
# ReleaseInfo("", "1.0.7", "0.16.1", "March 8, 2021", is_verified=True),
# ReleaseInfo("", "1.0.6", "0.16.1", "November 16, 2020", is_verified=True),
# ReleaseInfo("", "1.0.5", "0.16.1", "September 23, 2020", is_verified=True),
# ReleaseInfo("", "1.0.4", "0.16.1", "August 20, 2020", is_verified=True),
]
sorted_versions = sorted(versions, key=lambda x: x.release_datetime, reverse=True)
highlight_versions = set()
# Highlight the most recent verified version
# disabling verified versions.
# TODO replace this table entry with released version according to
# https://docs.unity3d.com/2022.3/Documentation/Manual/pack-safe.html
# highlight_versions.add([v for v in sorted_versions if v.is_verified][0])
# Highlight the most recent regular version
highlight_versions.add(
[v for v in sorted_versions if (not v.is_verified and not v.is_develop)][0]
)
count_by_verified = Counter()
for version_info in sorted_versions:
highlight = version_info in highlight_versions
if version_info.elapsed_days > MAX_DAYS:
# Make sure we always have at least one regular and one verified entry
if count_by_verified[version_info.is_verified] > 0:
continue
print(table_line(version_info, highlight))
count_by_verified[version_info.is_verified] += 1
print("\n\n")
|
ReleaseInfo
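A sketch of the date handling behind `release_datetime` and `elapsed_days`: parse the human-readable date with `strptime`, then sort newest-first (class name illustrative, fields trimmed):
from datetime import datetime
from typing import NamedTuple

class ReleaseSketch(NamedTuple):
    tag: str
    release_date: str

    @property
    def release_datetime(self) -> datetime:
        return datetime.strptime(self.release_date, "%B %d, %Y")

    @property
    def elapsed_days(self) -> int:
        return (datetime.today() - self.release_datetime).days

releases = [
    ReleaseSketch("release_22", "October 5, 2024"),
    ReleaseSketch("release_23", "August 15, 2025"),
]
for r in sorted(releases, key=lambda r: r.release_datetime, reverse=True):
    print(r.tag, r.elapsed_days)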
|
python
|
apache__airflow
|
providers/google/src/airflow/providers/google/cloud/operators/bigquery.py
|
{
"start": 4446,
"end": 5460
}
|
class ____:
"""A class to handle the configuration for BigQueryHook.insert_job method."""
# Note: To add this feature to a new operator, include the operator's class name in the type
# annotation of `self` below, then inherit this class in the target operator.
# e.g.: BigQueryCheckOperator, BigQueryTableCheckOperator
def include_encryption_configuration( # type:ignore[misc]
self: BigQueryCheckOperator
| BigQueryTableCheckOperator
| BigQueryValueCheckOperator
| BigQueryColumnCheckOperator
| BigQueryGetDataOperator
| BigQueryIntervalCheckOperator,
configuration: dict,
config_key: str,
) -> None:
"""Add encryption_configuration to destinationEncryptionConfiguration key if it is not None."""
if self.encryption_configuration is not None:
configuration[config_key]["destinationEncryptionConfiguration"] = self.encryption_configuration
|
_BigQueryOperatorsEncryptionConfigurationMixin
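A simplified standalone sketch of the mixin pattern above, without Airflow imports; the only assumption the mixin makes is that `self.encryption_configuration` exists on the host operator (the operator class here is fake):
class EncryptionConfigMixinSketch:
    def include_encryption_configuration(self, configuration: dict, config_key: str) -> None:
        # add the key only when an encryption configuration was supplied
        if self.encryption_configuration is not None:
            configuration[config_key]["destinationEncryptionConfiguration"] = (
                self.encryption_configuration
            )

class FakeOperator(EncryptionConfigMixinSketch):
    def __init__(self) -> None:
        self.encryption_configuration = {"kmsKeyName": "projects/p/locations/l/keyRings/r/cryptoKeys/k"}

config = {"query": {}}
FakeOperator().include_encryption_configuration(config, "query")
print(config)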
|
python
|
ApeWorX__ape
|
src/ape/utils/process.py
|
{
"start": 95,
"end": 1198
}
|
class ____(queue.Queue):
"""
A queue that can be joined, useful for multi-processing.
Borrowed from the ``py-geth`` library.
"""
def __iter__(self):
while True:
item = self.get()
is_stop_iteration_type = isinstance(item, type) and issubclass(item, StopIteration)
if isinstance(item, StopIteration) or is_stop_iteration_type:
return
elif isinstance(item, Exception):
raise item
elif isinstance(item, type) and issubclass(item, Exception):
raise item
yield item
def join(self, timeout=None):
with SubprocessTimeoutError(timeout) as _timeout:
while not self.empty():
time.sleep(0)
_timeout.check()
def spawn(target, *args, **kwargs):
"""
Spawn a new daemon thread. Borrowed from the ``py-geth`` library.
"""
thread = threading.Thread(
target=target,
args=args,
kwargs=kwargs,
)
thread.daemon = True
thread.start()
return thread
|
JoinableQueue
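A self-contained usage sketch of the sentinel-terminated iteration the class above implements: a daemon thread feeds the queue and then enqueues `StopIteration` to end the stream (the trimmed subclass here is illustrative):
import queue
import threading

class IterableQueueSketch(queue.Queue):
    def __iter__(self):
        while True:
            item = self.get()
            if item is StopIteration or isinstance(item, StopIteration):
                return  # sentinel reached: end of stream
            if isinstance(item, Exception):
                raise item
            yield item

def producer(q: "IterableQueueSketch") -> None:
    for i in range(3):
        q.put(i)
    q.put(StopIteration)  # signal completion

q = IterableQueueSketch()
threading.Thread(target=producer, args=(q,), daemon=True).start()
print(list(q))  # [0, 1, 2]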
|
python
|
numba__numba
|
numba/core/byteflow.py
|
{
"start": 81221,
"end": 82391
}
|
class ____(object):
"""Adapt Flow to the old CFA class expected by Interpreter
"""
def __init__(self, flow):
self._flow = flow
self._blocks = {}
for offset, blockinfo in flow.block_infos.items():
self._blocks[offset] = AdaptCFBlock(blockinfo, offset)
backbone = self._flow.cfgraph.backbone()
graph = flow.cfgraph
# Find backbone
backbone = graph.backbone()
# Filter out in-loop blocks (assuming no other cyclic control blocks).
# This is to avoid variables defined in loops being considered as
# function scope.
inloopblocks = set()
for b in self.blocks.keys():
if graph.in_loops(b):
inloopblocks.add(b)
self._backbone = backbone - inloopblocks
@property
def graph(self):
return self._flow.cfgraph
@property
def backbone(self):
return self._backbone
@property
def blocks(self):
return self._blocks
def iterliveblocks(self):
for b in sorted(self.blocks):
yield self.blocks[b]
def dump(self):
self._flow.cfgraph.dump()
|
AdaptCFA
|
python
|
numba__numba
|
numba/tests/test_operators.py
|
{
"start": 46953,
"end": 47815
}
|
class ____(TestCase):
"""
Test comparison of string constants
"""
def test_eq(self):
def test_impl1():
s = 'test'
return s == 'test'
def test_impl2():
s = 'test1'
return s == 'test'
cfunc1 = jit(nopython=True)(test_impl1)
cfunc2 = jit(nopython=True)(test_impl2)
self.assertEqual(test_impl1(), cfunc1())
self.assertEqual(test_impl2(), cfunc2())
def test_neq(self):
def test_impl1():
s = 'test'
return s != 'test'
def test_impl2():
s = 'test1'
return s != 'test'
cfunc1 = jit(nopython=True)(test_impl1)
cfunc2 = jit(nopython=True)(test_impl2)
self.assertEqual(test_impl1(), cfunc1())
self.assertEqual(test_impl2(), cfunc2())
|
TestStringConstComparison
|
python
|
fluentpython__example-code-2e
|
24-class-metaprog/checked/metaclass/checkedlib.py
|
{
"start": 3095,
"end": 3732
}
|
class ____(type):
def __new__(meta_cls, cls_name, bases, cls_dict): # <1>
if '__slots__' not in cls_dict: # <2>
slots = []
type_hints = cls_dict.get('__annotations__', {}) # <3>
for name, constructor in type_hints.items(): # <4>
field = Field(name, constructor) # <5>
cls_dict[name] = field # <6>
slots.append(field.storage_name) # <7>
cls_dict['__slots__'] = slots # <8>
return super().__new__(
meta_cls, cls_name, bases, cls_dict) # <9>
# end::CHECKED_META[]
# tag::CHECKED_CLASS[]
|
CheckedMeta
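The metaclass relies on a `Field` descriptor that is not shown in this excerpt. The sketch below pairs it with a simplified, assumed `Field` (storage under `storage_name`, coercion via `constructor`) purely to make the mechanism runnable end to end; it is not the book's exact implementation:
class Field:
    def __init__(self, name, constructor):
        self.name = name
        self.constructor = constructor
        self.storage_name = '_' + name  # assumed naming scheme

    def __set__(self, instance, value):
        # coerce on assignment, then store under the slot name
        setattr(instance, self.storage_name, self.constructor(value))

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        return getattr(instance, self.storage_name)

class CheckedMetaSketch(type):
    def __new__(meta_cls, cls_name, bases, cls_dict):
        if '__slots__' not in cls_dict:
            slots = []
            for name, constructor in cls_dict.get('__annotations__', {}).items():
                field = Field(name, constructor)
                cls_dict[name] = field
                slots.append(field.storage_name)
            cls_dict['__slots__'] = slots
        return super().__new__(meta_cls, cls_name, bases, cls_dict)

class Movie(metaclass=CheckedMetaSketch):
    title: str
    year: int

m = Movie()
m.year = '1999'              # coerced to int by the annotation's constructor
print(m.year, type(m.year))  # 1999 <class 'int'>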
|
python
|
getsentry__sentry
|
src/sentry/grouping/api.py
|
{
"start": 5330,
"end": 20362
}
|
class ____(GroupingConfigLoader):
"""Does not affect grouping, runs in addition to measure performance impact"""
cache_prefix = "background-grouping-enhancements:"
def _get_config_id(self, _project: Project) -> str:
return options.get("store.background-grouping-config-id")
@sentry_sdk.tracing.trace
def get_grouping_config_dict_for_project(project: Project) -> GroupingConfig:
"""Fetches all the information necessary for grouping from the project
settings. The return value of this is persisted with the event on
ingestion so that the grouping algorithm can be re-run later.
This is called early on in normalization so that everything that is needed
to group the event is pulled into the event data.
"""
loader = PrimaryGroupingConfigLoader()
return loader.get_config_dict(project)
def _get_default_base64_enhancements(config_id: str | None = None) -> str:
base: str | None = DEFAULT_ENHANCEMENTS_BASE
if config_id is not None and config_id in GROUPING_CONFIG_CLASSES.keys():
base = GROUPING_CONFIG_CLASSES[config_id].enhancements_base
return EnhancementsConfig.from_rules_text("", bases=[base] if base else []).base64_string
def _get_default_fingerprinting_bases_for_project(
project: Project, config_id: str | None = None
) -> Sequence[str] | None:
"""Returns the default built-in fingerprinting bases (i.e. sets of rules) for a project."""
config_id = (
config_id
# TODO: add fingerprinting config to GroupingConfigLoader and use that here
or PrimaryGroupingConfigLoader()._get_config_id(project)
or DEFAULT_GROUPING_CONFIG
)
bases = GROUPING_CONFIG_CLASSES[config_id].fingerprinting_bases
return bases
def get_default_grouping_config_dict(config_id: str | None = None) -> GroupingConfig:
"""Returns the default grouping config."""
if config_id is None:
config_id = DEFAULT_GROUPING_CONFIG
return {"id": config_id, "enhancements": _get_default_base64_enhancements(config_id)}
def load_grouping_config(config_dict: GroupingConfig | None = None) -> StrategyConfiguration:
"""
Load the given grouping config, or the default config if none is provided or if the given
config is not recognized.
"""
if config_dict is None:
config_dict = get_default_grouping_config_dict()
elif "id" not in config_dict:
raise ValueError("Malformed configuration dictionary")
config_id = config_dict["id"]
if config_id not in GROUPING_CONFIG_CLASSES:
config_dict = get_default_grouping_config_dict()
config_id = config_dict["id"]
return GROUPING_CONFIG_CLASSES[config_id](base64_enhancements=config_dict["enhancements"])
def _load_default_grouping_config() -> StrategyConfiguration:
return load_grouping_config(config_dict=None)
def get_fingerprinting_config_for_project(
project: Project, config_id: str | None = None
) -> FingerprintingConfig:
"""
Returns the fingerprinting rules for a project.
Merges the project's custom fingerprinting rules (if any) with the default built-in rules.
"""
from sentry.grouping.fingerprinting import FingerprintingConfig
from sentry.grouping.fingerprinting.exceptions import InvalidFingerprintingConfig
bases = _get_default_fingerprinting_bases_for_project(project, config_id=config_id)
raw_rules = project.get_option("sentry:fingerprinting_rules")
if not raw_rules:
return FingerprintingConfig([], bases=bases)
from sentry.utils.cache import cache
from sentry.utils.hashlib import md5_text
cache_key = "fingerprinting-rules:" + md5_text(raw_rules).hexdigest()
config_json = cache.get(cache_key)
if config_json is not None:
return FingerprintingConfig.from_json(config_json, bases=bases)
try:
rules = FingerprintingConfig.from_config_string(raw_rules, bases=bases)
except InvalidFingerprintingConfig:
rules = FingerprintingConfig([], bases=bases)
cache.set(cache_key, rules.to_json())
return rules
def apply_server_side_fingerprinting(
event: MutableMapping[str, Any], fingerprinting_config: FingerprintingConfig
) -> None:
"""
Check the given event against the given rules and set various event values. Note that this does
not resolve fingerprint variables, except in the event title (if applicable).
If there is a client fingerprint, add it to `event["_fingerprint_info"]`.
If a rule match is found:
- Set `event["fingerprint"]` to the raw (unresolved) fingerprint given by the matching rule.
- Add the matched rule to `event["_fingerprint_info"]`.
- Set `event["title"]` if the rule includes title information.
"""
fingerprint_info = {}
client_fingerprint = event.get("fingerprint", [])
client_fingerprint_is_default = len(client_fingerprint) == 1 and is_default_fingerprint_var(
client_fingerprint[0]
)
if client_fingerprint and not client_fingerprint_is_default:
fingerprint_info["client_fingerprint"] = client_fingerprint
fingerprint_match = fingerprinting_config.get_fingerprint_values_for_event(event)
if fingerprint_match is not None:
# TODO: We don't need to return attributes as part of the fingerprint match anymore
matched_rule, new_fingerprint, attributes = fingerprint_match
event["fingerprint"] = new_fingerprint
fingerprint_info["matched_rule"] = matched_rule.to_json()
if fingerprint_info:
event["_fingerprint_info"] = fingerprint_info
def _get_variants_from_strategies(
event: Event, context: GroupingContext
) -> dict[str, ComponentVariant]:
winning_strategy: str | None = None
precedence_hint: str | None = None
all_strategies_components_by_variant: dict[str, list[ContributingComponent]] = {}
winning_strategy_components_by_variant = {}
# `iter_strategies` presents strategies in priority order, which allows us to go with the first
# one which produces a result. (See `src/sentry/grouping/strategies/configurations.py` for the
# strategies used by each config.)
for strategy in context.config.iter_strategies():
current_strategy_components_by_variant = strategy.get_grouping_components(
event, context=context
)
for variant_name, component in current_strategy_components_by_variant.items():
all_strategies_components_by_variant.setdefault(variant_name, []).append(component)
if component.contributes:
if winning_strategy is None:
# If we haven't yet found a winner.. now we have!
#
# The value of `current_strategy_components_by_variant` will change with each
# strategy, so grab a separate reference to the winning ones so we don't lose
# track of them
#
# Also, create a hint we can add to components from other strategies indicating
# that this one took precedence
winning_strategy_components_by_variant = current_strategy_components_by_variant
winning_strategy = strategy.name
variant_descriptor = "/".join(
sorted(
variant_name
for variant_name, component in current_strategy_components_by_variant.items()
if component.contributes
)
)
precedence_hint = "ignored because {} take{} precedence".format(
(
f"{variant_descriptor} {strategy.name}"
if variant_name != "default"
else strategy.name
),
"" if strategy.name.endswith("s") else "s",
)
# On the other hand, if another strategy before this one was already the winner, we
# don't want any of this strategy's components to contribute to grouping
elif strategy.name != winning_strategy:
component.update(contributes=False, hint=precedence_hint)
variants = {}
for variant_name, components in all_strategies_components_by_variant.items():
root_component = RootGroupingComponent(variant_name=variant_name, values=components)
# The root component will pull its `contributes` value from the components it wraps - if
# none of them contributes, it will also be marked as non-contributing. But those components
# might not have the same reasons for not contributing (`hint` values), so it can't pull
# that from them - it's gotta be set here.
if not root_component.contributes and precedence_hint:
root_component.update(hint=precedence_hint)
winning_strategy_component = winning_strategy_components_by_variant.get(variant_name)
contributing_component = (
winning_strategy_component
if winning_strategy_component and winning_strategy_component.contributes
else None
)
variants[variant_name] = ComponentVariant(
root_component=root_component,
contributing_component=contributing_component,
strategy_config=context.config,
)
return variants
# This is called by the Event model in get_grouping_variants()
def get_grouping_variants_for_event(
event: Event, config: StrategyConfiguration | None = None
) -> dict[str, BaseVariant]:
"""Returns a dict of all grouping variants for this event."""
# If a checksum is set the only variant that comes back from this event is the checksum variant.
#
# TODO: Is there a reason we don't treat a checksum like a custom fingerprint, and run the other
# strategies but mark them as non-contributing, with explanations why?
checksum = event.data.get("checksum")
if checksum:
if HASH_RE.match(checksum):
return {"checksum": ChecksumVariant(checksum)}
else:
return {
"hashed_checksum": HashedChecksumVariant(hash_from_values(checksum), checksum),
}
# Otherwise we go to the various forms of grouping based on fingerprints and/or event data
# (stacktrace, message, etc.)
raw_fingerprint = event.data.get("fingerprint") or ["{{ default }}"]
fingerprint_info = event.data.get("_fingerprint_info", {})
fingerprint_type = get_fingerprint_type(raw_fingerprint)
resolved_fingerprint = (
raw_fingerprint
if fingerprint_type == "default"
else resolve_fingerprint_values(raw_fingerprint, event.data)
)
# Check if the fingerprint includes a custom title, and if so, set the event's title accordingly.
_apply_custom_title_if_needed(fingerprint_info, event)
# Run all of the event-data-based grouping strategies. Any which apply will create grouping
# components, which will then be grouped into variants by variant type (system, app, default).
context = GroupingContext(config or _load_default_grouping_config(), event)
strategy_component_variants: dict[str, ComponentVariant] = _get_variants_from_strategies(
event, context
)
# Create a separate container for these for now to preserve the typing of
# `strategy_component_variants`
additional_variants: dict[str, BaseVariant] = {}
# If the fingerprint is the default fingerprint, we can use the variants as is. If it's custom,
# we need to create a fingerprint variant and mark the existing variants as non-contributing.
# If it's hybrid, we'll replace the existing variants with "salted" versions which include
# the fingerprint.
if fingerprint_type == "custom":
matched_rule = fingerprint_info.get("matched_rule")
is_built_in_fingerprint = bool(matched_rule and matched_rule.get("is_builtin"))
fingerprint_source = (
"custom client"
if not matched_rule
else "built-in" if is_built_in_fingerprint else "custom server"
)
hint = f"ignored because {fingerprint_source} fingerprint takes precedence"
fingerprint_variant = CustomFingerprintVariant(resolved_fingerprint, fingerprint_info)
additional_variants[fingerprint_variant.key] = fingerprint_variant
for variant in strategy_component_variants.values():
variant.root_component.update(contributes=False, hint=hint)
elif fingerprint_type == "hybrid":
for variant_name, variant in strategy_component_variants.items():
# Since we're reusing the variant names, when all of the variants are combined, these
# salted versions will replace the unsalted versions
additional_variants[variant_name] = SaltedComponentVariant.from_component_variant(
variant, resolved_fingerprint, fingerprint_info
)
final_variants = {
**strategy_component_variants,
# Add these in second, so the salted versions of any variants replace the unsalted versions
**additional_variants,
}
# Ensure we have a fallback hash if nothing else works out
if not any(x.contributes for x in final_variants.values()):
final_variants["fallback"] = FallbackVariant()
return final_variants
def _apply_custom_title_if_needed(fingerprint_info: FingerprintInfo, event: Event) -> None:
"""
If the given event has a custom fingerprint which includes a title template, apply the custom
title to the event.
"""
custom_title_template = get_path(fingerprint_info, "matched_rule", "attributes", "title")
if custom_title_template:
resolved_title = expand_title_template(custom_title_template, event.data)
event.data["title"] = resolved_title
def get_contributing_variant_and_component(
variants: dict[str, BaseVariant],
) -> tuple[BaseVariant, ContributingComponent | None]:
"""
Given the full set of variants, pick out the one which contributes, along with its contributing
component.
"""
if len(variants) == 1:
contributing_variant = list(variants.values())[0]
else:
contributing_variant = (
variants["app"]
# TODO: We won't need this 'if' once we stop returning both app and system contributing
# variants
if "app" in variants and variants["app"].contributes
# Other than in the broken app/system case, there should only ever be a single
# contributing variant
else [variant for variant in variants.values() if variant.contributes][0]
)
contributing_component = (
contributing_variant.contributing_component
if hasattr(contributing_variant, "contributing_component")
else None
)
return (contributing_variant, contributing_component)
|
BackgroundGroupingConfigLoader
|
python
|
qdrant__qdrant-client
|
qdrant_client/http/models/models.py
|
{
"start": 40381,
"end": 40991
}
|
class ____(BaseModel, extra="forbid"):
should: Optional[Union[List["Condition"], "Condition"]] = Field(
default=None, description="At least one of those conditions should match"
)
min_should: Optional["MinShould"] = Field(
default=None, description="At least minimum amount of given conditions should match"
)
must: Optional[Union[List["Condition"], "Condition"]] = Field(default=None, description="All conditions must match")
must_not: Optional[Union[List["Condition"], "Condition"]] = Field(
default=None, description="All conditions must NOT match"
)
|
Filter
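For context, client code typically builds this model along the following lines; the FieldCondition and MatchValue models referenced here are assumed to live alongside Filter in qdrant_client.http.models, and the payload keys are invented for illustration.
# Illustrative sketch; FieldCondition/MatchValue are assumed to be available
# next to Filter in qdrant_client.http.models, and the keys are made up.
from qdrant_client.http import models

flt = models.Filter(
    must=[models.FieldCondition(key="city", match=models.MatchValue(value="London"))],
    must_not=[models.FieldCondition(key="archived", match=models.MatchValue(value=True))],
)
print(flt)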
|
python
|
conda__conda
|
tests/shell/__init__.py
|
{
"start": 6498,
"end": 10911
}
|
class ____(metaclass=InteractiveShellType):
def __init__(
self,
shell: str | tuple[str, ...] | Shell,
*,
activator: str,
args: Iterable[str] = (),
init_command: str,
print_env_var: str,
exit_cmd: str | None = None,
base_shell: str | None = None, # ignored
env: dict[str, str] | None = None,
):
shell = Shell.resolve(shell)
self.shell_name = shell.name
self.shell_exe = quote_for_shell(shell.exe, *args)
self.shell_dir = dirname(shell.exe)
self.activator = activator_map[activator]()
self.args = args
self.init_command = init_command
self.print_env_var = print_env_var
self.exit_cmd = exit_cmd
self.env = env or {}
def __enter__(self):
self.p = PopenSpawn(
self.shell_exe,
timeout=30,
maxread=5000,
searchwindowsize=None,
logfile=sys.stdout,
cwd=os.getcwd(),
env={
**os.environ,
"CONDA_AUTO_ACTIVATE": "false",
"CONDA_AUTO_STACK": "0",
"CONDA_CHANGEPS1": "true",
# "CONDA_ENV_PROMPT": "({default_env}) ",
"PYTHONPATH": self.path_conversion(CONDA_SOURCE_ROOT),
"PATH": self.activator.pathsep_join(
self.path_conversion(
(
*self.activator._get_starting_path_list(),
self.shell_dir,
)
)
),
# ensure PATH is shared with any msys2 bash shell, rather than starting fresh
"MSYS2_PATH_TYPE": "inherit",
"CHERE_INVOKING": "1",
**self.env,
},
encoding="utf-8",
codec_errors="strict",
)
if self.init_command:
self.p.sendline(self.init_command)
# want CONDA_SHLVL=0 before running tests so deactivate any active environments
# since we do not know how many environments have been activated by the user/CI
# just to be safe deactivate a few times
for _ in range(5):
self.p.sendline("conda deactivate")
self.clear()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type is not None:
print(f"Exception encountered: ({exc_type}) {exc_val}", file=sys.stderr)
if self.p:
if self.exit_cmd:
self.sendline(self.exit_cmd)
self.p.kill(SIGINT)
def sendline(self, *args, **kwargs):
return self.p.sendline(*args, **kwargs)
def expect(self, *args, **kwargs):
try:
return self.p.expect(*args, **kwargs)
except Exception:
print(f"{self.p.before=}", file=sys.stderr)
print(f"{self.p.after=}", file=sys.stderr)
raise
def expect_exact(self, *args, **kwargs):
try:
return self.p.expect_exact(*args, **kwargs)
except Exception:
print(f"{self.p.before=}", file=sys.stderr)
print(f"{self.p.after=}", file=sys.stderr)
raise
def assert_env_var(self, env_var, value, use_exact=False):
# value is actually a regex
self.sendline(self.print_env_var % env_var)
if use_exact:
self.expect_exact(value)
self.clear()
else:
self.expect(rf"{value}\r?\n")
def get_env_var(self, env_var, default=None):
self.sendline(self.print_env_var % env_var)
if self.shell_name == "cmd.exe":
self.expect(rf"@ECHO %{env_var}%\r?\n([^\r\n]*)\r?\n")
elif self.shell_name in ("powershell", "pwsh"):
self.expect(rf"\$Env:{env_var}\r?\n([^\r\n]*)\r?\n")
else:
marker = f"get_env_var-{uuid4().hex}"
self.sendline(f"echo {marker}")
self.expect(rf"([^\r\n]*)\r?\n{marker}\r?\n")
value = self.p.match.group(1)
return default if value is None else value
def clear(self) -> None:
marker = f"clear-{uuid4().hex}"
self.sendline(f"echo {marker}")
self.expect(rf"{marker}\r?\n")
def path_conversion(self, *args, **kwargs):
return self.activator.path_conversion(*args, **kwargs)
|
InteractiveShell
|
python
|
keras-team__keras
|
keras/src/layers/preprocessing/text_vectorization_test.py
|
{
"start": 329,
"end": 10672
}
|
class ____(testing.TestCase, parameterized.TestCase):
# TODO: increase coverage. Most features aren't being tested.
def test_config(self):
layer = layers.TextVectorization(
output_mode="int",
vocabulary=["one", "two"],
output_sequence_length=5,
)
self.run_class_serialization_test(layer)
def test_adapt_flow(self):
max_tokens = 5000
max_len = 4
layer = layers.TextVectorization(
max_tokens=max_tokens,
output_mode="int",
output_sequence_length=max_len,
)
layer.adapt(["foo bar", "bar baz", "baz bada boom"])
input_data = [["foo qux bar"], ["qux baz"]]
output = layer(input_data)
self.assertTrue(backend.is_tensor(output))
self.assertAllClose(output, np.array([[4, 1, 3, 0], [1, 2, 0, 0]]))
def test_fixed_vocabulary(self):
max_tokens = 5000
max_len = 4
layer = layers.TextVectorization(
max_tokens=max_tokens,
output_mode="int",
output_sequence_length=max_len,
vocabulary=["baz", "bar", "foo"],
)
input_data = [["foo qux bar"], ["qux baz"]]
output = layer(input_data)
self.assertTrue(backend.is_tensor(output))
self.assertAllClose(output, np.array([[4, 1, 3, 0], [1, 2, 0, 0]]))
def test_set_vocabulary(self):
max_tokens = 5000
max_len = 4
layer = layers.TextVectorization(
max_tokens=max_tokens,
output_mode="int",
output_sequence_length=max_len,
)
layer.set_vocabulary(["baz", "bar", "foo"])
input_data = [["foo qux bar"], ["qux baz"]]
output = layer(input_data)
self.assertTrue(backend.is_tensor(output))
self.assertAllClose(output, np.array([[4, 1, 3, 0], [1, 2, 0, 0]]))
@pytest.mark.skipif(
backend.backend() != "tensorflow", reason="Requires string input dtype"
)
def test_save_load_with_ngrams_flow(self):
input_data = np.array(["foo bar", "bar baz", "baz bada boom"])
model = Sequential(
[
layers.Input(dtype="string", shape=(1,)),
layers.TextVectorization(ngrams=(1, 2)),
]
)
model.layers[0].adapt(input_data)
output = model(input_data)
temp_filepath = os.path.join(self.get_temp_dir(), "model.keras")
model.save(temp_filepath)
model = saving.load_model(temp_filepath)
self.assertAllClose(output, model(input_data))
def test_tf_data_compatibility(self):
max_tokens = 5000
max_len = 4
layer = layers.TextVectorization(
max_tokens=max_tokens,
output_mode="int",
output_sequence_length=max_len,
vocabulary=["baz", "bar", "foo"],
)
input_data = [["foo qux bar"], ["qux baz"]]
ds = tf_data.Dataset.from_tensor_slices(input_data).batch(2).map(layer)
output = next(iter(ds)).numpy()
self.assertAllClose(output, np.array([[4, 1, 3, 0], [1, 2, 0, 0]]))
# Test adapt flow
layer = layers.TextVectorization(
max_tokens=max_tokens,
output_mode="int",
output_sequence_length=max_len,
)
layer.adapt(input_data)
ds = tf_data.Dataset.from_tensor_slices(input_data).batch(2).map(layer)
next(iter(ds)).numpy()
@parameterized.named_parameters(
[
("from_ragged", "whitespace"), # intermediate tensor is ragged
("from_dense", None), # intermediate tensor is dense
]
)
def test_static_output_sequence_length(self, split):
max_tokens = 5000
max_len = 4
layer = layers.TextVectorization(
max_tokens=max_tokens,
output_mode="int",
output_sequence_length=max_len,
split=split,
vocabulary=["baz", "bar", "foo"],
)
if split:
input_data = [["foo qux bar"], ["qux baz"]]
else:
input_data = [["foo"], ["baz"]]
def call_layer(x):
result = layer(x)
self.assertEqual(result.shape, (None, 4))
return result
ds = (
tf_data.Dataset.from_tensor_slices(input_data)
.batch(2)
.map(call_layer)
)
next(iter(ds))
@pytest.mark.skipif(
backend.backend() != "tensorflow", reason="Requires string tensors."
)
def test_tf_as_first_sequential_layer(self):
layer = layers.TextVectorization(
max_tokens=10,
output_mode="int",
output_sequence_length=3,
)
layer.set_vocabulary(["baz", "bar", "foo"])
model = models.Sequential(
[
layer,
layers.Embedding(5, 4),
]
)
model(backend.convert_to_tensor([["foo qux bar"], ["qux baz"]]))
@pytest.mark.skipif(
backend.backend() != "tensorflow", reason="Requires ragged tensors."
)
def test_ragged_tensor(self):
layer = layers.TextVectorization(
output_mode="int",
vocabulary=["baz", "bar", "foo"],
ragged=True,
)
input_data = [["foo qux bar"], ["qux baz"], ["foo"]]
output = layer(input_data)
self.assertIsInstance(output, tf.RaggedTensor)
self.assertEqual(output.shape, (3, None))
self.assertEqual(output.to_list(), [[4, 1, 3], [1, 2], [4]])
@pytest.mark.skipif(
backend.backend() != "tensorflow", reason="Requires ragged tensors."
)
def test_ragged_tensor_output_length(self):
layer = layers.TextVectorization(
output_mode="int",
vocabulary=["baz", "bar", "foo"],
ragged=True,
output_sequence_length=2,
)
input_data = [["foo qux bar"], ["qux baz"], ["foo"]]
output = layer(input_data)
self.assertIsInstance(output, tf.RaggedTensor)
self.assertEqual(output.shape, (3, None))
self.assertEqual(output.to_list(), [[4, 1], [1, 2], [4]])
@pytest.mark.skipif(
backend.backend() == "tensorflow",
reason="Verify raises exception for non-TF backends",
)
def test_raises_exception_ragged_tensor(self):
with self.assertRaises(ValueError):
_ = layers.TextVectorization(
output_mode="int",
vocabulary=["baz", "bar", "foo"],
ragged=True,
)
def test_multi_hot_output(self):
layer = layers.TextVectorization(
output_mode="multi_hot", vocabulary=["foo", "bar", "baz"]
)
input_data = [["foo bar"], ["baz foo foo"]]
output = layer(input_data)
"""
First batch
Tokens present: ["foo", "bar"]
For each token in vocabulary:
foo (index 1): present -> 1
bar (index 2): present -> 1
baz (index 3): absent -> 0
Result: [0, 1, 1, 0]
Second batch
Tokens: ["baz", "foo", "foo"]
For each token in vocabulary:
foo (index 1): present -> 1
bar (index 2): absent -> 0
baz (index 3): present -> 1
Result: [0, 1, 0, 1]
"""
self.assertAllClose(output, [[0, 1, 1, 0], [0, 1, 0, 1]])
def test_output_mode_count_output(self):
layer = layers.TextVectorization(
output_mode="count", vocabulary=["foo", "bar", "baz"]
)
output = layer(["foo bar", "baz foo foo"])
self.assertAllClose(output, [[0, 1, 1, 0], [0, 2, 0, 1]])
def test_output_mode_tf_idf_output(self):
layer = layers.TextVectorization(
output_mode="tf_idf",
vocabulary=["foo", "bar", "baz"],
idf_weights=[0.3, 0.5, 0.2],
)
output = layer(["foo bar", "baz foo foo"])
self.assertAllClose(
output, [[0.0, 0.3, 0.5, 0.0], [0.0, 0.6, 0.0, 0.2]]
)
def test_lower_and_strip_punctuation_standardization(self):
layer = layers.TextVectorization(
standardize="lower_and_strip_punctuation",
vocabulary=["hello", "world", "this", "is", "nice", "test"],
)
output = layer(["Hello, World!. This is just a nice test!"])
self.assertTrue(backend.is_tensor(output))
# test output sequence length, taking first batch.
self.assertEqual(len(output[0]), 8)
self.assertAllEqual(output, [[2, 3, 4, 5, 1, 1, 6, 7]])
def test_lower_standardization(self):
layer = layers.TextVectorization(
standardize="lower",
vocabulary=[
"hello,",
"hello",
"world",
"this",
"is",
"nice",
"test",
],
)
output = layer(["Hello, World!. This is just a nice test!"])
self.assertTrue(backend.is_tensor(output))
self.assertEqual(len(output[0]), 8)
"""
The input is lowercased and tokenized into words. The vocab is:
{0: '',
1: '[UNK]',
2: 'hello,',
3: 'hello',
4: 'world',
5: 'this',
6: 'is',
7: 'nice',
8: 'test'}
"""
self.assertAllEqual(output, [[2, 1, 5, 6, 1, 1, 7, 1]])
def test_char_splitting(self):
layer = layers.TextVectorization(
split="character", vocabulary=list("abcde"), output_mode="int"
)
output = layer(["abcf"])
self.assertTrue(backend.is_tensor(output))
self.assertEqual(len(output[0]), 4)
self.assertAllEqual(output, [[2, 3, 4, 1]])
def test_custom_splitting(self):
def custom_split(text):
return tf.strings.split(text, sep="|")
layer = layers.TextVectorization(
split=custom_split,
vocabulary=["foo", "bar", "foobar"],
output_mode="int",
)
output = layer(["foo|bar"])
self.assertTrue(backend.is_tensor(output))
# after custom split, the outputted index should be the last
# token in the vocab.
self.assertAllEqual(output, [[4]])
|
TextVectorizationTest
|
python
|
airbytehq__airbyte
|
airbyte-ci/connectors/pipelines/pipelines/helpers/execution/run_steps.py
|
{
"start": 2340,
"end": 3933
}
|
class ____:
"""Options for the run_step function."""
fail_fast: bool = True
skip_steps: List[str] = field(default_factory=list)
keep_steps: List[str] = field(default_factory=list)
log_step_tree: bool = True
concurrency: int = 10
step_params: Dict[CONNECTOR_TEST_STEP_ID, STEP_PARAMS] = field(default_factory=dict)
def __post_init__(self) -> None:
if self.skip_steps and self.keep_steps:
raise ValueError("Cannot use both skip_steps and keep_steps at the same time")
def get_step_ids_to_skip(self, runnables: STEP_TREE) -> List[str]:
if self.skip_steps:
return self.skip_steps
if self.keep_steps:
step_ids_to_keep = set(self.keep_steps)
dependency_graph = _get_dependency_graph(runnables)
all_step_ids = set(dependency_graph.keys())
for step_id in self.keep_steps:
step_ids_to_keep.update(_get_transitive_dependencies_for_step_id(dependency_graph, step_id))
return list(all_step_ids - step_ids_to_keep)
return []
@staticmethod
def get_item_or_default(options: Dict[str, List[Any]], key: str, default: Any) -> Any: # noqa: ANN401
try:
item = dpath.util.get(options, key, separator="/")
except KeyError:
return default
if not isinstance(item, List):
return item
if len(item) > 1:
raise ValueError(f"Only one value for {key} is allowed. Got {len(item)}")
return item[0] if item else default
@dataclass(frozen=True)
|
RunStepOptions
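Since every field has a default and the class appears to be a dataclass (note the field(default_factory=...) usage), it is normally constructed with only the overrides needed; __post_init__ rejects combining skip_steps with keep_steps. The step ids below are invented for illustration.
# Illustrative construction; step ids are made up. Passing both skip_steps
# and keep_steps raises ValueError in __post_init__.
options = RunStepOptions(fail_fast=False, skip_steps=["unit_tests"])
assert options.concurrency == 10   # default value
try:
    RunStepOptions(skip_steps=["a"], keep_steps=["b"])
except ValueError as err:
    print(err)   # Cannot use both skip_steps and keep_steps at the same time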
|
python
|
coleifer__peewee
|
peewee.py
|
{
"start": 260307,
"end": 260933
}
|
class ____(_ModelQueryHelper):
def __init__(self, model, *args, **kwargs):
self.model = model
super(_ModelWriteQueryHelper, self).__init__(model, *args, **kwargs)
def returning(self, *returning):
accum = []
for item in returning:
if is_model(item):
accum.extend(item._meta.sorted_fields)
else:
accum.append(item)
return super(_ModelWriteQueryHelper, self).returning(*accum)
def _set_table_alias(self, ctx):
table = self.model._meta.table
ctx.alias_manager[table] = table.__name__
|
_ModelWriteQueryHelper
|
python
|
kamyu104__LeetCode-Solutions
|
Python/sum-of-distances-in-tree.py
|
{
"start": 50,
"end": 1111
}
|
class ____(object):
def sumOfDistancesInTree(self, N, edges):
"""
:type N: int
:type edges: List[List[int]]
:rtype: List[int]
"""
def dfs(graph, node, parent, count, result):
for nei in graph[node]:
if nei != parent:
dfs(graph, nei, node, count, result)
count[node] += count[nei]
result[node] += result[nei]+count[nei]
def dfs2(graph, node, parent, count, result):
for nei in graph[node]:
if nei != parent:
result[nei] = result[node]-count[nei] + \
len(count)-count[nei]
dfs2(graph, nei, node, count, result)
graph = collections.defaultdict(list)
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
count = [1] * N
result = [0] * N
dfs(graph, 0, None, count, result)
dfs2(graph, 0, None, count, result)
return result
|
Solution
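The two DFS passes above first accumulate subtree sizes and distances to the root, then re-root the result down the tree in O(N). As a quick check, the canonical example from the problem statement can be run as follows (assuming collections is imported at the top of the module, as in the original file).
# Usage sketch; `import collections` is assumed at module level.
# Example input/output taken from the LeetCode problem statement.
sol = Solution()
print(sol.sumOfDistancesInTree(6, [[0, 1], [0, 2], [2, 3], [2, 4], [2, 5]]))
# expected: [8, 12, 6, 10, 10, 10]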
|
python
|
numba__numba
|
numba/cuda/stubs.py
|
{
"start": 5131,
"end": 5611
}
|
class ____(Stub):
'''
match_all_sync(mask, value)
Nvvm intrinsic for performing a compare and broadcast across a warp.
Returns a tuple of (mask, pred), where mask is a mask of the threads within
the masked warp that have the same value as the given value, if they all have
the same value; otherwise it is 0. Pred is a boolean indicating whether all
threads in the masked warp have the same value.
'''
_description_ = '<match_all_sync()>'
|
match_all_sync
|
python
|
django__django
|
tests/foreign_object/tests.py
|
{
"start": 26100,
"end": 31278
}
|
class ____(TestCase):
def test_equality(self):
"""
The path_infos and reverse_path_infos attributes are equivalent to
calling the get_<method>() with no arguments.
"""
foreign_object = Membership._meta.get_field("person")
self.assertEqual(
foreign_object.path_infos,
foreign_object.get_path_info(),
)
self.assertEqual(
foreign_object.reverse_path_infos,
foreign_object.get_reverse_path_info(),
)
def test_copy_removes_direct_cached_values(self):
"""
Shallow copying a ForeignObject (or a ForeignObjectRel) removes the
object's direct cached PathInfo values.
"""
foreign_object = Membership._meta.get_field("person")
# Trigger storage of cached_property into ForeignObject's __dict__.
foreign_object.path_infos
foreign_object.reverse_path_infos
# The ForeignObjectRel doesn't have reverse_path_infos.
foreign_object.remote_field.path_infos
self.assertIn("path_infos", foreign_object.__dict__)
self.assertIn("reverse_path_infos", foreign_object.__dict__)
self.assertIn("path_infos", foreign_object.remote_field.__dict__)
# Cached value is removed via __getstate__() on ForeignObjectRel
# because no __copy__() method exists, so __reduce_ex__() is used.
remote_field_copy = copy.copy(foreign_object.remote_field)
self.assertNotIn("path_infos", remote_field_copy.__dict__)
# Cached values are removed via __copy__() on ForeignObject for
# consistency of behavior.
foreign_object_copy = copy.copy(foreign_object)
self.assertNotIn("path_infos", foreign_object_copy.__dict__)
self.assertNotIn("reverse_path_infos", foreign_object_copy.__dict__)
# ForeignObjectRel's remains because it's part of a shallow copy.
self.assertIn("path_infos", foreign_object_copy.remote_field.__dict__)
def test_deepcopy_removes_cached_values(self):
"""
Deep copying a ForeignObject removes the object's cached PathInfo
values, including those of the related ForeignObjectRel.
"""
foreign_object = Membership._meta.get_field("person")
# Trigger storage of cached_property into ForeignObject's __dict__.
foreign_object.path_infos
foreign_object.reverse_path_infos
# The ForeignObjectRel doesn't have reverse_path_infos.
foreign_object.remote_field.path_infos
self.assertIn("path_infos", foreign_object.__dict__)
self.assertIn("reverse_path_infos", foreign_object.__dict__)
self.assertIn("path_infos", foreign_object.remote_field.__dict__)
# Cached value is removed via __getstate__() on ForeignObjectRel
# because no __deepcopy__() method exists, so __reduce_ex__() is used.
remote_field_copy = copy.deepcopy(foreign_object.remote_field)
self.assertNotIn("path_infos", remote_field_copy.__dict__)
# Field.__deepcopy__() internally uses __copy__() on both the
# ForeignObject and ForeignObjectRel, so all cached values are removed.
foreign_object_copy = copy.deepcopy(foreign_object)
self.assertNotIn("path_infos", foreign_object_copy.__dict__)
self.assertNotIn("reverse_path_infos", foreign_object_copy.__dict__)
self.assertNotIn("path_infos", foreign_object_copy.remote_field.__dict__)
def test_pickling_foreignobjectrel(self):
"""
Pickling a ForeignObjectRel removes the path_infos attribute.
ForeignObjectRel implements __getstate__(), so copy and pickle modules
both use that, but ForeignObject implements __reduce__() and __copy__()
separately, so doesn't share the same behavior.
"""
foreign_object_rel = Membership._meta.get_field("person").remote_field
# Trigger storage of cached_property into ForeignObjectRel's __dict__.
foreign_object_rel.path_infos
self.assertIn("path_infos", foreign_object_rel.__dict__)
foreign_object_rel_restored = pickle.loads(pickle.dumps(foreign_object_rel))
self.assertNotIn("path_infos", foreign_object_rel_restored.__dict__)
def test_pickling_foreignobject(self):
"""
Pickling a ForeignObject does not remove the cached PathInfo values.
ForeignObject will always keep the path_infos and reverse_path_infos
attributes within the same process, because of the way
Field.__reduce__() is used for restoring values.
"""
foreign_object = Membership._meta.get_field("person")
# Trigger storage of cached_property into ForeignObjectRel's __dict__
foreign_object.path_infos
foreign_object.reverse_path_infos
self.assertIn("path_infos", foreign_object.__dict__)
self.assertIn("reverse_path_infos", foreign_object.__dict__)
foreign_object_restored = pickle.loads(pickle.dumps(foreign_object))
self.assertIn("path_infos", foreign_object_restored.__dict__)
self.assertIn("reverse_path_infos", foreign_object_restored.__dict__)
|
TestCachedPathInfo
|
python
|
tensorflow__tensorflow
|
tensorflow/python/distribute/one_device_strategy_test.py
|
{
"start": 6539,
"end": 7013
}
|
class ____(
strategy_test_lib.DistributionTestBase,
strategy_test_lib.OneDeviceDistributionTestBase):
def testDeviceAndInputDeviceAreColocated(self, distribution):
self._test_device_and_input_device_are_colocated(distribution)
def testDeviceAndInputDeviceAreColocatedWithFunction(self, distribution):
self._test_device_and_input_device_are_colocated_with_function(distribution)
if __name__ == "__main__":
test.main()
|
OneDeviceStrategyOnRemoteWorkerTest
|
python
|
ethereum__web3.py
|
web3/types.py
|
{
"start": 6881,
"end": 7007
}
|
class ____(TypedDict, total=False):
id: RPCId
jsonrpc: Literal["2.0"]
method: RPCEndpoint
params: Any
|
RPCRequest
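Because the TypedDict is declared with total=False, every key is optional at the type level and a request payload is just a dict with the recognized keys. A minimal sketch follows (the method name and params are illustrative; RPCEndpoint is assumed to be the NewType over str exported from web3.types).
# Illustrative payload typed as RPCRequest; method/params are made up.
from web3.types import RPCEndpoint, RPCRequest

request: RPCRequest = {
    "jsonrpc": "2.0",
    "method": RPCEndpoint("eth_blockNumber"),
    "params": [],
    "id": 1,
}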
|
python
|
allegroai__clearml
|
clearml/backend_api/services/v2_23/tasks.py
|
{
"start": 463071,
"end": 467111
}
|
class ____(Request):
"""
Publish tasks
:param ids: IDs of the tasks to publish
:type ids: Sequence[str]
:param status_reason: Reason for status change
:type status_reason: str
:param status_message: Extra information regarding status change
:type status_message: str
:param force: If not true, call fails if the task status is not 'stopped'
:type force: bool
:param publish_model: Indicates that the task output model (if exists) should
be published. Optional, the default value is True.
:type publish_model: bool
"""
_service = "tasks"
_action = "publish_many"
_version = "2.23"
_schema = {
"definitions": {},
"properties": {
"force": {
"default": False,
"description": "If not true, call fails if the task status is not 'stopped'",
"type": "boolean",
},
"ids": {
"description": "IDs of the tasks to publish",
"items": {"type": "string"},
"type": "array",
},
"publish_model": {
"description": (
"Indicates that the task output model (if exists) should be published. Optional, the default value "
"is True."
),
"type": "boolean",
},
"status_message": {
"description": "Extra information regarding status change",
"type": "string",
},
"status_reason": {
"description": "Reason for status change",
"type": "string",
},
},
"required": ["ids"],
"type": "object",
}
def __init__(
self,
ids,
status_reason=None,
status_message=None,
force=False,
publish_model=None,
**kwargs
):
super(PublishManyRequest, self).__init__(**kwargs)
self.ids = ids
self.status_reason = status_reason
self.status_message = status_message
self.force = force
self.publish_model = publish_model
@schema_property("ids")
def ids(self):
return self._property_ids
@ids.setter
def ids(self, value):
if value is None:
self._property_ids = None
return
self.assert_isinstance(value, "ids", (list, tuple))
self.assert_isinstance(value, "ids", six.string_types, is_array=True)
self._property_ids = value
@schema_property("status_reason")
def status_reason(self):
return self._property_status_reason
@status_reason.setter
def status_reason(self, value):
if value is None:
self._property_status_reason = None
return
self.assert_isinstance(value, "status_reason", six.string_types)
self._property_status_reason = value
@schema_property("status_message")
def status_message(self):
return self._property_status_message
@status_message.setter
def status_message(self, value):
if value is None:
self._property_status_message = None
return
self.assert_isinstance(value, "status_message", six.string_types)
self._property_status_message = value
@schema_property("force")
def force(self):
return self._property_force
@force.setter
def force(self, value):
if value is None:
self._property_force = None
return
self.assert_isinstance(value, "force", (bool,))
self._property_force = value
@schema_property("publish_model")
def publish_model(self):
return self._property_publish_model
@publish_model.setter
def publish_model(self, value):
if value is None:
self._property_publish_model = None
return
self.assert_isinstance(value, "publish_model", (bool,))
self._property_publish_model = value
|
PublishManyRequest
|
python
|
ray-project__ray
|
python/ray/serve/tests/unit/test_schema.py
|
{
"start": 22714,
"end": 29342
}
|
class ____:
def test_deploy_config_duplicate_apps(self):
deploy_config_dict = {
"applications": [
{
"name": "app1",
"route_prefix": "/alice",
"import_path": "module.graph",
},
{
"name": "app2",
"route_prefix": "/charlie",
"import_path": "module.graph",
},
],
}
ServeDeploySchema.parse_obj(deploy_config_dict)
# Duplicate app1
deploy_config_dict["applications"].append(
{"name": "app1", "route_prefix": "/bob", "import_path": "module.graph"},
)
with pytest.raises(ValidationError) as e:
ServeDeploySchema.parse_obj(deploy_config_dict)
assert "app1" in str(e.value) and "app2" not in str(e.value)
# Duplicate app2
deploy_config_dict["applications"].append(
{"name": "app2", "route_prefix": "/david", "import_path": "module.graph"}
)
with pytest.raises(ValidationError) as e:
ServeDeploySchema.parse_obj(deploy_config_dict)
assert "app1" in str(e.value) and "app2" in str(e.value)
def test_deploy_config_duplicate_routes1(self):
"""Test that apps with duplicate route prefixes raises validation error"""
deploy_config_dict = {
"applications": [
{
"name": "app1",
"route_prefix": "/alice",
"import_path": "module.graph",
},
{"name": "app2", "route_prefix": "/bob", "import_path": "module.graph"},
],
}
ServeDeploySchema.parse_obj(deploy_config_dict)
# Duplicate route prefix /alice
deploy_config_dict["applications"].append(
{"name": "app3", "route_prefix": "/alice", "import_path": "module.graph"},
)
with pytest.raises(ValidationError) as e:
ServeDeploySchema.parse_obj(deploy_config_dict)
assert "alice" in str(e.value) and "bob" not in str(e.value)
# Duplicate route prefix /bob
deploy_config_dict["applications"].append(
{"name": "app4", "route_prefix": "/bob", "import_path": "module.graph"},
)
with pytest.raises(ValidationError) as e:
ServeDeploySchema.parse_obj(deploy_config_dict)
assert "alice" in str(e.value) and "bob" in str(e.value)
def test_deploy_config_duplicate_routes2(self):
"""Test that multiple apps with route_prefix set to None parses with no error"""
deploy_config_dict = {
"applications": [
{
"name": "app1",
"route_prefix": "/app1",
"import_path": "module.graph",
},
{"name": "app2", "route_prefix": None, "import_path": "module.graph"},
{"name": "app3", "route_prefix": None, "import_path": "module.graph"},
],
}
ServeDeploySchema.parse_obj(deploy_config_dict)
@pytest.mark.parametrize("option,value", [("host", "127.0.0.1"), ("port", 8000)])
def test_deploy_config_nested_http_options(self, option, value):
"""
The application configs inside a deploy config should not have http options set.
"""
deploy_config_dict = {
"http_options": {
"host": "127.0.0.1",
"port": 8000,
},
"applications": [
{
"name": "app1",
"route_prefix": "/app1",
"import_path": "module.graph",
},
],
}
deploy_config_dict["applications"][0][option] = value
with pytest.raises(ValidationError) as e:
ServeDeploySchema.parse_obj(deploy_config_dict)
assert option in str(e.value)
def test_deploy_empty_name(self):
"""The application configs inside a deploy config should have nonempty names."""
deploy_config_dict = {
"applications": [
{
"name": "",
"route_prefix": "/app1",
"import_path": "module.graph",
},
],
}
with pytest.raises(ValidationError) as e:
ServeDeploySchema.parse_obj(deploy_config_dict)
# Error message should be descriptive, mention name must be nonempty
assert "name" in str(e.value) and "empty" in str(e.value)
def test_deploy_no_applications(self):
"""Applications must be specified."""
deploy_config_dict = {
"http_options": {
"host": "127.0.0.1",
"port": 8000,
},
}
with pytest.raises(ValidationError):
ServeDeploySchema.parse_obj(deploy_config_dict)
def test_deploy_with_grpc_options(self):
"""gRPC options can be specified."""
deploy_config_dict = {
"grpc_options": {
"port": 9000,
"grpc_servicer_functions": ["foo.bar"],
},
"applications": [],
}
ServeDeploySchema.parse_obj(deploy_config_dict)
@pytest.mark.parametrize(
"input_val,error,output_val",
[
# Can be omitted and defaults to `None`.
(None, False, None),
# Can be an int or a float.
(50, False, 50),
(33.33, False, 33.33), # "... repeating, of course."
# Can be 0 or 100, inclusive.
(0, False, 0.0),
(0.0, False, 0.0),
(100, False, 100.0),
(100.0, False, 100.0),
# Cannot be < 0 or > 100.
(-0.1, True, None),
(-1, True, None),
(100.1, True, None),
(101, True, None),
],
)
def test_target_capacity(
self,
input_val: Union[None, int, float],
error: bool,
output_val: Optional[float],
):
"""Test validation of `target_capacity` field."""
deploy_config_dict = {
"applications": [],
}
if input_val is not None:
deploy_config_dict["target_capacity"] = input_val
if error:
with pytest.raises(ValidationError):
ServeDeploySchema.parse_obj(deploy_config_dict)
else:
s = ServeDeploySchema.parse_obj(deploy_config_dict)
assert s.target_capacity == output_val
|
TestServeDeploySchema
|
python
|
langchain-ai__langchain
|
libs/core/langchain_core/runnables/graph_ascii.py
|
{
"start": 658,
"end": 1371
}
|
class ____:
"""VertexViewer class.
Class to define vertex box boundaries that will be accounted for during
graph building by grandalf.
"""
HEIGHT = 3 # top and bottom box edges + text
"""Height of the box."""
def __init__(self, name: str) -> None:
"""Create a VertexViewer.
Args:
name: name of the vertex.
"""
self._h = self.HEIGHT # top and bottom box edges + text
self._w = len(name) + 2 # left and right box edges + text
@property
def h(self) -> int:
"""Height of the box."""
return self._h
@property
def w(self) -> int:
"""Width of the box."""
return self._w
|
VertexViewer
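A one-line check makes the sizing rule concrete: every box is three rows high, and its width is the label length plus the two side edges. (Sketch only; grandalf consumes these dimensions during layout.)
# Quick check of the sizing rule described in the class above.
viewer = VertexViewer("my_node")
assert viewer.h == 3                    # top edge + text row + bottom edge
assert viewer.w == len("my_node") + 2   # text plus left/right edges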
|
python
|
allegroai__clearml
|
clearml/backend_api/services/v2_20/models.py
|
{
"start": 145245,
"end": 151614
}
|
class ____(Request):
"""
Create or update a new model for a task
:param task: Task id
:type task: str
:param uri: URI for the model. Exactly one of uri or override_model_id is
required.
:type uri: str
:param name: Model name Unique within the company.
:type name: str
:param comment: Model comment
:type comment: str
:param tags: User-defined tags list
:type tags: Sequence[str]
:param system_tags: System tags list. This field is reserved for system use,
please don't use it.
:type system_tags: Sequence[str]
:param override_model_id: Override model ID. If provided, this model is updated
in the task. Exactly one of override_model_id or uri is required.
:type override_model_id: str
:param iteration: Iteration (used to update task statistics)
:type iteration: int
"""
_service = "models"
_action = "update_for_task"
_version = "2.20"
_schema = {
"definitions": {},
"properties": {
"comment": {"description": "Model comment", "type": "string"},
"iteration": {
"description": "Iteration (used to update task statistics)",
"type": "integer",
},
"name": {
"description": "Model name Unique within the company.",
"type": "string",
},
"override_model_id": {
"description": "Override model ID. If provided, this model is updated in the task. Exactly one of override_model_id or uri is required.",
"type": "string",
},
"system_tags": {
"description": "System tags list. This field is reserved for system use, please don't use it.",
"items": {"type": "string"},
"type": "array",
},
"tags": {
"description": "User-defined tags list",
"items": {"type": "string"},
"type": "array",
},
"task": {"description": "Task id", "type": "string"},
"uri": {
"description": "URI for the model. Exactly one of uri or override_model_id is a required.",
"type": "string",
},
},
"required": ["task"],
"type": "object",
}
def __init__(
self,
task: str,
uri: Optional[str] = None,
name: Optional[str] = None,
comment: Optional[str] = None,
tags: Optional[List[str]] = None,
system_tags: Optional[List[str]] = None,
override_model_id: Optional[str] = None,
iteration: Optional[int] = None,
**kwargs: Any
) -> None:
super(UpdateForTaskRequest, self).__init__(**kwargs)
self.task = task
self.uri = uri
self.name = name
self.comment = comment
self.tags = tags
self.system_tags = system_tags
self.override_model_id = override_model_id
self.iteration = iteration
@schema_property("task")
def task(self) -> str:
return self._property_task
@task.setter
def task(self, value: str) -> None:
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", six.string_types)
self._property_task = value
@schema_property("uri")
def uri(self) -> Optional[str]:
return self._property_uri
@uri.setter
def uri(self, value: Optional[str]) -> None:
if value is None:
self._property_uri = None
return
self.assert_isinstance(value, "uri", six.string_types)
self._property_uri = value
@schema_property("name")
def name(self) -> Optional[str]:
return self._property_name
@name.setter
def name(self, value: Optional[str]) -> None:
if value is None:
self._property_name = None
return
self.assert_isinstance(value, "name", six.string_types)
self._property_name = value
@schema_property("comment")
def comment(self) -> Optional[str]:
return self._property_comment
@comment.setter
def comment(self, value: Optional[str]) -> None:
if value is None:
self._property_comment = None
return
self.assert_isinstance(value, "comment", six.string_types)
self._property_comment = value
@schema_property("tags")
def tags(self) -> Optional[List[str]]:
return self._property_tags
@tags.setter
def tags(self, value: Optional[List[str]]) -> None:
if value is None:
self._property_tags = None
return
self.assert_isinstance(value, "tags", (list, tuple))
self.assert_isinstance(value, "tags", six.string_types, is_array=True)
self._property_tags = value
@schema_property("system_tags")
def system_tags(self) -> Optional[List[str]]:
return self._property_system_tags
@system_tags.setter
def system_tags(self, value: Optional[List[str]]) -> None:
if value is None:
self._property_system_tags = None
return
self.assert_isinstance(value, "system_tags", (list, tuple))
self.assert_isinstance(value, "system_tags", six.string_types, is_array=True)
self._property_system_tags = value
@schema_property("override_model_id")
def override_model_id(self) -> Optional[str]:
return self._property_override_model_id
@override_model_id.setter
def override_model_id(self, value: Optional[str]) -> None:
if value is None:
self._property_override_model_id = None
return
self.assert_isinstance(value, "override_model_id", six.string_types)
self._property_override_model_id = value
@schema_property("iteration")
def iteration(self) -> Optional[int]:
return self._property_iteration
@iteration.setter
def iteration(self, value: Optional[int]) -> None:
if value is None:
self._property_iteration = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "iteration", six.integer_types)
self._property_iteration = value
|
UpdateForTaskRequest
|
python
|
aimacode__aima-python
|
utils4e.py
|
{
"start": 548,
"end": 12877
}
|
class ____:
"""A Queue in which the minimum (or maximum) element (as determined by f and order) is returned first.
If order is 'min', the item with minimum f(x) is
returned first; if order is 'max', then it is the item with maximum f(x).
Also supports dict-like lookup."""
def __init__(self, order='min', f=lambda x: x):
self.heap = []
if order == 'min':
self.f = f
elif order == 'max': # now item with max f(x)
self.f = lambda x: -f(x) # will be popped first
else:
raise ValueError("Order must be either 'min' or 'max'.")
def append(self, item):
"""Insert item at its correct position."""
heapq.heappush(self.heap, (self.f(item), item))
def extend(self, items):
"""Insert each item in items at its correct position."""
for item in items:
self.append(item)
def pop(self):
"""Pop and return the item (with min or max f(x) value)
depending on the order."""
if self.heap:
return heapq.heappop(self.heap)[1]
else:
raise Exception('Trying to pop from empty PriorityQueue.')
def __len__(self):
"""Return current capacity of PriorityQueue."""
return len(self.heap)
def __contains__(self, key):
"""Return True if the key is in PriorityQueue."""
return any([item == key for _, item in self.heap])
def __getitem__(self, key):
"""Returns the first value associated with key in PriorityQueue.
Raises KeyError if key is not present."""
for value, item in self.heap:
if item == key:
return value
raise KeyError(str(key) + " is not in the priority queue")
def __delitem__(self, key):
"""Delete the first occurrence of key."""
try:
del self.heap[[item == key for _, item in self.heap].index(True)]
except ValueError:
raise KeyError(str(key) + " is not in the priority queue")
heapq.heapify(self.heap)
# ______________________________________________________________________________
# Functions on Sequences and Iterables
def sequence(iterable):
"""Converts iterable to sequence, if it is not already one."""
return (iterable if isinstance(iterable, collections.abc.Sequence)
else tuple([iterable]))
def remove_all(item, seq):
"""Return a copy of seq (or string) with all occurrences of item removed."""
if isinstance(seq, str):
return seq.replace(item, '')
elif isinstance(seq, set):
rest = seq.copy()
rest.remove(item)
return rest
else:
return [x for x in seq if x != item]
def unique(seq):
"""Remove duplicate elements from seq. Assumes hashable elements."""
return list(set(seq))
def count(seq):
"""Count the number of items in sequence that are interpreted as true."""
return sum(map(bool, seq))
def multimap(items):
"""Given (key, val) pairs, return {key: [val, ....], ...}."""
result = collections.defaultdict(list)
for (key, val) in items:
result[key].append(val)
return dict(result)
def multimap_items(mmap):
"""Yield all (key, val) pairs stored in the multimap."""
for (key, vals) in mmap.items():
for val in vals:
yield key, val
def product(numbers):
"""Return the product of the numbers, e.g. product([2, 3, 10]) == 60"""
result = 1
for x in numbers:
result *= x
return result
def first(iterable, default=None):
"""Return the first element of an iterable; or default."""
return next(iter(iterable), default)
def is_in(elt, seq):
"""Similar to (elt in seq), but compares with 'is', not '=='."""
return any(x is elt for x in seq)
def mode(data):
"""Return the most common data item. If there are ties, return any one of them."""
[(item, count)] = collections.Counter(data).most_common(1)
return item
def power_set(iterable):
"""power_set([1,2,3]) --> (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"""
s = list(iterable)
return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))[1:]
def extend(s, var, val):
"""Copy dict s and extend it by setting var to val; return copy."""
return {**s, var: val}
def flatten(seqs):
return sum(seqs, [])
# ______________________________________________________________________________
# argmin and argmax
identity = lambda x: x
def argmin_random_tie(seq, key=identity):
"""Return a minimum element of seq; break ties at random."""
return min(shuffled(seq), key=key)
def argmax_random_tie(seq, key=identity):
"""Return an element with highest fn(seq[i]) score; break ties at random."""
return max(shuffled(seq), key=key)
def shuffled(iterable):
"""Randomly shuffle a copy of iterable."""
items = list(iterable)
random.shuffle(items)
return items
# part2. Mathematical and Statistical util functions
# ______________________________________________________________________________
def histogram(values, mode=0, bin_function=None):
"""Return a list of (value, count) pairs, summarizing the input values.
Sorted by increasing value, or if mode=1, by decreasing count.
If bin_function is given, map it over values first."""
if bin_function:
values = map(bin_function, values)
bins = {}
for val in values:
bins[val] = bins.get(val, 0) + 1
if mode:
return sorted(list(bins.items()), key=lambda x: (x[1], x[0]), reverse=True)
else:
return sorted(bins.items())
def element_wise_product(x, y):
if hasattr(x, '__iter__') and hasattr(y, '__iter__'):
assert len(x) == len(y)
return [element_wise_product(_x, _y) for _x, _y in zip(x, y)]
elif hasattr(x, '__iter__') == hasattr(y, '__iter__'):
return x * y
else:
raise Exception('Inputs must be of the same size!')
def vector_add(a, b):
"""Component-wise addition of two vectors."""
if not (a and b):
return a or b
if hasattr(a, '__iter__') and hasattr(b, '__iter__'):
assert len(a) == len(b)
return list(map(vector_add, a, b))
else:
try:
return a + b
except TypeError:
raise Exception('Inputs must be of the same size!')
def scalar_vector_product(x, y):
"""Return vector as a product of a scalar and a vector recursively."""
return [scalar_vector_product(x, _y) for _y in y] if hasattr(y, '__iter__') else x * y
def map_vector(f, x):
"""Apply function f to iterable x."""
return [map_vector(f, _x) for _x in x] if hasattr(x, '__iter__') else list(map(f, [x]))[0]
def probability(p):
"""Return true with probability p."""
return p > random.uniform(0.0, 1.0)
def weighted_sample_with_replacement(n, seq, weights):
"""Pick n samples from seq at random, with replacement, with the
probability of each element in proportion to its corresponding
weight."""
sample = weighted_sampler(seq, weights)
return [sample() for _ in range(n)]
def weighted_sampler(seq, weights):
"""Return a random-sample function that picks from seq weighted by weights."""
totals = []
for w in weights:
totals.append(w + totals[-1] if totals else w)
return lambda: seq[bisect.bisect(totals, random.uniform(0, totals[-1]))]
def weighted_choice(choices):
"""A weighted version of random.choice"""
# NOTE: Should be replaced by random.choices if we port to Python 3.6
total = sum(w for _, w in choices)
r = random.uniform(0, total)
upto = 0
for c, w in choices:
if upto + w >= r:
return c, w
upto += w
def rounder(numbers, d=4):
"""Round a single number, or sequence of numbers, to d decimal places."""
if isinstance(numbers, (int, float)):
return round(numbers, d)
else:
constructor = type(numbers) # Can be list, set, tuple, etc.
return constructor(rounder(n, d) for n in numbers)
def num_or_str(x): # TODO: rename as `atom`
"""The argument is a string; convert to a number if
possible, or strip it."""
try:
return int(x)
except ValueError:
try:
return float(x)
except ValueError:
return str(x).strip()
def euclidean_distance(x, y):
return np.sqrt(sum((_x - _y) ** 2 for _x, _y in zip(x, y)))
def manhattan_distance(x, y):
return sum(abs(_x - _y) for _x, _y in zip(x, y))
def hamming_distance(x, y):
return sum(_x != _y for _x, _y in zip(x, y))
def rms_error(x, y):
return np.sqrt(ms_error(x, y))
def ms_error(x, y):
return mean((x - y) ** 2 for x, y in zip(x, y))
def mean_error(x, y):
return mean(abs(x - y) for x, y in zip(x, y))
def mean_boolean_error(x, y):
return mean(_x != _y for _x, _y in zip(x, y))
# part3. Neural network util functions
# ______________________________________________________________________________
def cross_entropy_loss(x, y):
"""Cross entropy loss function. x and y are 1D iterable objects."""
return (-1.0 / len(x)) * sum(_x * np.log(_y) + (1 - _x) * np.log(1 - _y) for _x, _y in zip(x, y))
def mean_squared_error_loss(x, y):
"""Min square loss function. x and y are 1D iterable objects."""
return (1.0 / len(x)) * sum((_x - _y) ** 2 for _x, _y in zip(x, y))
def normalize(dist):
"""Multiply each number by a constant such that the sum is 1.0"""
if isinstance(dist, dict):
total = sum(dist.values())
for key in dist:
dist[key] = dist[key] / total
assert 0 <= dist[key] <= 1 # probabilities must be between 0 and 1
return dist
total = sum(dist)
return [(n / total) for n in dist]
def random_weights(min_value, max_value, num_weights):
return [random.uniform(min_value, max_value) for _ in range(num_weights)]
def conv1D(x, k):
"""1D convolution. x: input vector; K: kernel vector."""
return np.convolve(x, k, mode='same')
def gaussian_kernel(size=3):
return [gaussian((size - 1) / 2, 0.1, x) for x in range(size)]
def gaussian_kernel_1D(size=3, sigma=0.5):
return [gaussian((size - 1) / 2, sigma, x) for x in range(size)]
def gaussian_kernel_2D(size=3, sigma=0.5):
x, y = np.mgrid[-size // 2 + 1:size // 2 + 1, -size // 2 + 1:size // 2 + 1]
g = np.exp(-((x ** 2 + y ** 2) / (2.0 * sigma ** 2)))
return g / g.sum()
def step(x):
"""Return activation value of x with sign function."""
return 1 if x >= 0 else 0
def gaussian(mean, st_dev, x):
"""Given the mean and standard deviation of a distribution, it returns the probability of x."""
return 1 / (np.sqrt(2 * np.pi) * st_dev) * np.exp(-0.5 * (float(x - mean) / st_dev) ** 2)
def linear_kernel(x, y=None):
if y is None:
y = x
return np.dot(x, y.T)
def polynomial_kernel(x, y=None, degree=2.0):
if y is None:
y = x
return (1.0 + np.dot(x, y.T)) ** degree
def rbf_kernel(x, y=None, gamma=None):
"""Radial-basis function kernel (aka squared-exponential kernel)."""
if y is None:
y = x
if gamma is None:
gamma = 1.0 / x.shape[1] # 1.0 / n_features
return np.exp(-gamma * (-2.0 * np.dot(x, y.T) +
np.sum(x * x, axis=1).reshape((-1, 1)) + np.sum(y * y, axis=1).reshape((1, -1))))
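def _demo_kernels():
    # Hedged, illustrative usage sketch (not part of the original module): the kernel
    # helpers above expect 2D numpy arrays of shape (n_samples, n_features).
    x = np.array([[0.0, 1.0], [1.0, 0.0]])
    return linear_kernel(x), polynomial_kernel(x, degree=2.0), rbf_kernel(x, gamma=0.5)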
# part4. Self defined data structures
# ______________________________________________________________________________
# Grid Functions
orientations = EAST, NORTH, WEST, SOUTH = [(1, 0), (0, 1), (-1, 0), (0, -1)]
turns = LEFT, RIGHT = (+1, -1)
def turn_heading(heading, inc, headings=orientations):
return headings[(headings.index(heading) + inc) % len(headings)]
def turn_right(heading):
return turn_heading(heading, RIGHT)
def turn_left(heading):
return turn_heading(heading, LEFT)
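def _demo_turns():
    # Hedged, illustrative usage sketch (not part of the original module): headings
    # are (dx, dy) unit vectors, so turning right from NORTH faces EAST.
    assert turn_right(NORTH) == EAST
    assert turn_left(NORTH) == WEST
    return turn_heading(EAST, RIGHT)  # -> SOUTH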
def distance(a, b):
"""The distance between two (x, y) points."""
xA, yA = a
xB, yB = b
return np.hypot((xA - xB), (yA - yB))
def distance_squared(a, b):
"""The square of the distance between two (x, y) points."""
xA, yA = a
xB, yB = b
return (xA - xB) ** 2 + (yA - yB) ** 2
# ______________________________________________________________________________
# Misc Functions
|
PriorityQueue
|
python
|
getsentry__sentry
|
src/sentry/workflow_engine/types.py
|
{
"start": 8676,
"end": 9062
}
|
class ____(ABC):
"""
A ConfigTransformer is used to transform the config between API and internal representations.
"""
@abstractmethod
def from_api(self, config: dict[str, Any]) -> dict[str, Any]:
raise NotImplementedError
@abstractmethod
def to_api(self, config: dict[str, Any]) -> dict[str, Any]:
raise NotImplementedError
|
ConfigTransformer
|
python
|
mwaskom__seaborn
|
seaborn/_marks/line.py
|
{
"start": 7156,
"end": 7505
}
|
class ____(Paths):
"""
A faster but less-flexible mark for drawing many lines.
See also
--------
Line : A mark connecting data points with sorting along the orientation axis.
Examples
--------
.. include:: ../docstrings/objects.Lines.rst
"""
_sort: ClassVar[bool] = True
@document_properties
@dataclass
|
Lines
|
python
|
getsentry__sentry
|
src/sentry/backup/comparators.py
|
{
"start": 16714,
"end": 21038
}
|
class ____(ObfuscatingComparator):
"""
Comparator that safely truncates passwords to ensure that they do not leak out in logs, stack
traces, etc. Additionally, it validates that the left and right "claimed" status is correct.
Namely, we want the following behaviors:
- If the left side is `is_unclaimed = True` but the right side is `is_unclaimed = False`, error.
- If the right side is `is_unclaimed = True`, make sure the password has changed.
- If the right side is `is_unclaimed = False`, make sure that the password stays the same.
"""
def __init__(self):
super().__init__("password")
def compare(self, on: InstanceID, left: Any, right: Any) -> list[ComparatorFinding]:
findings = []
# Error case: there is no importing action that can "claim" a user.
if left["fields"].get("is_unclaimed") and not right["fields"].get("is_unclaimed"):
findings.append(
ComparatorFinding(
kind=self.get_kind(),
on=on,
left_pk=left["pk"],
right_pk=right["pk"],
reason="""the left value of `is_unclaimed` was `True` but the right value was `False`, even though the act of importing cannot claim users""",
)
)
# Old user, all fields must remain constant.
if not right["fields"].get("is_unclaimed"):
findings.extend(super().compare(on, left, right))
return findings
# New user, password must change.
left_password = left["fields"]["password"]
right_password = right["fields"]["password"]
left_lpc = left["fields"].get("last_password_change") or UNIX_EPOCH
right_lpc = right["fields"].get("last_password_change") or UNIX_EPOCH
if left_password == right_password:
left_pw_truncated = self.truncate(
[left_password] if not isinstance(left_password, list) else left_password
)[0]
right_pw_truncated = self.truncate(
[right_password] if not isinstance(right_password, list) else right_password
)[0]
findings.append(
ComparatorFinding(
kind=self.get_kind(),
on=on,
left_pk=left["pk"],
right_pk=right["pk"],
reason=f"""the left value ("{left_pw_truncated}") of `password` was equal to the
right value ("{right_pw_truncated}"), which is disallowed when
`is_unclaimed` is `True`""",
)
)
# Ensure that the `last_password_change` field was not nulled or less than the left side.
if parser.parse(left_lpc) > parser.parse(right_lpc):
findings.append(
ComparatorFinding(
kind=self.get_kind(),
on=on,
left_pk=left["pk"],
right_pk=right["pk"],
reason=f"""the left value ({left_lpc}) of `last_password_change` was not less than or equal to the right value ({right_lpc})""",
)
)
if right["fields"].get("is_password_expired"):
findings.append(
ComparatorFinding(
kind=self.get_kind(),
on=on,
left_pk=left["pk"],
right_pk=right["pk"],
reason="""the right value of `is_password_expired` must be `False` for unclaimed
users""",
)
)
return findings
def truncate(self, data: list[str]) -> list[str]:
truncated = []
for d in data:
length = len(d)
if length > 80:
# Retains algorithm identifying prefix, plus a few characters on the end.
truncated.append(f"{d[:12]}...{d[-6:]}")
elif length > 40:
# Smaller hashes expose less information
truncated.append(f"{d[:6]}...{d[-4:]}")
else:
# Very small hashes expose no information at all.
truncated.append("...")
return truncated
|
UserPasswordObfuscatingComparator
|
python
|
doocs__leetcode
|
solution/0800-0899/0879.Profitable Schemes/Solution.py
|
{
"start": 0,
"end": 490
}
|
class ____:
def profitableSchemes(
self, n: int, minProfit: int, group: List[int], profit: List[int]
) -> int:
@cache
def dfs(i: int, j: int, k: int) -> int:
if i >= len(group):
return 1 if k == minProfit else 0
ans = dfs(i + 1, j, k)
if j + group[i] <= n:
ans += dfs(i + 1, j + group[i], min(k + profit[i], minProfit))
return ans % (10**9 + 7)
return dfs(0, 0, 0)
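def _demo_profitable_schemes():
    # Hedged, illustrative usage sketch; the class name is masked as ____ in this row.
    # With n=5, minProfit=3, group=[2, 2], profit=[2, 3], the valid schemes are
    # {job 1} and {job 0, job 1}, so the expected answer is 2.
    return ____().profitableSchemes(5, 3, [2, 2], [2, 3])  # -> 2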
|
Solution
|
python
|
sqlalchemy__sqlalchemy
|
test/orm/test_collection.py
|
{
"start": 77392,
"end": 79236
}
|
class ____(fixtures.ORMTest):
def test_name_setup(self):
class Base:
@collection.iterator
def base_iterate(self, x):
return "base_iterate"
@collection.appender
def base_append(self, x):
return "base_append"
@collection.remover
def base_remove(self, x):
return "base_remove"
from sqlalchemy.orm.collections import _instrument_class
_instrument_class(Base)
eq_(Base._sa_remover(Base(), 5), "base_remove")
eq_(Base._sa_appender(Base(), 5), "base_append")
eq_(Base._sa_iterator(Base(), 5), "base_iterate")
class Sub(Base):
@collection.remover
def sub_remove(self, x):
return "sub_remove"
_instrument_class(Sub)
eq_(Sub._sa_appender(Sub(), 5), "base_append")
eq_(Sub._sa_remover(Sub(), 5), "sub_remove")
eq_(Sub._sa_iterator(Sub(), 5), "base_iterate")
def test_uncooperative_descriptor_in_sweep(self):
class DoNotTouch:
def __get__(self, obj, owner):
raise AttributeError
class Touchy(list):
no_touch = DoNotTouch()
assert "no_touch" in Touchy.__dict__
assert not hasattr(Touchy, "no_touch")
assert "no_touch" in dir(Touchy)
collections._instrument_class(Touchy)
def test_referenced_by_owner(self):
class Foo:
pass
instrumentation.register_class(Foo)
_register_attribute(Foo, "attr", uselist=True, useobject=True)
f1 = Foo()
f1.attr.append(3)
adapter = collections.collection_adapter(f1.attr)
assert adapter._referenced_by_owner
f1.attr = []
assert not adapter._referenced_by_owner
|
InstrumentationTest
|
python
|
kamyu104__LeetCode-Solutions
|
Python/count-partitions-with-even-sum-difference.py
|
{
"start": 42,
"end": 408
}
|
class ____(object):
def countPartitions(self, nums):
"""
:type nums: List[int]
:rtype: int
"""
result = left = 0
right = sum(nums)
for i in xrange(len(nums)-1):
left += nums[i]
right -= nums[i]
if left%2 == right%2:
result += 1
return result
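def _demo_count_partitions():
    # Hedged, illustrative usage sketch; the class name is masked as ____ in this row,
    # and the code targets Python 2 (note xrange). For nums = [2, 2, 2] both split
    # points leave an even left sum and an even right sum, so the expected answer is 2.
    return ____().countPartitions([2, 2, 2])  # -> 2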
|
Solution
|
python
|
coleifer__peewee
|
tests/psycopg3_ext.py
|
{
"start": 19683,
"end": 20196
}
|
class ____(DatabaseTestCase):
database = db_loader('postgres', db_class=Psycopg3Database,
isolation_level=3) # SERIALIZABLE.
def test_isolation_level(self):
conn = self.database.connection()
self.assertEqual(conn.isolation_level, 3)
conn.isolation_level = 2
self.assertEqual(conn.isolation_level, 2)
self.database.close()
conn = self.database.connection()
self.assertEqual(conn.isolation_level, 3)
|
TestPsycopg3IsolationLevel
|
python
|
dagster-io__dagster
|
python_modules/dagster-graphql/dagster_graphql/schema/inputs.py
|
{
"start": 13894,
"end": 14302
}
|
class ____(graphene.InputObjectType):
class Meta:
name = "InstigationSelector"
description = (
"""This type represents the fields necessary to identify a schedule or sensor."""
)
repositoryName = graphene.NonNull(graphene.String)
repositoryLocationName = graphene.NonNull(graphene.String)
name = graphene.NonNull(graphene.String)
|
GrapheneInstigationSelector
|
python
|
django__django
|
tests/serializers/test_yaml.py
|
{
"start": 476,
"end": 1258
}
|
class ____:
"""Provides a wrapped import_module function to simulate yaml ImportError
In order to run tests that verify the behavior of the YAML serializer
when run on a system that has yaml installed (like the django CI server),
mock import_module, so that it raises an ImportError when the yaml
serializer is being imported. The importlib.import_module() call is
being made in the serializers.register_serializer().
Refs: #12756
"""
def __init__(self):
self._import_module = importlib.import_module
def import_module(self, module_path):
if module_path == serializers.BUILTIN_SERIALIZERS["yaml"]:
raise ImportError(YAML_IMPORT_ERROR_MESSAGE)
return self._import_module(module_path)
|
YamlImportModuleMock
|
python
|
doocs__leetcode
|
solution/0900-0999/0944.Delete Columns to Make Sorted/Solution.py
|
{
"start": 0,
"end": 309
}
|
class ____:
def minDeletionSize(self, strs: List[str]) -> int:
m, n = len(strs[0]), len(strs)
ans = 0
for j in range(m):
for i in range(1, n):
if strs[i][j] < strs[i - 1][j]:
ans += 1
break
return ans
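def _demo_min_deletion_size():
    # Hedged, illustrative usage sketch; the class name is masked as ____ in this row.
    # Only the middle column of ["cba", "daf", "ghi"] is unsorted ("a" < "b"), so one
    # column must be deleted.
    return ____().minDeletionSize(["cba", "daf", "ghi"])  # -> 1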
|
Solution
|
python
|
pytorch__pytorch
|
torch/distributed/elastic/rendezvous/_etcd_stub.py
|
{
"start": 728,
"end": 852
}
|
class ____(Exception):
def __init__(self, *args: Any, **kwargs: Any) -> None:
raise EtcdStubError
|
EtcdAlreadyExist
|
python
|
django__django
|
tests/template_tests/syntax_tests/test_width_ratio.py
|
{
"start": 116,
"end": 6571
}
|
class ____(SimpleTestCase):
libraries = {"custom": "template_tests.templatetags.custom"}
@setup({"widthratio01": "{% widthratio a b 0 %}"})
def test_widthratio01(self):
output = self.engine.render_to_string("widthratio01", {"a": 50, "b": 100})
self.assertEqual(output, "0")
@setup({"widthratio02": "{% widthratio a b 100 %}"})
def test_widthratio02(self):
output = self.engine.render_to_string("widthratio02", {"a": 0, "b": 0})
self.assertEqual(output, "0")
@setup({"widthratio03": "{% widthratio a b 100 %}"})
def test_widthratio03(self):
output = self.engine.render_to_string("widthratio03", {"a": 0, "b": 100})
self.assertEqual(output, "0")
@setup({"widthratio04": "{% widthratio a b 100 %}"})
def test_widthratio04(self):
output = self.engine.render_to_string("widthratio04", {"a": 50, "b": 100})
self.assertEqual(output, "50")
@setup({"widthratio05": "{% widthratio a b 100 %}"})
def test_widthratio05(self):
output = self.engine.render_to_string("widthratio05", {"a": 100, "b": 100})
self.assertEqual(output, "100")
@setup({"widthratio06": "{% widthratio a b 100 %}"})
def test_widthratio06(self):
"""
62.5 should round to 62
"""
output = self.engine.render_to_string("widthratio06", {"a": 50, "b": 80})
self.assertEqual(output, "62")
@setup({"widthratio07": "{% widthratio a b 100 %}"})
def test_widthratio07(self):
"""
71.4 should round to 71
"""
output = self.engine.render_to_string("widthratio07", {"a": 50, "b": 70})
self.assertEqual(output, "71")
# Raise exception if we don't have 3 args, last one an integer
@setup({"widthratio08": "{% widthratio %}"})
def test_widthratio08(self):
with self.assertRaises(TemplateSyntaxError):
self.engine.get_template("widthratio08")
@setup({"widthratio09": "{% widthratio a b %}"})
def test_widthratio09(self):
with self.assertRaises(TemplateSyntaxError):
self.engine.render_to_string("widthratio09", {"a": 50, "b": 100})
@setup({"widthratio10": "{% widthratio a b 100.0 %}"})
def test_widthratio10(self):
output = self.engine.render_to_string("widthratio10", {"a": 50, "b": 100})
self.assertEqual(output, "50")
@setup({"widthratio11": "{% widthratio a b c %}"})
def test_widthratio11(self):
"""
#10043: widthratio should allow max_width to be a variable
"""
output = self.engine.render_to_string(
"widthratio11", {"a": 50, "c": 100, "b": 100}
)
self.assertEqual(output, "50")
# #18739: widthratio should handle None args consistently with
# non-numerics
@setup({"widthratio12a": "{% widthratio a b c %}"})
def test_widthratio12a(self):
output = self.engine.render_to_string(
"widthratio12a", {"a": "a", "c": 100, "b": 100}
)
self.assertEqual(output, "")
@setup({"widthratio12b": "{% widthratio a b c %}"})
def test_widthratio12b(self):
output = self.engine.render_to_string(
"widthratio12b", {"a": None, "c": 100, "b": 100}
)
self.assertEqual(output, "")
@setup({"widthratio13a": "{% widthratio a b c %}"})
def test_widthratio13a(self):
output = self.engine.render_to_string(
"widthratio13a", {"a": 0, "c": 100, "b": "b"}
)
self.assertEqual(output, "")
@setup({"widthratio13b": "{% widthratio a b c %}"})
def test_widthratio13b(self):
output = self.engine.render_to_string(
"widthratio13b", {"a": 0, "c": 100, "b": None}
)
self.assertEqual(output, "")
@setup({"widthratio14a": "{% widthratio a b c %}"})
def test_widthratio14a(self):
with self.assertRaises(TemplateSyntaxError):
self.engine.render_to_string("widthratio14a", {"a": 0, "c": "c", "b": 100})
@setup({"widthratio14b": "{% widthratio a b c %}"})
def test_widthratio14b(self):
with self.assertRaises(TemplateSyntaxError):
self.engine.render_to_string("widthratio14b", {"a": 0, "c": None, "b": 100})
@setup({"widthratio15": '{% load custom %}{% widthratio a|noop:"x y" b 0 %}'})
def test_widthratio15(self):
"""
Test whitespace in filter argument
"""
output = self.engine.render_to_string("widthratio15", {"a": 50, "b": 100})
self.assertEqual(output, "0")
# Widthratio with variable assignment
@setup({"widthratio16": "{% widthratio a b 100 as variable %}-{{ variable }}-"})
def test_widthratio16(self):
output = self.engine.render_to_string("widthratio16", {"a": 50, "b": 100})
self.assertEqual(output, "-50-")
@setup({"widthratio17": "{% widthratio a b 100 as variable %}-{{ variable }}-"})
def test_widthratio17(self):
output = self.engine.render_to_string("widthratio17", {"a": 100, "b": 100})
self.assertEqual(output, "-100-")
@setup({"widthratio18": "{% widthratio a b 100 as %}"})
def test_widthratio18(self):
with self.assertRaises(TemplateSyntaxError):
self.engine.get_template("widthratio18")
@setup({"widthratio19": "{% widthratio a b 100 not_as variable %}"})
def test_widthratio19(self):
with self.assertRaises(TemplateSyntaxError):
self.engine.get_template("widthratio19")
@setup({"widthratio20": "{% widthratio a b 100 %}"})
def test_widthratio20(self):
output = self.engine.render_to_string(
"widthratio20", {"a": float("inf"), "b": float("inf")}
)
self.assertEqual(output, "")
@setup({"widthratio21": "{% widthratio a b 100 %}"})
def test_widthratio21(self):
output = self.engine.render_to_string(
"widthratio21", {"a": float("inf"), "b": 2}
)
self.assertEqual(output, "")
@setup({"t": "{% widthratio a b 100 as variable %}-{{ variable }}-"})
def test_zerodivisionerror_as_var(self):
output = self.engine.render_to_string("t", {"a": 0, "b": 0})
self.assertEqual(output, "-0-")
@setup({"t": "{% widthratio a b c as variable %}-{{ variable }}-"})
def test_typeerror_as_var(self):
output = self.engine.render_to_string("t", {"a": "a", "c": 100, "b": 100})
self.assertEqual(output, "--")
|
WidthRatioTagTests
|
python
|
chardet__chardet
|
chardet/jpcntx.py
|
{
"start": 25312,
"end": 26325
}
|
class ____(JapaneseContextAnalysis):
def __init__(self) -> None:
super().__init__()
self._charset_name = "SHIFT_JIS"
@property
def charset_name(self) -> str:
return self._charset_name
def get_order(self, byte_str: Union[bytes, bytearray]) -> Tuple[int, int]: # type: ignore[reportIncompatibleMethodOverride]
if not byte_str:
return -1, 1
# find out current char's byte length
first_char = byte_str[0]
if (0x81 <= first_char <= 0x9F) or (0xE0 <= first_char <= 0xFC):
char_len = 2
if (first_char == 0x87) or (0xFA <= first_char <= 0xFC):
self._charset_name = "CP932"
else:
char_len = 1
# return its order if it is hiragana
if len(byte_str) > 1:
second_char = byte_str[1]
if (first_char == 202) and (0x9F <= second_char <= 0xF1):
return second_char - 0x9F, char_len
return -1, char_len
|
SJISContextAnalysis
|
python
|
microsoft__pyright
|
packages/pyright-internal/src/tests/samples/namedTuple7.py
|
{
"start": 446,
"end": 570
}
|
class ____(NT1[str]): ...
reveal_type(NT2("", 4, []), expected_text="NT2")
# This should generate an error.
NT2(1, 4, [])
|
NT2
|
python
|
run-llama__llama_index
|
llama-index-integrations/graph_stores/llama-index-graph-stores-memgraph/llama_index/graph_stores/memgraph/property_graph.py
|
{
"start": 2139,
"end": 42966
}
|
class ____(PropertyGraphStore):
r"""
Memgraph Property Graph Store.
This class implements a Memgraph property graph store.
Args:
username (str): The username for the Memgraph database.
password (str): The password for the Memgraph database.
url (str): The URL for the Memgraph database.
database (Optional[str]): The name of the database to connect to. Defaults to "memgraph".
Examples:
```python
from llama_index.core.indices.property_graph import PropertyGraphIndex
from llama_index.graph_stores.memgraph import MemgraphPropertyGraphStore
# Create a MemgraphPropertyGraphStore instance
graph_store = MemgraphPropertyGraphStore(
username="memgraph",
password="password",
url="bolt://localhost:7687",
database="memgraph"
)
# Create the index
index = PropertyGraphIndex.from_documents(
documents,
property_graph_store=graph_store,
)
# Close the Memgraph connection explicitly.
graph_store.close()
```
"""
supports_structured_queries: bool = True
supports_vector_queries: bool = True
text_to_cypher_template: PromptTemplate = DEFAULT_CYPHER_TEMPALTE
def __init__(
self,
username: str,
password: str,
url: str,
database: Optional[str] = "memgraph",
refresh_schema: bool = True,
sanitize_query_output: bool = True,
enhanced_schema: bool = False,
create_indexes: bool = True,
**neo4j_kwargs: Any,
) -> None:
self.sanitize_query_output = sanitize_query_output
self.enhanced_schema = enhanced_schema
self._driver = neo4j.GraphDatabase.driver(
url, auth=(username, password), **neo4j_kwargs
)
self._database = database
self.structured_schema = {}
if refresh_schema:
self.refresh_schema()
# Check if we can use vector index
self.verify_vector_support()
if create_indexes:
# Create index for faster imports and retrieval
self.structured_query(f"""CREATE INDEX ON :{BASE_NODE_LABEL}(id);""")
self.structured_query(f"""CREATE INDEX ON :{BASE_ENTITY_LABEL}(id);""")
@property
def client(self):
return self._driver
def close(self) -> None:
"""Close the database driver connection."""
self._driver.close()
def get_schema_subset(self, schema_result: Dict[str, Any]) -> None:
"""Refresh the schema using the SHOW SCHEMA INFO."""
# Parse the 'schema' field for each entry
parsed_data = []
for entry in schema_result:
schema_str = entry.get("schema", "{}")
try:
parsed_schema = json.loads(schema_str)
parsed_data.append(parsed_schema)
except json.JSONDecodeError as decode_error:
print(f"Failed to parse schema: {decode_error}")
continue
node_properties = []
rel_properties = []
relationships = []
for schema in parsed_data:
# Extract node properties
for node in schema.get("nodes", []):
node_label = node.get("labels", [None])[0]
if node_label in [
BASE_ENTITY_LABEL,
BASE_NODE_LABEL,
]:
continue
properties = [
{
"property": prop.get("key"),
"type": prop.get("types", [{}])[0].get("type"),
}
for prop in node.get("properties", [])
]
if node_label and properties:
node_properties.append(
{"labels": node_label, "properties": properties}
)
# Extract relationship properties, types & count
for edge in schema.get("edges", []):
rel_type = edge.get("type")
properties = [
{
"property": prop.get("key"),
"type": prop.get("types", [{}])[0].get("type"),
}
for prop in edge.get("properties", [])
]
if rel_type and properties:
rel_properties.append(
{"properties": properties, "type": f":`{rel_type}`"}
)
start = edge.get("start_node_labels", [None])[0]
end = edge.get("end_node_labels", [None])[0]
if start and end and rel_type:
relationships.append({"start": start, "end": end, "type": rel_type})
self.structured_schema = {
"node_props": {el["labels"]: el["properties"] for el in node_properties},
"rel_props": {el["type"]: el["properties"] for el in rel_properties},
"relationships": relationships,
}
def refresh_schema(self) -> None:
"""Refresh the schema."""
# Leave schema empty if db is empty
if self.structured_query("MATCH (n) RETURN n LIMIT 1") == []:
return
# First try with SHOW SCHEMA INFO
try:
node_query_results = self.structured_query(
SHOW_SCHEMA_INFO,
param_map={
"EXCLUDED_LABELS": [
BASE_ENTITY_LABEL,
BASE_NODE_LABEL,
]
},
)
if node_query_results is not None and isinstance(
node_query_results, (str, ast.AST)
):
schema_result = ast.literal_eval(node_query_results)
else:
schema_result = node_query_results
assert schema_result is not None
self.get_schema_subset(schema_result)
return
except neo4j.exceptions.Neo4jError as decode_error:
if (
decode_error.code == "Memgraph.ClientError.MemgraphError.MemgraphError"
and "SchemaInfo disabled" in decode_error.message
):
logger.info(
"Schema generation with SHOW SCHEMA INFO query failed. "
"Set --schema-info-enabled=true to use SHOW SCHEMA INFO query. "
"Falling back to alternative queries."
)
# fallback on Cypher without SHOW SCHEMA INFO
node_query_results = self.structured_query(
NODE_PROPERTIES_QUERY,
param_map={
"EXCLUDED_LABELS": [
BASE_ENTITY_LABEL,
BASE_NODE_LABEL,
]
},
)
node_properties = {}
for result in node_query_results:
if result["output"]["labels"] in [
BASE_ENTITY_LABEL,
BASE_NODE_LABEL,
]:
continue
label = result["output"]["labels"]
properties = result["output"]["properties"]
if label in node_properties:
node_properties[label]["properties"].extend(
prop
for prop in properties
if prop not in node_properties[label]["properties"]
)
else:
node_properties[label] = {"properties": properties}
node_properties = [
{"labels": label, **value} for label, value in node_properties.items()
]
rels_query_result = self.structured_query(REL_PROPERTIES_QUERY)
rel_properties = (
[
result["output"]
for result in rels_query_result
if any(
prop["property"] for prop in result["output"].get("properties", [])
)
]
if rels_query_result
else []
)
rel_objs_query_result = self.structured_query(
REL_QUERY,
param_map={
"EXCLUDED_LABELS": [
BASE_ENTITY_LABEL,
BASE_NODE_LABEL,
]
},
)
relationships = [
el["output"]
for el in rel_objs_query_result
if rel_objs_query_result
and el["output"]["start"] not in [BASE_ENTITY_LABEL, BASE_NODE_LABEL]
and el["output"]["end"] not in [BASE_ENTITY_LABEL, BASE_NODE_LABEL]
]
self.structured_schema = {
"node_props": {el["labels"]: el["properties"] for el in node_properties},
"rel_props": {el["type"]: el["properties"] for el in rel_properties},
"relationships": relationships,
}
def upsert_nodes(self, nodes: List[LabelledNode]) -> None:
# Lists to hold separated types
entity_dicts: List[dict] = []
chunk_dicts: List[dict] = []
# Sort by type
for item in nodes:
if isinstance(item, EntityNode):
entity_dicts.append({**item.dict(), "id": item.id})
elif isinstance(item, ChunkNode):
chunk_dicts.append({**item.dict(), "id": item.id})
else:
pass
if chunk_dicts:
for index in range(0, len(chunk_dicts), CHUNK_SIZE):
chunked_params = chunk_dicts[index : index + CHUNK_SIZE]
self.structured_query(
f"""
UNWIND $data AS row
MERGE (c:{BASE_NODE_LABEL} {{id: row.id}})
SET c.`text` = row.text, c:Chunk
WITH c, row
SET c += row.properties
WITH c, row.embedding as embedding
WHERE embedding IS NOT NULL
SET c.embedding = embedding
RETURN count(*)
""",
param_map={"data": chunked_params},
)
if entity_dicts:
for index in range(0, len(entity_dicts), CHUNK_SIZE):
chunked_params = entity_dicts[index : index + CHUNK_SIZE]
self.structured_query(
f"""
UNWIND $data AS row
MERGE (e:{BASE_NODE_LABEL} {{id: row.id}})
SET e += CASE WHEN row.properties IS NOT NULL THEN row.properties ELSE e END
SET e.name = CASE WHEN row.name IS NOT NULL THEN row.name ELSE e.name END,
e:{BASE_ENTITY_LABEL}
WITH e, row
SET e:row.label
WITH e, row
WHERE row.embedding IS NOT NULL
SET e.embedding = row.embedding
WITH e, row
WHERE row.properties.triplet_source_id IS NOT NULL
MERGE (c:{BASE_NODE_LABEL} {{id: row.properties.triplet_source_id}})
MERGE (e)<-[:MENTIONS]-(c)
""",
param_map={"data": chunked_params},
)
def upsert_relations(self, relations: List[Relation]) -> None:
"""Add relations."""
params = [r.dict() for r in relations]
for index in range(0, len(params), CHUNK_SIZE):
chunked_params = params[index : index + CHUNK_SIZE]
self.structured_query(
f"""
UNWIND $data AS row
MERGE (source: {BASE_NODE_LABEL} {{id: row.source_id}})
ON CREATE SET source:Chunk
MERGE (target: {BASE_NODE_LABEL} {{id: row.target_id}})
ON CREATE SET target:Chunk
WITH source, target, row
CREATE (source)-[r:row.label]->(target)
SET r += row.properties
RETURN count(*)
""",
param_map={"data": chunked_params},
)
def get(
self,
properties: Optional[dict] = None,
ids: Optional[List[str]] = None,
) -> List[LabelledNode]:
"""Get nodes."""
cypher_statement = f"MATCH (e:{BASE_NODE_LABEL}) "
params = {}
cypher_statement += "WHERE e.id IS NOT NULL "
if ids:
cypher_statement += "AND e.id IN $ids "
params["ids"] = ids
if properties:
prop_list = []
for i, prop in enumerate(properties):
prop_list.append(f"e.`{prop}` = $property_{i}")
params[f"property_{i}"] = properties[prop]
cypher_statement += " AND " + " AND ".join(prop_list)
return_statement = """
RETURN
e.id AS name,
CASE
WHEN labels(e)[0] IN ['__Entity__', '__Node__'] THEN
CASE
WHEN size(labels(e)) > 2 THEN labels(e)[2]
WHEN size(labels(e)) > 1 THEN labels(e)[1]
ELSE NULL
END
ELSE labels(e)[0]
END AS type,
properties(e) AS properties
"""
cypher_statement += return_statement
response = self.structured_query(cypher_statement, param_map=params)
response = response if response else []
nodes = []
for record in response:
if "text" in record["properties"] or record["type"] is None:
text = record["properties"].pop("text", "")
nodes.append(
ChunkNode(
id_=record["name"],
text=text,
properties=remove_empty_values(record["properties"]),
)
)
else:
nodes.append(
EntityNode(
name=record["name"],
label=record["type"],
properties=remove_empty_values(record["properties"]),
)
)
return nodes
def get_triplets(
self,
entity_names: Optional[List[str]] = None,
relation_names: Optional[List[str]] = None,
properties: Optional[dict] = None,
ids: Optional[List[str]] = None,
) -> List[Triplet]:
cypher_statement = f"MATCH (e:`{BASE_ENTITY_LABEL}`)-[r]->(t) "
params = {}
if entity_names or relation_names or properties or ids:
cypher_statement += "WHERE "
if entity_names:
cypher_statement += "e.name in $entity_names "
params["entity_names"] = entity_names
if relation_names and entity_names:
cypher_statement += "AND "
if relation_names:
cypher_statement += "type(r) in $relation_names "
params["relation_names"] = relation_names
if ids:
cypher_statement += "e.id in $ids "
params["ids"] = ids
if properties:
prop_list = []
for i, prop in enumerate(properties):
prop_list.append(f"e.`{prop}` = $property_{i}")
params[f"property_{i}"] = properties[prop]
cypher_statement += " AND ".join(prop_list)
if not (entity_names or properties or relation_names or ids):
return_statement = """
WHERE NOT ANY(label IN labels(e) WHERE label = 'Chunk')
RETURN type(r) as type, properties(r) as rel_prop, e.id as source_id,
CASE
WHEN labels(e)[0] IN ['__Entity__', '__Node__'] THEN
CASE
WHEN size(labels(e)) > 2 THEN labels(e)[2]
WHEN size(labels(e)) > 1 THEN labels(e)[1]
ELSE NULL
END
ELSE labels(e)[0]
END AS source_type,
properties(e) AS source_properties,
t.id as target_id,
CASE
WHEN labels(t)[0] IN ['__Entity__', '__Node__'] THEN
CASE
WHEN size(labels(t)) > 2 THEN labels(t)[2]
WHEN size(labels(t)) > 1 THEN labels(t)[1]
ELSE NULL
END
ELSE labels(t)[0]
END AS target_type, properties(t) AS target_properties LIMIT 100;
"""
else:
return_statement = """
AND NOT ANY(label IN labels(e) WHERE label = 'Chunk')
RETURN type(r) as type, properties(r) as rel_prop, e.id as source_id,
CASE
WHEN labels(e)[0] IN ['__Entity__', '__Node__'] THEN
CASE
WHEN size(labels(e)) > 2 THEN labels(e)[2]
WHEN size(labels(e)) > 1 THEN labels(e)[1]
ELSE NULL
END
ELSE labels(e)[0]
END AS source_type,
properties(e) AS source_properties,
t.id as target_id,
CASE
WHEN labels(t)[0] IN ['__Entity__', '__Node__'] THEN
CASE
WHEN size(labels(t)) > 2 THEN labels(t)[2]
WHEN size(labels(t)) > 1 THEN labels(t)[1]
ELSE NULL
END
ELSE labels(t)[0]
END AS target_type, properties(t) AS target_properties LIMIT 100;
"""
cypher_statement += return_statement
data = self.structured_query(cypher_statement, param_map=params)
data = data if data else []
triplets = []
for record in data:
source = EntityNode(
name=record["source_id"],
label=record["source_type"],
properties=remove_empty_values(record["source_properties"]),
)
target = EntityNode(
name=record["target_id"],
label=record["target_type"],
properties=remove_empty_values(record["target_properties"]),
)
rel = Relation(
source_id=record["source_id"],
target_id=record["target_id"],
label=record["type"],
properties=remove_empty_values(record["rel_prop"]),
)
triplets.append([source, rel, target])
return triplets
def get_rel_map(
self,
graph_nodes: List[LabelledNode],
depth: int = 2,
limit: int = 30,
ignore_rels: Optional[List[str]] = None,
) -> List[Triplet]:
"""Get depth-aware rel map."""
triples = []
ids = [node.id for node in graph_nodes]
response = self.structured_query(
f"""
WITH $ids AS id_list
UNWIND range(0, size(id_list) - 1) AS idx
MATCH (e:__Node__)
WHERE e.id = id_list[idx]
MATCH p=(e)-[r*1..{depth}]-(other)
WHERE ALL(rel in relationships(p) WHERE type(rel) <> 'MENTIONS')
UNWIND relationships(p) AS rel
WITH DISTINCT rel, idx
WITH startNode(rel) AS source,
type(rel) AS type,
rel{{.*}} AS rel_properties,
endNode(rel) AS endNode,
idx
LIMIT toInteger($limit)
RETURN source.id AS source_id,
CASE
WHEN labels(source)[0] IN ['__Entity__', '__Node__'] THEN
CASE
WHEN size(labels(source)) > 2 THEN labels(source)[2]
WHEN size(labels(source)) > 1 THEN labels(source)[1]
ELSE NULL
END
ELSE labels(source)[0]
END AS source_type,
properties(source) AS source_properties,
type,
rel_properties,
endNode.id AS target_id,
CASE
WHEN labels(endNode)[0] IN ['__Entity__', '__Node__'] THEN
CASE
WHEN size(labels(endNode)) > 2 THEN labels(endNode)[2]
WHEN size(labels(endNode)) > 1 THEN labels(endNode)[1] ELSE NULL
END
ELSE labels(endNode)[0]
END AS target_type,
properties(endNode) AS target_properties,
idx
ORDER BY idx
LIMIT toInteger($limit)
""",
param_map={"ids": ids, "limit": limit},
)
response = response if response else []
ignore_rels = ignore_rels or []
for record in response:
if record["type"] in ignore_rels:
continue
source = EntityNode(
name=record["source_id"],
label=record["source_type"],
properties=remove_empty_values(record["source_properties"]),
)
target = EntityNode(
name=record["target_id"],
label=record["target_type"],
properties=remove_empty_values(record["target_properties"]),
)
rel = Relation(
source_id=record["source_id"],
target_id=record["target_id"],
label=record["type"],
properties=remove_empty_values(record["rel_properties"]),
)
triples.append([source, rel, target])
return triples
def structured_query(
self, query: str, param_map: Optional[Dict[str, Any]] = None
) -> Any:
param_map = param_map or {}
with self._driver.session(database=self._database) as session:
result = session.run(query, param_map)
full_result = [d.data() for d in result]
if self.sanitize_query_output:
return [value_sanitize(el) for el in full_result]
return full_result
def vector_query(
self, query: VectorStoreQuery, **kwargs: Any
) -> Tuple[List[LabelledNode], List[float]]:
"""Query the graph store with a vector store query."""
if self._supports_vector_index:
data = self.structured_query(
f"""CALL vector_search.search('{VECTOR_INDEX_NAME}', $limit, $embedding)
YIELD node, similarity
WITH node, similarity, labels(node) AS all_labels
UNWIND all_labels AS label
WITH node, similarity, label
WHERE NOT label IN ['{BASE_ENTITY_LABEL}', '{BASE_NODE_LABEL}']
WITH node, similarity, label, properties(node) AS originalProperties
RETURN
node.id AS name,
label AS type,
node{{.* , embedding: Null, name: Null, id: Null}} AS properties,
similarity
""",
param_map={
"embedding": query.query_embedding,
"limit": query.similarity_top_k,
},
)
else:
data = []
data = data if data else []
nodes = []
scores = []
for record in data:
node = EntityNode(
name=record["name"],
label=record["type"],
properties=remove_empty_values(record["properties"]),
)
nodes.append(node)
scores.append(record["similarity"])
return (nodes, scores)
def delete(
self,
entity_names: Optional[List[str]] = None,
relation_names: Optional[List[str]] = None,
properties: Optional[dict] = None,
ids: Optional[List[str]] = None,
) -> None:
"""Delete matching data."""
if entity_names:
self.structured_query(
"MATCH (n) WHERE n.name IN $entity_names DETACH DELETE n",
param_map={"entity_names": entity_names},
)
if ids:
self.structured_query(
"MATCH (n) WHERE n.id IN $ids DETACH DELETE n",
param_map={"ids": ids},
)
if relation_names:
for rel in relation_names:
self.structured_query(f"MATCH ()-[r:`{rel}`]->() DELETE r")
if properties:
cypher = "MATCH (e) WHERE "
prop_list = []
params = {}
for i, prop in enumerate(properties):
prop_list.append(f"e.`{prop}` = $property_{i}")
params[f"property_{i}"] = properties[prop]
cypher += " AND ".join(prop_list)
self.structured_query(cypher + " DETACH DELETE e", param_map=params)
def _enhanced_schema_cypher(
self,
label_or_type: str,
properties: List[Dict[str, Any]],
exhaustive: bool,
is_relationship: bool = False,
) -> str:
if is_relationship:
match_clause = f"MATCH ()-[n:`{label_or_type}`]->()"
else:
match_clause = f"MATCH (n:`{label_or_type}`)"
with_clauses = []
return_clauses = []
output_dict = {}
if exhaustive:
for prop in properties:
if prop["property"]:
prop_name = prop["property"]
else:
prop_name = None
if prop["type"]:
prop_type = prop["type"]
else:
prop_type = None
if prop_type == "String":
with_clauses.append(
f"collect(distinct substring(toString(n.`{prop_name}`), 0, 50)) "
f"AS `{prop_name}_values`"
)
return_clauses.append(
f"values:`{prop_name}_values`[..{DISTINCT_VALUE_LIMIT}],"
f" distinct_count: size(`{prop_name}_values`)"
)
elif prop_type in [
"Integer",
"Int",
"Double",
"Float",
"Date",
"LocalTime",
"LocalDateTime",
]:
with_clauses.append(f"min(n.`{prop_name}`) AS `{prop_name}_min`")
with_clauses.append(f"max(n.`{prop_name}`) AS `{prop_name}_max`")
with_clauses.append(
f"count(distinct n.`{prop_name}`) AS `{prop_name}_distinct`"
)
return_clauses.append(
f"min: toString(`{prop_name}_min`), "
f"max: toString(`{prop_name}_max`), "
f"distinct_count: `{prop_name}_distinct`"
)
elif prop_type in ["List", "List[Any]"]:
with_clauses.append(
f"min(size(n.`{prop_name}`)) AS `{prop_name}_size_min`, "
f"max(size(n.`{prop_name}`)) AS `{prop_name}_size_max`"
)
return_clauses.append(
f"min_size: `{prop_name}_size_min`, "
f"max_size: `{prop_name}_size_max`"
)
elif prop_type in ["Bool", "Duration"]:
continue
if return_clauses:
output_dict[prop_name] = "{" + return_clauses.pop() + "}"
else:
output_dict[prop_name] = None
else:
# Just sample 5 random nodes
match_clause += " WITH n LIMIT 5"
for prop in properties:
prop_name = prop["property"]
prop_type = prop["type"]
# Check if indexed property, we can still do exhaustive
prop_index = [
el
for el in self.structured_schema["metadata"]["index"]
if el["label"] == label_or_type
and el["properties"] == [prop_name]
and el["type"] == "RANGE"
]
if prop_type == "String":
if (
prop_index
and prop_index[0].get("size") > 0
and prop_index[0].get("distinctValues") <= DISTINCT_VALUE_LIMIT
):
distinct_values_query = f"""
MATCH (n:{label_or_type})
RETURN DISTINCT n.`{prop_name}` AS value
LIMIT {DISTINCT_VALUE_LIMIT}
"""
distinct_values = self.structured_query(distinct_values_query)
# Extract values from the result set
distinct_values = [
record["value"] for record in distinct_values
]
return_clauses.append(
f"values: {distinct_values},"
f" distinct_count: {len(distinct_values)}"
)
else:
with_clauses.append(
f"collect(distinct substring(n.`{prop_name}`, 0, 50)) "
f"AS `{prop_name}_values`"
)
return_clauses.append(f"values: `{prop_name}_values`")
elif prop_type in [
"Integer",
"Int",
"Double",
"Float",
"Date",
"LocalTime",
"LocalDateTime",
]:
if not prop_index:
with_clauses.append(
f"collect(distinct toString(n.`{prop_name}`)) "
f"AS `{prop_name}_values`"
)
return_clauses.append(f"values: `{prop_name}_values`")
else:
with_clauses.append(
f"min(n.`{prop_name}`) AS `{prop_name}_min`"
)
with_clauses.append(
f"max(n.`{prop_name}`) AS `{prop_name}_max`"
)
with_clauses.append(
f"count(distinct n.`{prop_name}`) AS `{prop_name}_distinct`"
)
return_clauses.append(
f"min: toString(`{prop_name}_min`), "
f"max: toString(`{prop_name}_max`), "
f"distinct_count: `{prop_name}_distinct`"
)
elif prop_type in ["List", "List[Any]"]:
with_clauses.append(
f"min(size(n.`{prop_name}`)) AS `{prop_name}_size_min`, "
f"max(size(n.`{prop_name}`)) AS `{prop_name}_size_max`"
)
return_clauses.append(
f"min_size: `{prop_name}_size_min`, "
f"max_size: `{prop_name}_size_max`"
)
elif prop_type in ["Bool", "Duration"]:
continue
if return_clauses:
output_dict[prop_name] = "{" + return_clauses.pop() + "}"
else:
output_dict[prop_name] = None
with_clause = "WITH " + ",\n ".join(with_clauses)
return_clause = (
"RETURN {"
+ ", ".join(f"`{k}`: {v}" for k, v in output_dict.items())
+ "} AS output"
)
# Combine all parts of the Cypher query
return f"{match_clause}\n{with_clause}\n{return_clause}"
def get_schema(self, refresh: bool = False) -> Any:
if refresh:
self.refresh_schema()
return self.structured_schema
def get_schema_str(self, refresh: bool = False) -> str:
schema = self.get_schema(refresh=refresh)
formatted_node_props = []
formatted_rel_props = []
if self.enhanced_schema:
# Enhanced formatting for nodes
for node_type, properties in schema["node_props"].items():
formatted_node_props.append(f"- **{node_type}**")
for prop in properties:
example = ""
if prop["type"] == "String" and prop.get("values"):
if prop.get("distinct_count", 11) > DISTINCT_VALUE_LIMIT:
example = (
f'Example: "{clean_string_values(prop["values"][0])}"'
if prop["values"]
else ""
)
else: # If less than 10 possible values return all
example = (
(
"Available options: "
f"{[clean_string_values(el) for el in prop['values']]}"
)
if prop["values"]
else ""
)
elif prop["type"] in [
"Integer",
"Int",
"Double",
"Float",
"Date",
"LocalTime",
"LocalDateTime",
]:
if prop.get("min") is not None:
example = f"Min: {prop['min']}, Max: {prop['max']}"
else:
example = (
f'Example: "{prop["values"][0]}"'
if prop.get("values")
else ""
)
elif prop["type"] in ["List", "List[Any]"]:
# Skip embeddings
if not prop.get("min_size") or prop["min_size"] > LIST_LIMIT:
continue
example = f"Min Size: {prop['min_size']}, Max Size: {prop['max_size']}"
formatted_node_props.append(
f" - `{prop['property']}`: {prop['type']} {example}"
)
# Enhanced formatting for relationships
for rel_type, properties in schema["rel_props"].items():
formatted_rel_props.append(f"- **{rel_type}**")
for prop in properties:
example = ""
if prop["type"] == "STRING":
if prop.get("distinct_count", 11) > DISTINCT_VALUE_LIMIT:
example = (
f'Example: "{clean_string_values(prop["values"][0])}"'
if prop.get("values")
else ""
)
else: # If less than 10 possible values return all
example = (
(
"Available options: "
f"{[clean_string_values(el) for el in prop['values']]}"
)
if prop.get("values")
else ""
)
elif prop["type"] in [
"Integer",
"Int",
"Double",
"Float",
"Date",
"LocalTime",
"LocalDateTime",
]:
if prop.get("min"): # If we have min/max
example = f"Min: {prop['min']}, Max: {prop['max']}"
else: # return a single value
example = (
f'Example: "{prop["values"][0]}"'
if prop.get("values")
else ""
)
elif prop["type"] == "List[Any]":
# Skip embeddings
if prop["min_size"] > LIST_LIMIT:
continue
example = f"Min Size: {prop['min_size']}, Max Size: {prop['max_size']}"
formatted_rel_props.append(
f" - `{prop['property']}: {prop['type']}` {example}"
)
else:
# Format node properties
for label, props in schema["node_props"].items():
props_str = ", ".join(
[f"{prop['property']}: {prop['type']}" for prop in props]
)
formatted_node_props.append(f"{label} {{{props_str}}}")
# Format relationship properties using structured_schema
for label, props in schema["rel_props"].items():
props_str = ", ".join(
[f"{prop['property']}: {prop['type']}" for prop in props]
)
formatted_rel_props.append(f"{label} {{{props_str}}}")
# Format relationships
formatted_rels = [
f"(:{el['start']})-[:{el['type']}]->(:{el['end']})"
for el in schema["relationships"]
]
return "\n".join(
[
"Node properties:",
"\n".join(formatted_node_props),
"Relationship properties:",
"\n".join(formatted_rel_props),
"The relationships:",
"\n".join(formatted_rels),
]
)
def verify_vector_support(self) -> None:
"""
Check if the connected Memgraph database supports vector indices.
Compares the current version with the required version (2.22.0) that
supports vector indexing.
"""
response = self.structured_query("SHOW VERSION;")
current_version = response[0]["version"]
current_version = tuple(map(int, current_version.split(".")))
required_version = "2.22"
required_version = tuple(map(int, required_version.split(".")))
# Check if the version is equal to or larger than the required version
if current_version >= required_version:
# Check if vector index is configured
try:
self.structured_query(
f"""
CREATE VECTOR INDEX {VECTOR_INDEX_NAME} ON :{BASE_ENTITY_LABEL}(embedding) WITH CONFIG {{"dimension": 1536, "capacity": 1000}};
"""
)
self._supports_vector_index = True
logger.info(
"Vector index %s was created with a fixed embedding dimension of 1536. "
"If your chosen LLM model uses a different dimension, manually create the vector index with the following query:\n"
'CREATE VECTOR INDEX %s ON :%s(embedding) WITH CONFIG {"dimension": <INSERT_DIMENSION>, "capacity": 1000};',
VECTOR_INDEX_NAME,
VECTOR_INDEX_NAME,
BASE_ENTITY_LABEL,
)
except neo4j.exceptions.Neo4jError as decode_error:
self._supports_vector_index = False
if (
decode_error.code
== "Memgraph.ClientError.MemgraphError.MemgraphError"
and "vector_search.show_index_info" in decode_error.message
):
logger.info(
"""Failed to create vector index entity:
Given vector index already exists."""
)
else:
self._supports_vector_index = False
logger.info(
"""Vector indexing is not supported by your current Memgraph
version (%s). Please upgrade to version 2.22.0 or newer to use
vector indices.""",
".".join(map(str, current_version)),
)
|
MemgraphPropertyGraphStore
|
python
|
charliermarsh__ruff
|
crates/ruff_python_formatter/resources/test/fixtures/ruff/blank_line_before_class_docstring.py
|
{
"start": 57,
"end": 143
}
|
class ____:
# This is a comment
"""This is a docstring."""
|
DocstringWithComment0
|
python
|
getsentry__sentry
|
tests/sentry/incidents/subscription_processor/test_subscription_processor_aci.py
|
{
"start": 4531,
"end": 15077
}
|
class ____(ProcessUpdateBaseClass):
@cached_property
def comparison_detector_above(self):
detector = self.metric_detector
detector.config.update({"comparison_delta": 60 * 60})
detector.save()
self.update_threshold(detector, DetectorPriorityLevel.HIGH, 150)
self.update_threshold(detector, DetectorPriorityLevel.OK, 150)
snuba_query = self.get_snuba_query(detector)
snuba_query.update(time_window=60 * 60)
return detector
@cached_property
def comparison_detector_below(self):
detector = self.metric_detector
detector.config.update({"comparison_delta": 60 * 60})
detector.save()
DataCondition.objects.filter(condition_group=detector.workflow_condition_group).delete()
self.set_up_data_conditions(detector, Condition.LESS, 50, None, 50)
snuba_query = self.get_snuba_query(detector)
snuba_query.update(time_window=60 * 60)
return detector
@patch("sentry.incidents.utils.process_update_helpers.metrics")
def test_comparison_alert_above(self, helper_metrics):
detector = self.comparison_detector_above
comparison_delta = timedelta(seconds=detector.config["comparison_delta"])
self.send_update(self.critical_threshold + 1, timedelta(minutes=-10))
# Shouldn't trigger, since there should be no data in the comparison period
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
helper_metrics.incr.assert_has_calls(
[
call("incidents.alert_rules.skipping_update_comparison_value_invalid"),
]
)
self.metrics.incr.assert_has_calls(
[
call("incidents.alert_rules.skipping_update_invalid_aggregation_value"),
]
)
comparison_date = timezone.now() - comparison_delta
for i in range(4):
self.store_event(
data={
"timestamp": (comparison_date - timedelta(minutes=30 + i)).isoformat(),
"environment": self.environment.name,
},
project_id=self.project.id,
)
self.metrics.incr.reset_mock()
self.send_update(2, timedelta(minutes=-9))
# Shouldn't trigger, since there are 4 events in the comparison period, and 2/4 == 50%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(4, timedelta(minutes=-8))
# Shouldn't trigger, since there are 4 events in the comparison period, and 4/4 == 100%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(6, timedelta(minutes=-7))
# Shouldn't trigger: 6/4 == 150%, but we want > 150%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(7, timedelta(minutes=-6))
# Should trigger: 7/4 == 175% > 150%
assert self.get_detector_state(detector) == DetectorPriorityLevel.HIGH
# Check that we successfully resolve
self.send_update(6, timedelta(minutes=-5))
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
@patch("sentry.incidents.utils.process_update_helpers.metrics")
def test_comparison_alert_below(self, helper_metrics):
detector = self.comparison_detector_below
comparison_delta = timedelta(seconds=detector.config["comparison_delta"])
self.send_update(self.critical_threshold - 1, timedelta(minutes=-10))
# Shouldn't trigger, since there should be no data in the comparison period
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
helper_metrics.incr.assert_has_calls(
[
call("incidents.alert_rules.skipping_update_comparison_value_invalid"),
]
)
self.metrics.incr.assert_has_calls(
[
call("incidents.alert_rules.skipping_update_invalid_aggregation_value"),
]
)
comparison_date = timezone.now() - comparison_delta
for i in range(4):
self.store_event(
data={
"timestamp": (comparison_date - timedelta(minutes=30 + i)).isoformat(),
"environment": self.environment.name,
},
project_id=self.project.id,
)
self.metrics.incr.reset_mock()
self.send_update(6, timedelta(minutes=-9))
# Shouldn't trigger, since there are 4 events in the comparison period, and 6/4 == 150%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(4, timedelta(minutes=-8))
# Shouldn't trigger, since there are 4 events in the comparison period, and 4/4 == 100%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(2, timedelta(minutes=-7))
# Shouldn't trigger: 2/4 == 50%, but we want < 50%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(1, timedelta(minutes=-6))
# Should trigger: 1/4 == 25% < 50%
assert self.get_detector_state(detector) == DetectorPriorityLevel.HIGH
# Check that we successfully resolve
self.send_update(2, timedelta(minutes=-5))
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
@patch("sentry.incidents.utils.process_update_helpers.metrics")
def test_is_unresolved_comparison_query(self, helper_metrics):
"""
Test that uses the ErrorsQueryBuilder (because of the specific query)
"""
detector = self.comparison_detector_above
comparison_delta = timedelta(seconds=detector.config["comparison_delta"])
snuba_query = self.get_snuba_query(detector)
snuba_query.update(query="(event.type:error) AND (is:unresolved)")
self.send_update(self.critical_threshold + 1, timedelta(minutes=-10), subscription=self.sub)
helper_metrics.incr.assert_has_calls(
[
call("incidents.alert_rules.skipping_update_comparison_value_invalid"),
]
)
self.metrics.incr.assert_has_calls(
[
call("incidents.alert_rules.skipping_update_invalid_aggregation_value"),
]
)
comparison_date = timezone.now() - comparison_delta
for i in range(4):
data = {
"timestamp": (comparison_date - timedelta(minutes=30 + i)).isoformat(),
"environment": self.environment.name,
"stacktrace": copy.deepcopy(DEFAULT_EVENT_DATA["stacktrace"]),
"fingerprint": ["group2"],
"level": "error",
"exception": {
"values": [
{
"type": "IntegrationError",
"value": "Identity not found.",
}
]
},
}
self.store_event(
data=data,
project_id=self.project.id,
)
self.metrics.incr.reset_mock()
self.send_update(2, timedelta(minutes=-9))
# Shouldn't trigger, since there are 4 events in the comparison period, and 2/4 == 50%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(4, timedelta(minutes=-8))
# Shouldn't trigger, since there are 4 events in the comparison period, and 4/4 == 100%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(6, timedelta(minutes=-7))
# Shouldn't trigger: 6/4 == 150%, but we want > 150%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(7, timedelta(minutes=-6))
# Should trigger: 7/4 == 175% > 150%
assert self.get_detector_state(detector) == DetectorPriorityLevel.HIGH
# Check that we successfully resolve
self.send_update(6, timedelta(minutes=-5))
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
@patch("sentry.incidents.utils.process_update_helpers.metrics")
def test_is_unresolved_different_aggregate(self, helper_metrics):
detector = self.comparison_detector_above
comparison_delta = timedelta(seconds=detector.config["comparison_delta"])
snuba_query = self.get_snuba_query(detector)
snuba_query.update(aggregate="count_unique(tags[sentry:user])")
self.send_update(self.critical_threshold + 1, timedelta(minutes=-10), subscription=self.sub)
helper_metrics.incr.assert_has_calls(
[
call("incidents.alert_rules.skipping_update_comparison_value_invalid"),
]
)
self.metrics.incr.assert_has_calls(
[
call("incidents.alert_rules.skipping_update_invalid_aggregation_value"),
]
)
comparison_date = timezone.now() - comparison_delta
for i in range(4):
self.store_event(
data={
"timestamp": (comparison_date - timedelta(minutes=30 + i)).isoformat(),
"environment": self.environment.name,
"tags": {"sentry:user": i},
},
project_id=self.project.id,
)
self.metrics.incr.reset_mock()
self.send_update(2, timedelta(minutes=-9))
# Shouldn't trigger, since there are 4 events in the comparison period, and 2/4 == 50%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(4, timedelta(minutes=-8))
# Shouldn't trigger, since there are 4 events in the comparison period, and 4/4 == 100%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(6, timedelta(minutes=-7))
# Shouldn't trigger: 6/4 == 150%, but we want > 150%
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
self.send_update(7, timedelta(minutes=-6))
# Should trigger: 7/4 == 175% > 150%
assert self.get_detector_state(detector) == DetectorPriorityLevel.HIGH
# Check that we successfully resolve
self.send_update(6, timedelta(minutes=-5))
assert self.get_detector_state(detector) == DetectorPriorityLevel.OK
|
ProcessUpdateComparisonAlertTest
|
python
|
ray-project__ray
|
python/ray/data/_internal/execution/interfaces/physical_operator.py
|
{
"start": 10669,
"end": 10987
}
|
class ____:
"""Breakdown of the state of the actors used by the ``PhysicalOperator``"""
running: int
pending: int
restarting: int
def __str__(self):
return (
f"running={self.running}, restarting={self.restarting}, "
f"pending={self.pending}"
)
|
_ActorPoolInfo
|
python
|
python-markdown__markdown
|
markdown/extensions/smarty.py
|
{
"start": 6167,
"end": 6801
}
|
class ____(HtmlInlineProcessor):
def __init__(self, pattern: str, replace: Sequence[int | str | etree.Element], md: Markdown):
""" Replaces matches with some text. """
HtmlInlineProcessor.__init__(self, pattern)
self.replace = replace
self.md = md
def handleMatch(self, m: re.Match[str], data: str) -> tuple[str, int, int]:
result = ''
for part in self.replace:
if isinstance(part, int):
result += m.group(part)
else:
result += self.md.htmlStash.store(part)
return result, m.start(0), m.end(0)
|
SubstituteTextPattern
|
python
|
ethereum__web3.py
|
web3/exceptions.py
|
{
"start": 6944,
"end": 7073
}
|
class ____(Web3Exception):
"""
Raised when a JSON-RPC response comes back in an unexpected format
"""
|
BadResponseFormat
|
python
|
huggingface__transformers
|
src/transformers/models/levit/modeling_levit.py
|
{
"start": 4799,
"end": 5278
}
|
class ____(nn.Module):
def __init__(self, input_dim, output_dim, bn_weight_init=1):
super().__init__()
self.linear = nn.Linear(in_features=input_dim, out_features=output_dim, bias=False)
self.batch_norm = nn.BatchNorm1d(output_dim)
def forward(self, hidden_state):
hidden_state = self.linear(hidden_state)
hidden_state = self.batch_norm(hidden_state.flatten(0, 1)).reshape_as(hidden_state)
return hidden_state
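def _demo_mlp_layer_with_bn():
    # Hedged, illustrative usage sketch; the class name is masked as ____ in this row.
    # The layer expects a (batch, sequence, features) tensor; batch norm runs on the
    # flattened first two dimensions and the original shape is restored afterwards.
    import torch
    layer = ____(input_dim=8, output_dim=16)
    hidden = torch.randn(2, 4, 8)
    return layer(hidden).shape  # -> torch.Size([2, 4, 16])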
|
MLPLayerWithBN
|
python
|
doocs__leetcode
|
solution/0700-0799/0791.Custom Sort String/Solution2.py
|
{
"start": 0,
"end": 290
}
|
class ____:
def customSortString(self, order: str, s: str) -> str:
cnt = Counter(s)
ans = []
for c in order:
ans.append(c * cnt[c])
cnt[c] = 0
for c, v in cnt.items():
ans.append(c * v)
return ''.join(ans)
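def _demo_custom_sort_string():
    # Hedged, illustrative usage sketch; the class name is masked as ____ in this row.
    # Characters appearing in order come first, in that order; the leftover "d" keeps
    # its count and is appended afterwards.
    return ____().customSortString("cba", "abcd")  # -> "cbad"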
|
Solution
|
python
|
great-expectations__great_expectations
|
great_expectations/checkpoint/actions.py
|
{
"start": 41624,
"end": 43201
}
|
class ____(ValidationAction):
type: Literal["api"] = "api"
url: str
@override
def run(
self, checkpoint_result: CheckpointResult, action_context: ActionContext | None = None
) -> dict:
aggregate_payload = []
for run_id, run_result in checkpoint_result.run_results.items():
suite_name = run_result.suite_name
serializable_results = convert_to_json_serializable(run_result.results)
batch_identifier = run_id.batch_identifier
payload = self.create_payload(
data_asset_name=batch_identifier,
suite_name=suite_name,
validation_results_serializable=serializable_results,
)
aggregate_payload.append(payload)
response = self.send_results(aggregate_payload)
return {"result": f"Posted results to API, status code - {response.status_code}"}
def send_results(self, payload) -> requests.Response:
try:
headers = {"Content-Type": "application/json"}
return requests.post(self.url, headers=headers, data=payload)
except Exception as e:
print(f"Exception when sending data to API - {e}")
raise e # noqa: TRY201 # FIXME CoP
@staticmethod
def create_payload(data_asset_name, suite_name, validation_results_serializable) -> dict:
return {
"test_suite_name": suite_name,
"data_asset_name": data_asset_name,
"validation_results": validation_results_serializable,
}
|
APINotificationAction
|
python
|
weaviate__weaviate-python-client
|
weaviate/gql/filter.py
|
{
"start": 15276,
"end": 15852
}
|
class ____(NearMedia):
"""NearAudio class used to filter weaviate objects."""
def __init__(
self,
content: dict,
):
"""Initialize a NearAudio class instance.
Args:
content: The content of the `nearAudio` clause.
Raises:
TypeError: If 'content' is not of type dict.
TypeError: If 'content["audio"]' is not of type str.
ValueError: If 'content' has key "certainty"/"distance" but the value is not float.
"""
super().__init__(content, MediaType.AUDIO)
|
NearAudio
|
python
|
charliermarsh__ruff
|
crates/ruff_linter/resources/test/fixtures/pylint/duplicate_bases.py
|
{
"start": 114,
"end": 141
}
|
class ____(A, A,):
...
|
F2
|
python
|
nedbat__coveragepy
|
tests/test_coverage.py
|
{
"start": 34505,
"end": 35245
}
|
class ____(CoverageTest):
"""Tests specific to annotations."""
def test_attribute_annotation(self) -> None:
if env.PYBEHAVIOR.deferred_annotations:
lines = [1, 3]
else:
lines = [1, 2, 3]
self.check_coverage(
"""\
class X:
x: int
y = 1
""",
lines=lines,
missing="",
)
def test_attribute_annotation_from_future(self) -> None:
self.check_coverage(
"""\
from __future__ import annotations
class X:
x: int
y = 1
""",
lines=[1, 2, 3, 4],
missing="",
)
|
AnnotationTest
|
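Some context for why the expected line sets differ: a bare class-level annotation like x: int records an entry in the class's __annotations__ but never creates the attribute, and whether that line is even traced at run time depends on how the interpreter handles annotations, which is what the tests above branch on. A minimal sketch (no __future__ import):

class X:
    x: int    # annotation only; no attribute is created
    y = 1     # ordinary assignment

print(X.__annotations__)  # {'x': <class 'int'>}
print(hasattr(X, "x"))    # False
print(X.y)                # 1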
python
|
kubernetes-client__python
|
kubernetes/client/api/coordination_api.py
|
{
"start": 543,
"end": 5197
}
|
class ____(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def get_api_group(self, **kwargs): # noqa: E501
"""get_api_group # noqa: E501
get information of a group # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_api_group(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: V1APIGroup
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_api_group_with_http_info(**kwargs) # noqa: E501
def get_api_group_with_http_info(self, **kwargs): # noqa: E501
"""get_api_group # noqa: E501
get information of a group # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_api_group_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(V1APIGroup, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_api_group" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501
# Authentication setting
auth_settings = ['BearerToken'] # noqa: E501
return self.api_client.call_api(
'/apis/coordination.k8s.io/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1APIGroup', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
|
CoordinationApi
|
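A minimal usage sketch for the generated client above, assuming cluster access and the kubernetes package; load_kube_config() is the usual local entry point, and get_api_group() is the synchronous call shown in the sample.

from kubernetes import client, config

config.load_kube_config()             # or config.load_incluster_config() inside a pod
coordination = client.CoordinationApi()
group = coordination.get_api_group()  # returns a V1APIGroup
print(group.name, [v.version for v in group.versions])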
python
|
huggingface__transformers
|
src/transformers/models/instructblip/configuration_instructblip.py
|
{
"start": 4852,
"end": 9877
}
|
class ____(PreTrainedConfig):
r"""
This is the configuration class to store the configuration of a [`InstructBlipQFormerModel`]. It is used to
instantiate a InstructBLIP Querying Transformer (Q-Former) model according to the specified arguments, defining the
model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of
the InstructBLIP [Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5)
architecture. Configuration objects inherit from [`PreTrainedConfig`] and can be used to control the model outputs.
Read the documentation from [`PreTrainedConfig`] for more information.
Note that [`InstructBlipQFormerModel`] is very similar to [`BertLMHeadModel`] with interleaved cross-attention.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling the model.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 0):
Token id used for padding sequences.
cross_attention_frequency (`int`, *optional*, defaults to 2):
The frequency of adding cross-attention to the Transformer layers.
encoder_hidden_size (`int`, *optional*, defaults to 1408):
The hidden size of the hidden states for cross-attention.
Examples:
```python
>>> from transformers import InstructBlipQFormerConfig, InstructBlipQFormerModel
>>> # Initializing a InstructBLIP Salesforce/instruct-blip-flan-t5 style configuration
>>> configuration = InstructBlipQFormerConfig()
>>> # Initializing a model (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
>>> model = InstructBlipQFormerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "instructblip_qformer"
base_config_key = "qformer_config"
def __init__(
self,
vocab_size=30522,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
hidden_act="gelu",
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
max_position_embeddings=512,
initializer_range=0.02,
layer_norm_eps=1e-12,
pad_token_id=0,
cross_attention_frequency=2,
encoder_hidden_size=1408,
**kwargs,
):
super().__init__(pad_token_id=pad_token_id, **kwargs)
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.hidden_act = hidden_act
self.intermediate_size = intermediate_size
self.hidden_dropout_prob = hidden_dropout_prob
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.max_position_embeddings = max_position_embeddings
self.initializer_range = initializer_range
self.layer_norm_eps = layer_norm_eps
self.cross_attention_frequency = cross_attention_frequency
self.encoder_hidden_size = encoder_hidden_size
|
InstructBlipQFormerConfig
|
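To complement the docstring's example, here is a small variation that overrides a few of the defaults listed above; the values are illustrative only.

from transformers import InstructBlipQFormerConfig

config = InstructBlipQFormerConfig(
    hidden_size=512,
    num_hidden_layers=6,
    num_attention_heads=8,
    encoder_hidden_size=1024,
)
print(config.hidden_size, config.cross_attention_frequency)  # 512 2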