code | package | path | filename
---|---|---|---|
import setuptools
with open("README.md", "r") as fh:
long_description = fh.read()
setuptools.setup(
name="10EngrProblems", # This is the name of the package
version="1.0", # The initial release version
author="Hopalonghacksaw", # Full name of the author
description="10 Engineering Problem Soultionns",
long_description=long_description, # Long description read from the the readme file
long_description_content_type="text/markdown",
packages=setuptools.find_packages(), # List of all python modules to be installed
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
], # Information to filter the project on PyPi website
python_requires='>=3.6', # Minimum version requirement of the package
py_modules=["10EngrProblems"], # Name of the python package
package_dir={'':'10EngrProblems/src'}, # Directory of the source code of the package
install_requires=[] # Install other dependencies if any
)
| 10EngrProblems | /10EngrProblems-1.0.tar.gz/10EngrProblems-1.0/setup.py | setup.py |
# 10daysWeb
**A just-for-learning web framework that can be developed in 10 days.**
  
# Preamble
For certain reasons I need a wheel of my own making, and I have only about ten days to build it.
So I am going to write a Python web framework, something I have always wanted to do but never finished.
The plan is to iterate every day, reading up on material as I write, and to record new ideas and discoveries along the way.
That way, if anyone finds themselves in a similar situation, this project might be of some help.
Ideally I can use the finished product to set up a blog or something like it.
Even if it does not succeed, I will not walk away empty-handed.
Let's get started.
## Day 1
**Every beginning is hard, and I am surely not the only one who feels lost at the start of a project.**
First I downloaded the source of version 0.1 of the popular framework Flask. In just over three hundred lines it already contains every feature a web framework needs, plus a usage example. [How to download the earliest commit](#how-to-download-the-earliest-commit)
For the first and simplest version I want to build, Flask is still too complex, so I extracted only the key component, `route`, to implement first.
`Route` manages which paths and methods a web application responds to. Through decorators, the framework registers all user functions at startup and calls them automatically when a request matches.
    @testApp.route('/', methods=['GET'])
    def hello():
        return 'hello world'
A `Rule` represents a single path to be handled; it consists mainly of `url`, `methods` and `endpoint`.
`methods` holds a list of HTTP methods, i.e. the request types to handle, while `endpoint` is the `Callable` that actually produces the response content; it can be a function or a class.
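A minimal sketch of such a rule (names assumed here, simplified from the real `Rule` class that appears later in `app.py`):

    class Rule:
        def __init__(self, url, methods, endpoint):
            self.url = url            # path to handle, e.g. '/'
            self.methods = methods    # HTTP method names, e.g. ['GET']
            self.endpoint = endpoint  # callable that produces the response

        def match(self, url, method):
            # exact match only; the real implementation also supports dynamic segments
            return url == self.url and method in self.methods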
For the methods HTTP defines, and for the message formats and status codes we will need to consult later, see [RFC 2616](https://tools.ietf.org/html/rfc2616).
We are still missing the code that listens for and exchanges HTTP messages. The asyncio module added in Python 3.4 provides exactly this, and the [official documentation](http://asyncio.readthedocs.io) happens to give a minimal example.
`asyncio.start_server` takes three basic arguments: `client_connected_cb`, which is invoked automatically when a request arrives, plus the address and port to listen on.
`client_connected_cb` in turn must accept two arguments, `reader` and `writer`, used respectively to read the request message and to write back the response.
Inside `client_connected_cb` I added simple code that extracts the request path so it can be matched against the registered application functions.
I have also defined constants for all the HTTP methods, although they are not matched against the request yet.
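A rough sketch of that wiring (illustrative only; the request parsing here is deliberately minimal):

    import asyncio

    async def client_connected_cb(reader, writer):
        data = await reader.read(2 ** 16)                 # raw HTTP request bytes
        request_line = data.split(b'\r\n', 1)[0]          # e.g. b'GET / HTTP/1.1'
        parts = request_line.split(b' ')
        path = parts[1].decode() if len(parts) > 1 else '/'
        body = f'hello from {path}'.encode()
        writer.write(b'HTTP/1.1 200 OK\r\n\r\n' + body)   # minimal response
        await writer.drain()
        writer.close()

    loop = asyncio.get_event_loop()
    server = loop.run_until_complete(
        asyncio.start_server(client_connected_cb, 'localhost', 8000))
    loop.run_forever()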
With that we have a runnable "web framework". For now it only counts as a prototype, but it is already enough to print that famous line of the century:
Hello World!
## Day 2
**We have a prototype, but many aspects still need work.**
I used an open-source third-party library to parse HTTP messages, and implemented `Request` and `Response` to abstract requests and responses.
I copied the HTTP status codes out of the RFC and put them in `utils.py` together with the methods.
I tried defining an exception; the initial idea is that users of the framework can raise it at any point to return an HTTP error status directly, with `content` there to support custom error pages. This part is still uncertain, and I may end up providing an `@error_handler` style hook to customize the behavior when an exception occurs.
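A sketch of the raise-an-exception idea (usage is illustrative; the exception itself lives in `exceptions.py`):

    from tendaysweb import HttpException

    async def endpoint(request):
        raise HttpException(404)  # return a 404 straight from an endpoint
        # or: raise HttpException(403, 'forbidden') to attach a custom message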
Added logging, but nothing shows up in my terminal yet; to be resolved.
I chose the standard library `asyncio` because I want the framework to support async. The reworked `handle` method reflects the basic flow of processing a single request, but it still looks rough, and I have not fully sorted out my thinking on async yet.
## Day 3
Code-wise, today's changes are small.
I cleaned up the logic of the `handle` method. User functions are now required to be coroutines, but later I must also provide async wrappers for databases and file I/O, otherwise the framework still is not `truly async`.
I adjusted how the request is read from the stream: the third-party parsing library now decides whether the message is complete. There is no need to agonize over this, because in a real deployment nginx/apache and friends will handle it for us.
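Roughly, the reading strategy now looks like this (simplified from `read_http_message` in `app.py`; `ParseProtocol` is the small callback class defined there):

    import httptools
    from tendaysweb.app import ParseProtocol

    async def read_http_message(reader):
        protocol = ParseProtocol()                      # collects url/headers/body, sets .completed
        parser = httptools.HttpRequestParser(protocol)
        while True:
            data = await reader.read(2 ** 16)
            parser.feed_data(data)
            if protocol.completed:                      # the parser decides when the message ends
                return protocol
            if data == b'':                             # connection closed before a full message
                return None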
Main work ahead:
- Finish `debug mode`, with automatic reloading of user functions
- Add support for static file routes and pattern-matching routes
- Bring in a template engine and an async wrapper around calling it
## Day 4
Added support for dynamic URL matching; paths can now be matched in the following form:
    @app.route('/<name>', methods=['GET'])
    async def show_name(request, name):
        return Response(content=f'hello {name}')
After some thought I feel static file routing can be covered entirely by users adding their own dynamic-match routes, and failing that the web server can do it, so I decided to set that part aside for now.
Added the `errorhandler` decorator; it can now be used to customize the behavior and the response returned when an exception occurs.
Adjusted the exception-catching mechanism: when no matching user function is found, a 404 is now raised correctly, while unexpected exceptions inside user functions are uniformly handled as a 500 status.
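A sketch of how the error handler decorator can be used (it is named `error_handler` in `app.py`, and the registered handler is awaited there with no arguments):

    @app.error_handler(404)
    async def not_found():
        return Response(status_code=404, content='<h1>Nothing here</h1>')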
## Day 5 & 6
Added the `run_before` decorator, used to run initialization code before the server starts; it is passed the event loop `loop` by default.
Uploaded this ~~embarrassing~~ framework to pip; it can now be installed with `pip install 10daysweb`.
Tried writing a todo-list application as a demo; after fiddling with the frontend for half a day it felt too rushed, so I decided to hook it up to a ~~Telegram Bot~~ WeChat mini program instead.
Added unit tests and wrote an initial test case for URL matching.
## Day 7
Added a signal decorator; the initial idea is to use it to initialize and close the database connection pool before the server starts and after it shuts down.
    @app.signal(type='run_before_start')
    def foo(loop):
        '''init database connection pool'''
Added a corresponding unknown-signal-type exception; the WeChat mini program API is in progress.
## How to download the earliest commit
As a well-known open-source project, Flask has accumulated thousands of commits on GitHub.
Most annoyingly, GitHub's commit list page offers no way to jump to a given page.
Here is a method that is not very elegant, but genuinely faster.
First `git clone` the target project locally.
Use the `--reverse` flag to invert the log and grab the id of the earliest commit in the history:
    git log --reverse
Then open any commit on GitHub and replace the id in its URL with that one.
Oh, and you still need to click `Browse files`. | 10daysweb | /10daysweb-0.1.3.tar.gz/10daysweb-0.1.3/README.md | README.md |
from setuptools import setup, find_packages
from codecs import open
from os import path
__version__ = '0.1.3'
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'readme.md'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='10daysweb',
version=__version__,
description='Async web framework for learning',
long_description=long_description,
url='https://github.com/bace1996/10daysWeb',
author='Cykrt Chan',
author_email='cykrt1996@gmail.com',
license='MIT',
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Environment :: Web Environment',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
],
keywords='web framework async',
packages=find_packages(exclude=['docs', 'demos', 'tests*']),
include_package_data=True,
install_requires=[
'httptools',
],
python_requires='>=3.6',
)
| 10daysweb | /10daysweb-0.1.3.tar.gz/10daysweb-0.1.3/setup.py | setup.py |
# -*- coding: utf-8 -*-
from typing import Dict
from .utils import STATUS_CODES
class Response:
def __init__(self,
status_code: int = 200,
headers: Dict[str, str] = {},
content='',
options: Dict[str, str] = {}):
self.status_code = status_code
self.headers = headers
self.content = content
self.reason_phrase = options.get('reason_phrase',
STATUS_CODES.get(
self.status_code,
'Unknown Error'))
def to_payload(self):
response: bytes = \
f'HTTP/1.1 {self.status_code} {self.reason_phrase}\r\n'.encode()
for k, v in self.headers.items():
response += f'{k}: {v}\r\n'.encode()
response += b'\r\n'
response += self.content.encode()
return response
| 10daysweb | /10daysweb-0.1.3.tar.gz/10daysweb-0.1.3/tendaysweb/response.py | response.py |
# -*- coding: utf-8 -*-
import asyncio
import logging
import inspect
import re
from typing import Callable, List, AnyStr, Dict, Tuple, Any
import httptools
from .request import Request
from .response import Response
from .exceptions import HttpException, UnknownSignalException
from .utils import HTTP_METHODS, STATUS_CODES, DEFAULT_ERROR_PAGE_TEMPLATE
logger = logging.getLogger('tendaysweb')
logging.basicConfig(level=logging.INFO)
class TenDaysWeb():
_signal_types = ['run_before_start', 'run_after_close']
def __init__(self, application_name):
"""
:param application_name: just name your TenDaysWeb Instance
"""
self._app_name = application_name
self._rule_list: List[Rule] = []
self._error_handlers: Dict[int, Callable] = {}
self._signal_func: Dict[str, List[Callable]] = {
key: []
for key in TenDaysWeb._signal_types
}
def route(self, url: str, methods: List = HTTP_METHODS, **options):
"""
A decorator that is used to register a view function for a
given URL rule. Example::
@app.route('/')
def index():
return 'Hello World'
"""
def decorator(func):
self._rule_list.append(Rule(url, methods, func, **options))
return func
return decorator
def signal(self, signal_type: str):
"""
A decorator that is used to register a function supposed to be called
before start_server
"""
def decorator(func):
if signal_type not in TenDaysWeb._signal_types:
raise UnknownSignalException(signal_type, func.__name__)
self._signal_func[signal_type].append(func)
return func
return decorator
def error_handler(self, error_code):
"""
This decorator is used to customize the behavior of an error
:param error_code:a http status code
"""
async def decorator(func):
self._error_handlers[error_code] = func
return func
return decorator
def match_request(self, request) -> Tuple[Callable, Dict[str, Any]]:
"""
Match each request to an endpoint
if no endpoint is eligible, return None, None
"""
handler = kwargs = None
for rule in self._rule_list:
kwargs = rule.match(request.url, request.method)
if kwargs is not None:
handler = rule._endpoint
break
return handler, kwargs
async def process_request(
self,
request: Request,
handler: Callable,
kwargs: Dict[str, Any]):
"""
:param request: Request instance
:param handler: an endpoint
:param kwargs: the additional parameters used to call the endpoint
"""
try:
return await handler(request, **kwargs)
except HttpException as e:
# catch exceptions the user explicitly raises in an endpoint
handler = self._error_handlers.get(e.err_code, None)
if handler is None:
return Response(
status_code=e.err_code,
content=TenDaysWeb.generate_default_error_page(
e.err_code))
return await handler()
async def handler(self, reader, writer):
"""
The handler handling each request
:param request: the Request instance
:return: The Response instance
"""
while True:
request: Request = await self.read_http_message(reader)
response: Response = Response()
if request is None:
writer.close()
break
handler, kwargs = self.match_request(request)
if handler is None:
response.status_code = 404
response.content = TenDaysWeb.generate_default_error_page(
response.status_code)
else:
try:
response = await self.process_request(
request, handler, kwargs)
except Exception as e:
logger.error(str(e))
response = Response(
status_code=500,
content=TenDaysWeb.generate_default_error_page(500))
# send payload
writer.write(response.to_payload())
try:
await writer.drain()
writer.write_eof()
except ConnectionResetError:
writer.close()
break
async def start_server(self,
loop,
http_handler: Callable,
websocket_handler=None,
address: str = 'localhost',
port: int=8000,):
"""
start server
"""
for func in self._signal_func['run_before_start']:
if inspect.iscoroutinefunction(func):
await func(loop)
else:
func()
await asyncio.start_server(http_handler, address, port)
for func in self._signal_func['run_after_close']:
if inspect.iscoroutinefunction(func):
await func(loop)
else:
func()
def run(self,
host: str = "localhost",
port: int = 8000,
debug: bool = False):
"""
start the http server
:param host: The listening host
:param port: The listening port
:param debug: whether it is in debug mod or not
"""
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(
self.start_server(loop, self.handler, None, host, port))
logger.info(f'Start listening {host}:{port}')
loop.run_forever()
except KeyboardInterrupt:
loop.close()
async def read_http_message(
self, reader: asyncio.streams.StreamReader) -> Request:
"""
this function reads data in a loop
until a complete http message has been received
:param reader: the asyncio.streams.StreamReader instance
:return: The Request instance
"""
protocol = ParseProtocol()
parser = httptools.HttpRequestParser(protocol)
while True:
data = await reader.read(2 ** 16)
try:
parser.feed_data(data)
except httptools.HttpParserUpgrade:
raise HttpException(400)
if protocol.completed:
return Request.load_from_parser(parser, protocol)
if data == b'':
return None
@staticmethod
def generate_default_error_page(status, reason='', content=''):
return DEFAULT_ERROR_PAGE_TEMPLATE.format(
**{'status': status,
'reason': STATUS_CODES.get(status, 'Unknown'),
'content': content})
class Rule():
parttern = re.compile(r'\<([^/]+)\>')
def __init__(self, url: AnyStr, methods: List, endpoint: Callable,
**options):
"""
A rule describes a url is expected to be handled and how to handle it.
:param url: url to be handled
:param method: list of HTTP method name
:param endpoint: the actual function/class process this request
"""
self._url = url
self._methods = methods
self._options = options
self._endpoint = endpoint
self._param_name_list = Rule.parttern.findall(url)
self._url_pattern = re.compile(
f'''^{Rule.parttern.sub('([^/]+)', url)}$''')
def match(self, url: str, method: str):
"""
this function is used to judge whether a (url, method) matches the Rule
"""
res = self._url_pattern.search(url)
if method in self._methods and res is not None:
return dict(zip(
self._param_name_list,
[res.group(i) for i in range(
1, self._url_pattern.groups + 1)]))
return None
class ParseProtocol:
"""
The protocol for HttpRequestParser
"""
def __init__(self) -> None:
self.url: str = ''
self.headers: Dict[str, str] = {}
self.body: bytes = b''
self.completed: bool = False
def on_url(self, url: bytes) -> None:
self.url = url.decode()
def on_header(self, name: bytes, value: bytes) -> None:
self.headers[name.decode()] = value.decode()
def on_body(self, body: bytes) -> None:
self.body += body
def on_message_complete(self) -> None:
self.completed = True
| 10daysweb | /10daysweb-0.1.3.tar.gz/10daysweb-0.1.3/tendaysweb/app.py | app.py |
# -*- coding: utf-8 -*-
HTTP_METHODS = [
'OPTIONS',
'GET',
'HEAD',
'POST',
'PUT',
'DELETE',
'TRACE',
'CONNECT',
]
STATUS_CODES = {
100: 'Continue',
101: 'Switching Protocols',
200: 'OK',
201: 'Created',
202: 'Accepted',
203: 'Non-Authoritative Information',
204: 'No Content',
205: 'Reset Content',
206: 'Partial Content',
300: 'Multiple Choices',
301: 'Moved Permanently',
302: 'Found',
303: 'See Other',
304: 'Not Modified',
305: 'Use Proxy',
307: 'Temporary Redirect',
400: 'Bad Request',
401: 'Unauthorized',
402: 'Payment Required',
403: 'Forbidden',
404: 'Not Found',
405: 'Method Not Allowed',
406: 'Not Acceptable',
407: 'Proxy Authentication Required',
408: 'Request Time-out',
409: 'Conflict',
410: 'Gone',
411: 'Length Required',
412: 'Precondition Failed',
413: 'Request Entity Too Large',
414: 'Request-URI Too Large',
415: 'Unsupported Media Type',
416: 'Requested range not satisfiable',
417: 'Expectation Failed',
500: 'Internal Server Error',
501: 'Not Implemented',
502: 'Bad Gateway',
503: 'Service Unavailable',
504: 'Gateway Time-out',
505: 'HTTP Version not supported',
}
DEFAULT_ERROR_PAGE_TEMPLATE = '''
<html>
<head>
<title>{status} {reason}</title>
</head>
<body>
<h1>{status} {reason}</h1>
{content}
</body>
</html>'''
| 10daysweb | /10daysweb-0.1.3.tar.gz/10daysweb-0.1.3/tendaysweb/utils.py | utils.py |
# -*- coding: utf-8 -*-
from .app import TenDaysWeb
from .response import Response
from .exceptions import HttpException | 10daysweb | /10daysweb-0.1.3.tar.gz/10daysweb-0.1.3/tendaysweb/__init__.py | __init__.py |
# -*- coding: utf-8 -*-
from typing import Dict
class Request:
def __init__(self, method: str, url: str, version: str,
headers: Dict[str, str], content: str):
self.method = method
self.url = url
self.version = version
self.headers = headers
self.content = content
@classmethod
def load_from_parser(cls, parser: 'HttpRequestParser',
protocol: 'ParseProtocol') -> 'Request':
return cls(
parser.get_method().decode(),
protocol.url,
parser.get_http_version(),
protocol.headers, protocol.body)
| 10daysweb | /10daysweb-0.1.3.tar.gz/10daysweb-0.1.3/tendaysweb/request.py | request.py |
# -*- coding: utf-8 -*-
from .utils import STATUS_CODES
class HttpException(Exception):
def __init__(self, err_code: int, err: str=''):
self.err_code = err_code
self.err = err if err else STATUS_CODES.get(err_code, 'Unknown Error')
class UnknownSignalException(Exception):
def __init__(self, signal_type: str, func_name: str):
self.signal_type = signal_type
self.func_name = func_name
| 10daysweb | /10daysweb-0.1.3.tar.gz/10daysweb-0.1.3/tendaysweb/exceptions.py | exceptions.py |
# The colorlog module calls `colorama.init()`, which breaks console colors when running under the PyCharm IDE.
# Import colorama first to trigger `init()`, then call `deinit()` to undo it if execution is hosted by PyCharm.
import os
import colorama
# noinspection PyUnresolvedReferences
import colorlog
if 'PYCHARM_HOSTED' in os.environ:
colorama.deinit()
| 10dulkar17-s3-aws | /10dulkar17_s3_aws-0.0.6-py3-none-any.whl/10dulkar17_s3_aws/__init__.py | __init__.py |
#!/usr/bin/env python3
# Python Library Imports
import logging
import boto3
from botocore.exceptions import ClientError, NoCredentialsError, BotoCoreError
def upload_file(bucket, key, filename, encryption=None):
logging.info(f"Uploading File {filename} to {key}")
try:
if encryption:
bucket.upload_file(
filename, key, ExtraArgs={"ServerSideEncryption": encryption}
)
else:
bucket.upload_file(filename, key)
except Exception as err:
logging.exception(f"Exception uploading file: {err}")
raise
| 10dulkar17-s3-aws | /10dulkar17_s3_aws-0.0.6-py3-none-any.whl/10dulkar17_s3_aws/sdk/wallpaper_s3_client.py | wallpaper_s3_client.py |
115 Wangpan
===========
|Build| |PyPI version|
115 Wangpan (115网盘 or 115云) is an unofficial Python API and SDK for 115.com. Supported Python versions are 2.6, 2.7, 3.3, 3.4.
* Documentation: http://115wangpan.readthedocs.org
* GitHub: https://github.com/shichao-an/115wangpan
* PyPI: https://pypi.python.org/pypi/115wangpan/
Features
--------
* Authentication
* Persistent session
* Tasks management: BitTorrent and links
* Files management: uploading, downloading, searching, and editing
Installation
------------
`libcurl <http://curl.haxx.se/libcurl/>`_ is required. Install dependencies before installing the python package:
Ubuntu:
.. code-block:: bash
$ sudo apt-get install build-essential libcurl4-openssl-dev python-dev
Fedora:
.. code-block:: bash
$ sudo yum groupinstall "Development Tools"
$ sudo yum install libcurl libcurl-devel python-devel
Then, you can install with pip:
.. code-block:: bash
$ pip install 115wangpan
Or, if you want to install the latest from GitHub:
.. code-block:: bash
$ pip install git+https://github.com/shichao-an/115wangpan
Usage
-----
.. code-block:: python
>>> import u115
>>> api = u115.API()
>>> api.login('username@example.com', 'password')
True
>>> tasks = api.get_tasks()
>>> task = tasks[0]
>>> print task.name
咲-Saki- 阿知賀編 episode of side-A
>>> print task.status_human
TRANSFERRED
>>> print task.size_human
1.6 GiB
>>> files = task.list()
>>> files
[<File: 第8局 修行.mkv>]
>>> f = files[0]
>>> f.url
u'http://cdnuni.115.com/some-very-long-url.mkv'
>>> f.directory
<Directory: 咲-Saki- 阿知賀編 episode of side-A>
>>> f.directory.parent
<Directory: 离线下载>
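Files can be downloaded as well; a minimal sketch using ``File.download`` (the keyword arguments shown are its defaults):

.. code-block:: python

    >>> f.download(show_progress=True, resume=True)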
CLI commands
------------
* 115 down: for downloading files
* 115 up: for creating tasks from torrents and links
.. |Build| image:: https://api.travis-ci.org/shichao-an/115wangpan.png?branch=master
:target: http://travis-ci.org/shichao-an/115wangpan
.. |PyPI version| image:: https://img.shields.io/pypi/v/115wangpan.png
:target: https://pypi.python.org/pypi/115wangpan/
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/README.rst | README.rst |
Changelog
=========
0.7.6 (2015-08-01)
------------------
- Fixed DRY_RUN message print by using print_msg that handles PY2 and PY3 strings
- Added -F/--files-only option to 115 down
- Fixed files_only parse error
- Fixed unexpected kwargs for get_tasks
- Fixed Task against added 'url' attr
0.7.5 (2015-07-02)
------------------
- Added environs to make a "workaround" that deals with issue #27
- Fixed Task.is_directory to include 'BEING TRANSFERRED' exception
0.7.4 (2015-06-20)
------------------
- Fixed getting download URL error due to another API change (#23)
0.7.3 (2015-06-16)
------------------
- Fixed previous broken release that does not contain CLI command 115
0.7.2 (2015-06-16)
------------------
- Fixed getting download URL error due to API change (#23)
0.7.1 (2015-06-15)
------------------
- Fixed argparse's required subparser behavior in Python 2.7 (http://bugs.python.org/issue9253)
0.7.0 (2015-06-14)
------------------
- Added public methods: move, edit, mkdir (#13, #19)
- Added Pro API support for getting download URL (#21)
- Added ``receiver_directory``
- Added logging utility and debugging hooks (#22)
- Combined 115down and 115up into a single 115 commands
- Supported Python 3.4 by removing ``__del__``
0.6.0 (2015-05-17)
------------------
- Deprecated ``auto_logout`` argument
- Added cookies support to CLI commands
0.5.1 (2015-04-20)
------------------
- 115down: fixed sub-entry range parser to ordered list
0.5.0 (2015-04-12)
------------------
- 115down: supported both keeping directory structure and flattening
- Fixed ``Task`` to not inherit ``Directory``
0.4.2 (2015-04-03)
------------------
- Fixed broken upload due to source page change (``_parse_src_js_var``)
0.4.1 (2015-04-03)
------------------
- 115down: added range support for argument ``sub_num`` (#14)
- 115down: added size display for file and task entries
0.4.0 (2015-03-23)
------------------
- Added persistent session (cookies) feature
- Added search API
- Added CLI commands: 115down and 115up
- Fixed #10
0.3.1 (2015-02-03)
------------------
- Fixed broken release 0.3.0 due to a missing dependency
0.3.0 (2015-02-03)
------------------
- Used external package "homura" to replace downloader utility
- Merge #8: added add_task_url API
0.2.4 (2014-10-09)
------------------
- Fixed #5: add isatty() so progress refreshes less frequently on non-tty
- Fixed parse_src_js_var
0.2.3 (2014-09-23)
------------------
- Fixed #2: ``show_progress`` argument
- Added resume download feature
0.2.2 (2014-09-20)
------------------
- Added system dependencies to documentation
0.2.1 (2014-09-20)
------------------
- Fixed ``Task.status_human`` error
0.2.0 (2014-09-20)
------------------
- Added download feature to the API and ``download`` method to ``u115.File``
- Added elaborate exceptions
- Added ``auto_logout`` optional argument to ``u115.API.__init__``
- Updated Task status info
0.1.1 (2014-09-11)
------------------
- Fixed broken sdist release of v0.1.0.
0.1.0 (2014-09-11)
------------------
- Initial release.
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/CHANGELOG.rst | CHANGELOG.rst |
from setuptools import setup, find_packages
setup(
name='115wangpan',
version='0.7.6',
description="Unofficial Python API wrapper for 115.com",
long_description=open('README.rst').read(),
keywords='115 wangpan pan cloud lixian',
author='Shichao An',
author_email='shichao.an@nyu.edu',
url='https://github.com/shichao-an/115wangpan',
license='BSD',
install_requires=open('requirements.txt').read().splitlines(),
packages=find_packages(exclude=['tests', 'docs']),
scripts=[
'bin/115',
'bin/115down',
'bin/115up',
],
include_package_data=True,
zip_safe=False,
classifiers=[
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 3",
],
)
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/setup.py | setup.py |
# -*- coding: utf-8 -*-
from __future__ import print_function, absolute_import
try:
import configparser
except ImportError:
import ConfigParser as configparser
import os
import logging
from u115.utils import pjoin, eval_path
_d = os.path.dirname(__file__)
user_dir = eval_path('~')
PROJECT_PATH = os.path.abspath(pjoin(_d, os.pardir))
PROJECT_CREDENTIALS = pjoin(PROJECT_PATH, '.credentials')
USER_CREDENTIALS = pjoin(user_dir, '.115')
CREDENTIALS = None
COOKIES_FILENAME = pjoin(user_dir, '.115cookies')
LOGGING_API_LOGGER = 'API'
LOGGING_FORMAT = "%(levelname)s:%(name)s:%(funcName)s: %(message)s"
LOGGING_LEVEL = logging.ERROR
DEBUG_REQ_FMT = """
TYPE: Request
FUNC: %s
URL: %s
METHOD: %s
PARAMS: %s
DATA: %s
"""
DEBUG_RES_FMT = """
TYPE: Response
FUNC: %s
STATE: %s
CONTENT: %s
"""
# Initialize logger
logger = logging.getLogger(LOGGING_API_LOGGER)
handler = logging.StreamHandler()
formatter = logging.Formatter(LOGGING_FORMAT)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(LOGGING_LEVEL)
if os.path.exists(PROJECT_CREDENTIALS):
CREDENTIALS = PROJECT_CREDENTIALS
elif os.path.exists(USER_CREDENTIALS):
CREDENTIALS = USER_CREDENTIALS
CONFIG = configparser.ConfigParser()
def get_credential(section='default'):
if os.environ.get('TRAVIS_TEST'):
username = os.environ.get('TEST_USER_USERNAME')
password = os.environ.get('TEST_USER_PASSWORD')
if username is None or password is None:
msg = 'No credentials environment variables found.'
raise ConfigError(msg)
elif CREDENTIALS is not None:
CONFIG.read(CREDENTIALS)
if CONFIG.has_section(section):
items = dict(CONFIG.items(section))
try:
username = items['username']
password = items['password']
except KeyError as e:
msg = 'Key "%s" not found in credentials file.' % e.args[0]
raise ConfigError(msg)
else:
msg = 'No section named "%s" found in credentials file.' % section
raise ConfigError(msg)
else:
msg = 'No credentials file found.'
raise ConfigError(msg)
return {'username': username, 'password': password}
class ConfigError(Exception):
pass
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/u115/conf.py | conf.py |
from __future__ import print_function, absolute_import
import datetime
import errno
import os
import six
import sys
import time
from requests.utils import quote as _quote
from requests.utils import unquote as _unquote
PY3 = sys.version_info[0] == 3
STREAM = sys.stderr
STRPTIME_FORMATS = ['%Y-%m-%d %H:%M', '%Y-%m-%d']
if PY3:
bin_type = bytes
txt_type = str
else:
bin_type = str
txt_type = unicode
str_types = (bin_type, txt_type)
def get_timestamp(length):
"""Get a timestamp of `length` in string"""
s = '%.6f' % time.time()
whole, frac = map(int, s.split('.'))
res = '%d%d' % (whole, frac)
return res[:length]
def get_utcdatetime(timestamp):
return datetime.datetime.utcfromtimestamp(timestamp)
def string_to_datetime(s):
for f in STRPTIME_FORMATS:
try:
return datetime.datetime.strptime(s, f)
except ValueError:
pass
msg = 'Time data %s does not match any formats in %s' \
% (s, STRPTIME_FORMATS)
raise ValueError(msg)
def eval_path(path):
return os.path.abspath(os.path.expanduser(path))
def quote(s):
res = s
if isinstance(res, six.text_type):
res = s.encode('utf-8')
return _quote(res)
def unquote(s):
res = s
if not PY3:
if isinstance(res, six.text_type):
res = s.encode('utf-8')
return _unquote(res)
def utf8_encode(s):
res = s
if isinstance(res, six.text_type):
res = s.encode('utf-8')
return res
def pjoin(*args):
"""Short cut for os.path.join"""
return os.path.join(*args)
def mkdir_p(path):
"""mkdir -p path"""
if PY3:
return os.makedirs(path, exist_ok=True)
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/u115/utils.py | utils.py |
# -*- coding: utf-8 -*-
from __future__ import print_function, absolute_import
import humanize
import inspect
import json
import logging
import os
import re
import requests
import time
from hashlib import sha1
from bs4 import BeautifulSoup
from requests.cookies import RequestsCookieJar
from u115 import conf
from u115.utils import (get_timestamp, get_utcdatetime, string_to_datetime,
eval_path, quote, unquote, utf8_encode, txt_type, PY3)
from homura import download
if PY3:
from http import cookiejar as cookielib
else:
import cookielib
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36'
LOGIN_URL = 'http://passport.115.com/?ct=login&ac=ajax&is_ssl=1'
LOGOUT_URL = 'http://passport.115.com/?ac=logout'
CHECKPOINT_URL = 'http://passport.115.com/?ct=ajax&ac=ajax_check_point'
class RequestsLWPCookieJar(cookielib.LWPCookieJar, RequestsCookieJar):
""":class:`requests.cookies.RequestsCookieJar` compatible
:class:`cookielib.LWPCookieJar`"""
pass
class RequestsMozillaCookieJar(cookielib.MozillaCookieJar, RequestsCookieJar):
""":class:`requests.cookies.RequestsCookieJar` compatible
:class:`cookielib.MozillaCookieJar`"""
pass
class RequestHandler(object):
"""
Request handler that maintains session
:ivar session: underlying :class:`requests.Session` instance
"""
def __init__(self):
self.session = requests.Session()
self.session.headers['User-Agent'] = USER_AGENT
def get(self, url, params=None):
"""
Initiate a GET request
"""
r = self.session.get(url, params=params)
return self._response_parser(r, expect_json=False)
def post(self, url, data, params=None):
"""
Initiate a POST request
"""
r = self.session.post(url, data=data, params=params)
return self._response_parser(r, expect_json=False)
def send(self, request, expect_json=True, ignore_content=False):
"""
Send a formatted API request
:param request: a formatted request object
:type request: :class:`.Request`
:param bool expect_json: if True, raise :class:`.InvalidAPIAccess` if
response is not in JSON format
:param bool ignore_content: whether to ignore setting content of the
Response object
"""
r = self.session.request(method=request.method,
url=request.url,
params=request.params,
data=request.data,
files=request.files,
headers=request.headers)
return self._response_parser(r, expect_json, ignore_content)
def _response_parser(self, r, expect_json=True, ignore_content=False):
"""
:param :class:`requests.Response` r: a response object of the Requests
library
:param bool expect_json: if True, raise :class:`.InvalidAPIAccess` if
response is not in JSON format
:param bool ignore_content: whether to ignore setting content of the
Response object
"""
if r.ok:
try:
j = r.json()
return Response(j.get('state'), j)
except ValueError:
# No JSON-encoded data returned
if expect_json:
logger = logging.getLogger(conf.LOGGING_API_LOGGER)
logger.debug(r.text)
raise InvalidAPIAccess('Invalid API access.')
# Raw response
if ignore_content:
res = Response(True, None)
else:
res = Response(True, r.text)
return res
else:
r.raise_for_status()
class Request(object):
"""Formatted API request class"""
def __init__(self, url, method='GET', params=None, data=None,
files=None, headers=None):
"""
Create a Request object
:param str url: URL
:param str method: request method
:param dict params: request parameters
:param dict data: form data
:param dict files: mulitpart form data
:param dict headers: custom request headers
"""
self.url = url
self.method = method
self.params = params
self.data = data
self.files = files
self.headers = headers
self._debug()
def _debug(self):
logger = logging.getLogger(conf.LOGGING_API_LOGGER)
level = logger.getEffectiveLevel()
if level == logging.DEBUG:
func = inspect.stack()[2][3]
msg = conf.DEBUG_REQ_FMT % (func, self.url, self.method,
self.params, self.data)
logger.debug(msg)
class Response(object):
"""
Formatted API response class
:ivar bool state: whether API access is successful
:ivar dict content: result content
"""
def __init__(self, state, content):
self.state = state
self.content = content
self._debug()
def _debug(self):
logger = logging.getLogger(conf.LOGGING_API_LOGGER)
level = logger.getEffectiveLevel()
if level == logging.DEBUG:
func = inspect.stack()[4][3]
msg = conf.DEBUG_RES_FMT % (func, self.state, self.content)
logger.debug(msg)
class API(object):
"""
Request and response interface
:ivar passport: :class:`.Passport` object associated with this interface
:ivar http: :class:`.RequestHandler` object associated with this
interface
:cvar int num_tasks_per_page: default number of tasks per page/request
:cvar str web_api_url: files API url
:cvar str aps_natsort_url: natural sort files API url
:cvar str proapi_url: pro API url for downloads
"""
num_tasks_per_page = 30
web_api_url = 'http://web.api.115.com/files'
aps_natsort_url = 'http://aps.115.com/natsort/files.php'
proapi_url = 'http://proapi.115.com/app/chrome/down'
referer_url = 'http://115.com'
def __init__(self, persistent=False,
cookies_filename=None, cookies_type='LWPCookieJar'):
"""
:param bool auto_logout: whether to logout automatically when
:class:`.API` object is destroyed
.. deprecated:: 0.6.0
Call :meth:`.API.logout` explicitly
:param bool persistent: whether to use persistent session that stores
cookies on disk
:param str cookies_filename: path to the cookies file, use default
path (`~/.115cookies`) if None
:param str cookies_type: a string representing
:class:`cookielib.FileCookieJar` subclass,
`LWPCookieJar` (default) or `MozillaCookieJar`
"""
self.persistent = persistent
self.cookies_filename = cookies_filename
self.cookies_type = cookies_type
self.passport = None
self.http = RequestHandler()
self.logger = logging.getLogger(conf.LOGGING_API_LOGGER)
# Cache attributes to decrease API hits
self._user_id = None
self._username = None
self._signatures = {}
self._upload_url = None
self._lixian_timestamp = None
self._root_directory = None
self._downloads_directory = None
self._receiver_directory = None
self._torrents_directory = None
self._task_count = None
self._task_quota = None
if self.persistent:
self.load_cookies()
def _reset_cache(self):
self._user_id = None
self._username = None
self._signatures = {}
self._upload_url = None
self._lixian_timestamp = None
self._root_directory = None
self._downloads_directory = None
self._receiver_directory = None
self._torrents_directory = None
self._task_count = None
self._task_quota = None
def _init_cookies(self):
# RequestsLWPCookieJar or RequestsMozillaCookieJar
cookies_class = globals()['Requests' + self.cookies_type]
f = self.cookies_filename or conf.COOKIES_FILENAME
self.cookies = cookies_class(f)
def load_cookies(self, ignore_discard=True, ignore_expires=True):
"""Load cookies from the file :attr:`.API.cookies_filename`"""
self._init_cookies()
if os.path.exists(self.cookies.filename):
self.cookies.load(ignore_discard=ignore_discard,
ignore_expires=ignore_expires)
self._reset_cache()
def save_cookies(self, ignore_discard=True, ignore_expires=True):
"""Save cookies to the file :attr:`.API.cookies_filename`"""
if not isinstance(self.cookies, cookielib.FileCookieJar):
m = 'Cookies must be a cookielib.FileCookieJar object to be saved.'
raise APIError(m)
self.cookies.save(ignore_discard=ignore_discard,
ignore_expires=ignore_expires)
@property
def cookies(self):
"""
Cookies of the current API session (cookies getter shortcut)
"""
return self.http.session.cookies
@cookies.setter
def cookies(self, cookies):
"""
Cookies of the current API session (cookies setter shortcut)
"""
self.http.session.cookies = cookies
def login(self, username=None, password=None,
section='default'):
"""
Create the passport with ``username`` and ``password`` and log in.
If either ``username`` or ``password`` is None or omitted, the
credentials file will be parsed.
:param str username: username to login (email, phone number or user ID)
:param str password: password
:param str section: section name in the credential file
:raise: raises :class:`.AuthenticationError` if failed to login
"""
if self.has_logged_in:
return True
if username is None or password is None:
credential = conf.get_credential(section)
username = credential['username']
password = credential['password']
passport = Passport(username, password)
r = self.http.post(LOGIN_URL, passport.form)
if r.state is True:
# Bind this passport to API
self.passport = passport
passport.data = r.content['data']
self._user_id = r.content['data']['USER_ID']
return True
else:
msg = None
if 'err_name' in r.content:
if r.content['err_name'] == 'account':
msg = 'Account does not exist.'
elif r.content['err_name'] == 'passwd':
msg = 'Password is incorrect.'
raise AuthenticationError(msg)
def get_user_info(self):
"""
Get user info
:return: a dictionary of user information
:rtype: dict
"""
return self._req_get_user_aq()
@property
def user_id(self):
"""
User id of the current API user
"""
if self._user_id is None:
if self.has_logged_in:
self._user_id = self._req_get_user_aq()['data']['uid']
else:
raise AuthenticationError('Not logged in.')
return self._user_id
@property
def username(self):
"""
Username of the current API user
"""
if self._username is None:
if self.has_logged_in:
self._username = self._get_username()
else:
raise AuthenticationError('Not logged in.')
return self._username
@property
def has_logged_in(self):
"""Check whether the API has logged in"""
r = self.http.get(CHECKPOINT_URL)
if r.state is False:
return True
# If logged out, flush cache
self._reset_cache()
return False
def logout(self):
"""Log out"""
self.http.get(LOGOUT_URL)
self._reset_cache()
return True
@property
def root_directory(self):
"""Root directory"""
if self._root_directory is None:
self._load_root_directory()
return self._root_directory
@property
def downloads_directory(self):
"""Default directory for downloaded files"""
if self._downloads_directory is None:
self._load_downloads_directory()
return self._downloads_directory
@property
def receiver_directory(self):
"""Parent directory of the downloads directory"""
if self._receiver_directory is None:
self._receiver_directory = self.downloads_directory.parent
return self._receiver_directory
@property
def torrents_directory(self):
"""Default directory that stores uploaded torrents"""
if self._torrents_directory is None:
self._load_torrents_directory()
return self._torrents_directory
@property
def task_count(self):
"""
Number of tasks created
"""
self._req_lixian_task_lists()
return self._task_count
@property
def task_quota(self):
"""
Task quota (monthly)
"""
self._req_lixian_task_lists()
return self._task_quota
def get_tasks(self, count=30):
"""
Get ``count`` number of tasks
:param int count: number of tasks to get
:return: a list of :class:`.Task` objects
"""
return self._load_tasks(count)
def add_task_bt(self, filename, select=False):
"""
Add a new BT task
:param str filename: path to torrent file to upload
:param bool select: whether to select files in the torrent.
* True: it returns the opened torrent (:class:`.Torrent`) and
can then iterate files in :attr:`.Torrent.files` and
select/unselect them before calling :meth:`.Torrent.submit`
* False: it will submit the torrent with default selected files
"""
filename = eval_path(filename)
u = self.upload(filename, self.torrents_directory)
t = self._load_torrent(u)
if select:
return t
return t.submit()
def add_task_url(self, target_url):
"""
Add a new URL task
:param str target_url: the URL of the file to be downloaded
"""
return self._req_lixian_add_task_url(target_url)
def get_storage_info(self, human=False):
"""
Get storage info
:param bool human: whether return human-readable size
:return: total and used storage
:rtype: dict
"""
res = self._req_get_storage_info()
if human:
res['total'] = humanize.naturalsize(res['total'], binary=True)
res['used'] = humanize.naturalsize(res['used'], binary=True)
return res
def upload(self, filename, directory=None):
"""
Upload a file ``filename`` to ``directory``
:param str filename: path to the file to upload
:param directory: destination :class:`.Directory`, defaults to
:attr:`.API.downloads_directory` if None
:return: the uploaded file
:rtype: :class:`.File`
"""
filename = eval_path(filename)
if directory is None:
directory = self.downloads_directory
# First request
res1 = self._req_upload(filename, directory)
data1 = res1['data']
file_id = data1['file_id']
# Second request
res2 = self._req_file(file_id)
data2 = res2['data'][0]
data2.update(**data1)
return _instantiate_uploaded_file(self, data2)
def download(self, obj, path=None, show_progress=True, resume=True,
auto_retry=True, proapi=False):
"""
Download a file
:param obj: :class:`.File` object
:param str path: local path
:param bool show_progress: whether to show download progress
:param bool resume: whether to resume on unfinished downloads
identified by filename
:param bool auto_retry: whether to retry automatically upon closed
transfer until the file's download is finished
:param bool proapi: whether to use pro API
"""
url = obj.get_download_url(proapi)
download(url, path=path, session=self.http.session,
show_progress=show_progress, resume=resume,
auto_retry=auto_retry)
def search(self, keyword, count=30):
"""
Search files or directories
:param str keyword: keyword
:param int count: number of entries to be listed
"""
kwargs = {}
kwargs['search_value'] = keyword
root = self.root_directory
entries = root._load_entries(func=self._req_files_search,
count=count, page=1, **kwargs)
res = []
for entry in entries:
if 'pid' in entry:
res.append(_instantiate_directory(self, entry))
else:
res.append(_instantiate_file(self, entry))
return res
def move(self, entries, directory):
"""
Move one or more entries (file or directory) to the destination
directory
:param list entries: a list of source entries (:class:`.BaseFile`
object)
:param directory: destination directory
:return: whether the action is successful
:raise: :class:`.APIError` if something bad happened
"""
fcids = []
for entry in entries:
if isinstance(entry, File):
fcid = entry.fid
elif isinstance(entry, Directory):
fcid = entry.cid
else:
raise APIError('Invalid BaseFile instance for an entry.')
fcids.append(fcid)
if not isinstance(directory, Directory):
raise APIError('Invalid destination directory.')
if self._req_files_move(directory.cid, fcids):
for entry in entries:
if isinstance(entry, File):
entry.cid = directory.cid
entry.reload()
return True
else:
raise APIError('Error moving entries.')
def edit(self, entry, name, mark=False):
"""
Edit an entry (file or directory)
:param entry: :class:`.BaseFile` object
:param str name: new name for the entry
:param bool mark: whether to bookmark the entry
"""
fcid = None
if isinstance(entry, File):
fcid = entry.fid
elif isinstance(entry, Directory):
fcid = entry.cid
else:
raise APIError('Invalid BaseFile instance for an entry.')
is_mark = 0
if mark is True:
is_mark = 1
if self._req_files_edit(fcid, name, is_mark):
entry.reload()
return True
else:
raise APIError('Error editing the entry.')
def mkdir(self, parent, name):
"""
Create a directory
:param parent: the parent directory
:param str name: the name of the new directory
:return: the new directory
:rtype: :class:`.Directory`
"""
pid = None
cid = None
if isinstance(parent, Directory):
pid = parent.cid
else:
raise APIError('Invalid Directory instance.')
cid = self._req_files_add(pid, name)['cid']
return self._load_directory(cid)
def _req_offline_space(self):
"""Required before accessing lixian tasks"""
url = 'http://115.com/'
params = {
'ct': 'offline',
'ac': 'space',
'_': get_timestamp(13)
}
_sign = os.environ.get('U115_BROWSER_SIGN')
if _sign is not None:
_time = os.environ.get('U115_BROWSER_TIME')
if _time is None:
msg = 'U115_BROWSER_TIME is required given U115_BROWSER_SIGN.'
raise APIError(msg)
params['sign'] = _sign
params['time'] = _time
params['uid'] = self.user_id
req = Request(url=url, params=params)
r = self.http.send(req)
if r.state:
self._signatures['offline_space'] = r.content['sign']
self._lixian_timestamp = r.content['time']
else:
msg = 'Failed to retrieve signatures.'
raise RequestFailure(msg)
def _req_lixian_task_lists(self, page=1):
"""
This request will cause the system to create a default downloads
directory if it does not exist
"""
url = 'http://115.com/lixian/'
params = {'ct': 'lixian', 'ac': 'task_lists'}
self._load_signatures()
data = {
'page': page,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
self._task_count = res.content['count']
self._task_quota = res.content['quota']
return res.content['tasks']
else:
msg = 'Failed to get tasks.'
raise RequestFailure(msg)
def _req_lixian_get_id(self, torrent=False):
"""Get `cid` of lixian space directory"""
url = 'http://115.com/'
params = {
'ct': 'lixian',
'ac': 'get_id',
'torrent': 1 if torrent else None,
'_': get_timestamp(13)
}
req = Request(method='GET', url=url, params=params)
res = self.http.send(req)
return res.content
def _req_lixian_torrent(self, u):
"""
:param u: uploaded torrent file
"""
self._load_signatures()
url = 'http://115.com/lixian/'
params = {
'ct': 'lixian',
'ac': 'torrent',
}
data = {
'pickcode': u.pickcode,
'sha1': u.sha,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
return res.content
else:
msg = res.content.get('error_msg')
self.logger.error(msg)
raise RequestFailure('Failed to open torrent.')
def _req_lixian_add_task_bt(self, t):
self._load_signatures()
url = 'http://115.com/lixian/'
params = {'ct': 'lixian', 'ac': 'add_task_bt'}
_wanted = []
for i, b in enumerate(t.files):
if b.selected:
_wanted.append(str(i))
wanted = ','.join(_wanted)
data = {
'info_hash': t.info_hash,
'wanted': wanted,
'savepath': t.name,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
return True
else:
msg = res.content.get('error_msg')
self.logger.error(msg)
raise RequestFailure('Failed to create new task.')
def _req_lixian_add_task_url(self, target_url):
self._load_signatures()
url = 'http://115.com/lixian/'
params = {'ct': 'lixian', 'ac': 'add_task_url'}
data = {
'url': target_url,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
return True
else:
msg = res.content.get('error_msg')
self.logger.error(msg)
raise RequestFailure('Failed to create new task.')
def _req_lixian_task_del(self, t):
self._load_signatures()
url = 'http://115.com/lixian/'
params = {'ct': 'lixian', 'ac': 'task_del'}
data = {
'hash[0]': t.info_hash,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
return True
else:
raise RequestFailure('Failed to delete the task.')
def _req_file_userfile(self):
url = 'http://115.com/'
params = {
'ct': 'file',
'ac': 'userfile',
'is_wl_tpl': 1,
}
req = Request(method='GET', url=url, params=params)
self.http.send(req, expect_json=False, ignore_content=True)
def _req_aps_natsort_files(self, cid, offset, limit, o='file_name',
asc=1, aid=1, show_dir=1, code=None, scid=None,
snap=0, natsort=1, source=None, type=0,
format='json', star=None, is_share=None):
"""
When :meth:`.API._req_files` is called with `o='filename'` and
`natsort=1`, API access will fail
and :meth:`.API._req_aps_natsort_files` is subsequently called with
the same kwargs. Refer to the implementation in
:meth:`.Directory.list`
"""
params = locals()
del params['self']
req = Request(method='GET', url=self.aps_natsort_url, params=params)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_files(self, cid, offset, limit, o='user_ptime', asc=1, aid=1,
show_dir=1, code=None, scid=None, snap=0, natsort=1,
source=None, type=0, format='json', star=None,
is_share=None):
"""
:param int type: type of files to be displayed
* '' (empty string): marked
* None: all
* 0: all
* 1: documents
* 2: images
* 3: music
* 4: video
* 5: zipped
* 6: applications
* 99: files only
"""
params = locals()
del params['self']
req = Request(method='GET', url=self.web_api_url, params=params)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_files_search(self, offset, limit, search_value, aid=-1,
date=None, pick_code=None, source=None, type=0,
format='json'):
params = locals()
del params['self']
url = self.web_api_url + '/search'
req = Request(method='GET', url=url, params=params)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_files_edit(self, fid, file_name=None, is_mark=0):
"""Edit a file or directory"""
url = self.web_api_url + '/edit'
data = locals()
del data['self']
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return True
else:
raise RequestFailure('Failed to access files API.')
def _req_files_add(self, pid, cname):
"""
Add a directory
:param str pid: parent directory id
:param str cname: directory name
"""
url = self.web_api_url + '/add'
data = locals()
del data['self']
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_files_move(self, pid, fids):
"""
Move files or directories
:param str pid: destination directory id
:param list fids: a list of ids of files or directories to be moved
"""
url = self.web_api_url + '/move'
data = {}
data['pid'] = pid
for i, fid in enumerate(fids):
data['fid[%d]' % i] = fid
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return True
else:
raise RequestFailure('Failed to access files API.')
def _req_file(self, file_id):
url = self.web_api_url + '/file'
data = {'file_id': file_id}
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_directory(self, cid):
"""Return name and pid of by cid"""
res = self._req_files(cid=cid, offset=0, limit=1, show_dir=1)
path = res['path']
count = res['count']
for d in path:
if str(d['cid']) == str(cid):
res = {
'cid': d['cid'],
'name': d['name'],
'pid': d['pid'],
'count': count,
}
return res
else:
raise RequestFailure('No directory found.')
def _req_files_download_url(self, pickcode, proapi=False):
if '_115_curtime' not in self.cookies:
self._req_file_userfile()
if not proapi:
url = self.web_api_url + '/download'
params = {'pickcode': pickcode, '_': get_timestamp(13)}
else:
url = self.proapi_url
params = {'pickcode': pickcode, 'method': 'get_file_url'}
headers = {
'Referer': self.referer_url,
}
req = Request(method='GET', url=url, params=params,
headers=headers)
res = self.http.send(req)
if res.state:
if not proapi:
return res.content['file_url']
else:
fid = list(res.content['data'])[0]
return res.content['data'][fid]['url']['url']
else:
raise RequestFailure('Failed to get download URL.')
def _req_get_storage_info(self):
url = 'http://115.com'
params = {
'ct': 'ajax',
'ac': 'get_storage_info',
'_': get_timestamp(13),
}
req = Request(method='GET', url=url, params=params)
res = self.http.send(req)
return res.content['1']
def _req_upload(self, filename, directory):
"""Raw request to upload a file ``filename``"""
self._upload_url = self._load_upload_url()
self.http.get('http://upload.115.com/crossdomain.xml')
b = os.path.basename(filename)
target = 'U_1_' + str(directory.cid)
files = {
'Filename': ('', quote(b), ''),
'target': ('', target, ''),
'Filedata': (quote(b), open(filename, 'rb'), ''),
'Upload': ('', 'Submit Query', ''),
}
req = Request(method='POST', url=self._upload_url, files=files)
res = self.http.send(req)
if res.state:
return res.content
else:
msg = None
if res.content['code'] == 990002:
msg = 'Invalid parameter.'
elif res.content['code'] == 1001:
msg = 'Torrent upload failed. Please try again later.'
raise RequestFailure(msg)
def _req_rb_delete(self, fcid, pid):
url = 'http://web.api.115.com/rb/delete'
data = {
'pid': pid,
'fid[0]': fcid,
}
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return True
else:
msg = 'Failed to delete this file or directory.'
if 'errno' in res.content:
if res.content['errno'] == 990005:
raise JobError()
self.logger.error(res.content['error'])
raise APIError(msg)
def _req_get_user_aq(self):
url = 'http://my.115.com/'
data = {
'ct': 'ajax',
'ac': 'get_user_aq'
}
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return res.content
def _load_signatures(self, force=True):
if not self._signatures or force:
self._req_offline_space()
def _load_tasks(self, count, page=1, tasks=None):
if tasks is None:
tasks = []
req_tasks = self._req_lixian_task_lists(page)
loaded_tasks = []
if req_tasks is not None:
loaded_tasks = [
_instantiate_task(self, t) for t in req_tasks[:count]
]
if count <= self.num_tasks_per_page or req_tasks is None:
return tasks + loaded_tasks
else:
return self._load_tasks(count - self.num_tasks_per_page,
page + 1, tasks + loaded_tasks)
def _load_directory(self, cid):
kwargs = self._req_directory(cid)
if str(kwargs['pid']) != str(cid):
return Directory(api=self, **kwargs)
def _load_root_directory(self):
"""
Load root directory, which has a cid of 0
"""
kwargs = self._req_directory(0)
self._root_directory = Directory(api=self, **kwargs)
def _load_torrents_directory(self):
"""
Load torrents directory
If it does not exist yet, this request will cause the system to create
one
"""
r = self._req_lixian_get_id(torrent=True)
self._torrents_directory = self._load_directory(r['cid'])
def _load_downloads_directory(self):
"""
Load downloads directory
If it does not exist yet, this request will cause the system to create
one
"""
r = self._req_lixian_get_id(torrent=False)
self._downloads_directory = self._load_directory(r['cid'])
def _load_upload_url(self):
res = self._parse_src_js_var('upload_config_h5')
return res['url']
def _load_torrent(self, u):
res = self._req_lixian_torrent(u)
return _instantiate_torrent(self, res)
def _parse_src_js_var(self, variable):
"""Parse JavaScript variables in the source page"""
src_url = 'http://115.com'
r = self.http.get(src_url)
soup = BeautifulSoup(r.content)
scripts = [script.text for script in soup.find_all('script')]
text = '\n'.join(scripts)
pattern = "%s\s*=\s*(.*);" % (variable.upper())
m = re.search(pattern, text)
if not m:
msg = 'Cannot parse source JavaScript for %s.' % variable
raise APIError(msg)
return json.loads(m.group(1).strip())
def _get_username(self):
return unquote(self.cookies.get('OOFL'))
class Base(object):
def __repr__(self):
try:
u = self.__str__()
except (UnicodeEncodeError, UnicodeDecodeError):
u = '[Bad Unicode data]'
repr_type = type(u)
return repr_type('<%s: %s>' % (self.__class__.__name__, u))
def __str__(self):
if hasattr(self, '__unicode__'):
if PY3:
return self.__unicode__()
else:
return unicode(self).encode('utf-8')
return txt_type('%s object' % self.__class__.__name__)
class Passport(Base):
"""
Passport for user authentication
:ivar str username: username
:ivar str password: user password
:ivar dict form: a dictionary of POST data to login
:ivar int user_id: user ID of the authenticated user
:ivar dict data: data returned upon login
"""
def __init__(self, username, password):
self.username = username
self.password = password
self.form = self._form()
self.data = None
def _form(self):
vcode = self._vcode()
f = {
'login[ssoent]': 'A1',
'login[version]': '2.0',
'login[ssoext]': vcode,
'login[ssoln]': self.username,
'login[ssopw]': self._ssopw(vcode),
'login[ssovcode]': vcode,
'login[safe]': '1',
'login[time]': '0',
'login[safe_login]': '0',
'goto': 'http://115.com/',
}
return f
def _vcode(self):
s = '%.6f' % time.time()
whole, frac = map(int, s.split('.'))
res = '%.8x%.5x' % (whole, frac)
return res
def _ssopw(self, vcode):
p = sha1(utf8_encode(self.password)).hexdigest()
u = sha1(utf8_encode(self.username)).hexdigest()
v = vcode.upper()
pu = sha1(utf8_encode(p + u)).hexdigest()
return sha1(utf8_encode(pu + v)).hexdigest()
def __unicode__(self):
return self.username
class BaseFile(Base):
def __init__(self, api, cid, name):
"""
:param API api: associated API object
:param str cid: directory id
* For file: this represents the directory it belongs to;
* For directory: this represents itself
:param str name: originally named `n`
NOTICE
cid, fid and pid are in string format at this time
"""
self.api = api
self.cid = cid
self.name = name
self._deleted = False
def delete(self):
"""
Delete this file or directory
:return: whether deletion is successful
:raise: :class:`.APIError` if this file or directory is already deleted
"""
fcid = None
pid = None
if isinstance(self, File):
fcid = self.fid
pid = self.cid
elif isinstance(self, Directory):
fcid = self.cid
pid = self.pid
else:
raise APIError('Invalid BaseFile instance.')
if not self._deleted:
if self.api._req_rb_delete(fcid, pid):
self._deleted = True
return True
else:
raise APIError('This file or directory is already deleted.')
def move(self, directory):
"""
Move this file or directory to the destination directory
:param directory: destination directory
:return: whether the action is successful
:raise: :class:`.APIError` if something bad happened
"""
self.api.move([self], directory)
def edit(self, name, mark=False):
"""
Edit this file or directory
:param str name: new name for this entry
:param bool mark: whether to bookmark this entry
"""
self.api.edit(self, name, mark)
@property
def is_deleted(self):
"""Whether this file or directory is deleted"""
return self._deleted
def __eq__(self, other):
if isinstance(self, File):
if isinstance(other, File):
return self.fid == other.fid
elif isinstance(self, Directory):
if isinstance(other, Directory):
return self.cid == other.cid
return False
def __ne__(self, other):
return not self.__eq__(other)
def __unicode__(self):
return self.name
class File(BaseFile):
"""
File in a directory
:ivar int fid: file id
:ivar str cid: cid of the current directory
:ivar int size: size in bytes
:ivar str size_human: human-readable size
:ivar str file_type: originally named `ico`
:ivar str sha: SHA1 hash
:ivar datetime.datetime date_created: in "%Y-%m-%d %H:%M:%S" format,
originally named `t`
:ivar str thumbnail: thumbnail URL, originally named `u`
:ivar str pickcode: originally named `pc`
"""
def __init__(self, api, fid, cid, name, size, file_type, sha,
date_created, thumbnail, pickcode, *args, **kwargs):
super(File, self).__init__(api, cid, name)
self.fid = fid
self.size = size
self.size_human = humanize.naturalsize(size, binary=True)
self.file_type = file_type
self.sha = sha
self.date_created = date_created
self.thumbnail = thumbnail
self.pickcode = pickcode
self._directory = None
self._download_url = None
@property
def directory(self):
"""Directory that holds this file"""
if self._directory is None:
self._directory = self.api._load_directory(self.cid)
return self._directory
def get_download_url(self, proapi=False):
"""
Get this file's download URL
:param bool proapi: whether to use pro API
"""
if self._download_url is None:
self._download_url = \
self.api._req_files_download_url(self.pickcode, proapi)
return self._download_url
@property
def url(self):
"""Alias for :meth:`.File.get_download_url` with `proapi=False`"""
return self.get_download_url()
def download(self, path=None, show_progress=True, resume=True,
auto_retry=True, proapi=False):
"""Download this file"""
self.api.download(self, path, show_progress, resume, auto_retry,
proapi)
@property
def is_torrent(self):
"""Whether the file is a torrent"""
return self.file_type == 'torrent'
def open_torrent(self):
"""
Open the torrent (if it is a torrent)
:return: opened torrent
:rtype: :class:`.Torrent`
"""
if self.is_torrent:
return self.api._load_torrent(self)
def reload(self):
"""
Reload file info and metadata
* name
* sha
* pickcode
"""
res = self.api._req_file(self.fid)
data = res['data'][0]
self.name = data['file_name']
self.sha = data['sha1']
self.pickcode = data['pick_code']
class Directory(BaseFile):
"""
:ivar str cid: cid of this directory
:ivar str pid: represents the parent directory it belongs to
:ivar int count: number of entries in this directory
:ivar datetime.datetime date_created: integer, originally named `t`
:ivar str pickcode: string, originally named `pc`
"""
max_entries_per_load = 24 # values smaller than 24 may cause abnormal results
def __init__(self, api, cid, name, pid, count=-1,
date_created=None, pickcode=None, is_root=False,
*args, **kwargs):
super(Directory, self).__init__(api, cid, name)
self.pid = pid
self._count = count
if date_created is not None:
self.date_created = date_created
self.pickcode = pickcode
self._parent = None
@property
def is_root(self):
"""Whether this directory is the root directory"""
return int(self.cid) == 0
@property
def parent(self):
"""Parent directory that holds this directory"""
if self._parent is None:
if self.pid is not None:
self._parent = self.api._load_directory(self.pid)
return self._parent
@property
def count(self):
"""Number of entries in this directory"""
if self._count == -1:
self.reload()
return self._count
def reload(self):
"""
Reload directory info and metadata
* `name`
* `pid`
* `count`
"""
r = self.api._req_directory(self.cid)
self.pid = r['pid']
self.name = r['name']
self._count = r['count']
def _load_entries(self, func, count, page=1, entries=None, **kwargs):
"""
Load entries
:param function func: function (:meth:`.API._req_files` or
:meth:`.API._req_search`) that returns entries
:param int count: number of entries to load. This value should never
be greater than self.count
:param int page: page number (starting from 1)
"""
if entries is None:
entries = []
res = \
func(offset=(page - 1) * self.max_entries_per_load,
limit=self.max_entries_per_load,
**kwargs)
loaded_entries = [
entry for entry in res['data'][:count]
]
#total_count = res['count']
total_count = self.count
# count should never be greater than total_count
if count > total_count:
count = total_count
if count <= self.max_entries_per_load:
return entries + loaded_entries
else:
cur_count = count - self.max_entries_per_load
return self._load_entries(
func=func, count=cur_count, page=page + 1,
entries=entries + loaded_entries, **kwargs)
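# Pagination sketch: with max_entries_per_load = 24, a request for count = 60
# entries results in three successive calls to `func` with offsets 0, 24 and 48;
# the partial results accumulate in `entries` until at most max_entries_per_load
# entries remain to be fetched, at which point the combined list is returned.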
def list(self, count=30, order='user_ptime', asc=False, show_dir=True,
natsort=True):
"""
List directory contents
:param int count: number of entries to be listed
:param str order: order of entries, originally named `o`. This value
may be one of `user_ptime` (default), `file_size` and `file_name`
:param bool asc: whether in ascending order
:param bool show_dir: whether to show directories
:param bool natsort: whether to use natural sort
Return a list of :class:`.File` or :class:`.Directory` objects
"""
if self.cid is None:
return False
self.reload()
kwargs = {}
# `cid` is the only required argument
kwargs['cid'] = self.cid
kwargs['asc'] = 1 if asc is True else 0
kwargs['show_dir'] = 1 if show_dir is True else 0
kwargs['natsort'] = 1 if natsort is True else 0
kwargs['o'] = order
# When the downloads directory exists along with its parent (the receiver
# directory), the parent's count does not include the downloads directory.
# The same applies one level up: the root's count does not include the
# receiver directory.
# The following code fixes this behavior so that a directory's count
# correctly reflects the actual number of entries in it.
# A side-effect is that this code may ensure the downloads directory
# exists, causing the system to create the receiver and downloads
# directories if they do not exist.
if self.is_root or self == self.api.receiver_directory:
self._count += 1
if self.count <= count:
# count should never be greater than self.count
count = self.count
try:
entries = self._load_entries(func=self.api._req_files,
count=count, page=1, **kwargs)
# When natsort=1 and order='file_name', API access will fail
except RequestFailure as e:
if natsort is True and order == 'file_name':
entries = \
self._load_entries(func=self.api._req_aps_natsort_files,
count=count, page=1, **kwargs)
else:
raise e
res = []
for entry in entries:
if 'pid' in entry:
res.append(_instantiate_directory(self.api, entry))
else:
res.append(_instantiate_file(self.api, entry))
return res
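# Usage sketch (illustrative only; assumes an authenticated API instance `api`
# exposing the `downloads_directory` attribute referenced elsewhere in this module):
#     d = api.downloads_directory
#     for entry in d.list(count=10):   # File and Directory objects
#         print(entry.name)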
def mkdir(self, name):
"""
Create a new directory in this directory
"""
self.api.mkdir(self, name)
class Task(Base):
"""
BitTorrent or URL task
:ivar datetime.datetime add_time: added time
:ivar str cid: associated directory id, if any. For a directory task (
e.g. BT task), this is its associated directory's cid. For a file
task (e.g. HTTP url task), this is the cid of the downloads directory.
This value may be None if the task is failed and has no corresponding
directory
:ivar str file_id: equivalent to `cid` of :class:`.Directory`. This value
may be None if the task is failed and has no corresponding directory
:ivar str info_hash: hashed value
:ivar datetime.datetime last_update: last updated time
:ivar int left_time: time left
:ivar int move: moving state
* 0: not transferred
* 1: transferred
* 2: partially transferred
:ivar str name: name of this task
:ivar int peers: number of peers
:ivar int percent_done: <=100, originally named `percentDone`
:ivar int rate_download: download rate (B/s), originally named
`rateDownload`
:ivar int size: size of task
:ivar str size_human: human-readable size
:ivar int status: status code
* -1: failed
* 1: downloading
* 2: downloaded
* 4: searching resources
"""
def __init__(self, api, add_time, file_id, info_hash, last_update,
left_time, move, name, peers, percent_done, rate_download,
size, status, cid, pid, url, *args, **kwargs):
self.api = api
self.cid = cid
self.name = name
self.add_time = add_time
self.file_id = file_id
self.info_hash = info_hash
self.last_update = last_update
self.left_time = left_time
self.move = move
self.peers = peers
self.percent_done = percent_done
self.rate_download = rate_download
self.size = size
self.size_human = humanize.naturalsize(size, binary=True)
self.status = status
self.url = url
self._directory = None
self._deleted = False
self._count = -1
@property
def is_directory(self):
"""
:return: whether this task is associated with a directory.
:rtype: bool
"""
if self.cid is None:
msg = 'Cannot determine whether this task is a directory.'
if not self.is_transferred:
msg += ' This task has not been transferred.'
raise TaskError(msg)
return self.api.downloads_directory.cid != self.cid
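# In other words, a task counts as a "directory task" when its cid differs from
# the downloads directory's cid, i.e. the download produced its own folder
# (typical for BitTorrent tasks) rather than a single file placed directly in
# the downloads directory.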
@property
def is_bt(self):
"""Alias of `is_directory`"""
return self.is_directory
def delete(self):
"""
Delete task (does not influence its corresponding directory)
:return: whether deletion is successful
:raise: :class:`.TaskError` if the task is already deleted
"""
if not self._deleted:
if self.api._req_lixian_task_del(self):
self._deleted = True
return True
raise TaskError('This task is already deleted.')
@property
def is_deleted(self):
"""
:return: whether this task is deleted
:rtype: bool
"""
return self._deleted
@property
def is_transferred(self):
"""
:return: whether this tasks has been transferred
:rtype: bool
"""
return self.move == 1
@property
def status_human(self):
"""
Human readable status
:return:
* `DOWNLOADING`: the task is downloading files
* `BEING TRANSFERRED`: the task is being transferred
* `TRANSFERRED`: the task has been transferred to downloads \
directory
* `SEARCHING RESOURCES`: the task is searching resources
* `FAILED`: the task is failed
* `DELETED`: the task is deleted
* `UNKNOWN STATUS`
:rtype: str
"""
res = None
if self._deleted:
return 'DELETED'
if self.status == 1:
res = 'DOWNLOADING'
elif self.status == 2:
if self.move == 0:
res = 'BEING TRANSFERRED'
elif self.move == 1:
res = 'TRANSFERRED'
elif self.move == 2:
res = 'PARTIALLY TRANSFERRED'
elif self.status == 4:
res = 'SEARCHING RESOURCES'
elif self.status == -1:
res = 'FAILED'
if res is not None:
return res
return 'UNKNOWN STATUS'
@property
def directory(self):
"""Associated directory, if any, with this task"""
if not self.is_directory:
msg = 'This task is a file task with no associated directory.'
raise TaskError(msg)
if self._directory is None:
if self.is_transferred:
self._directory = self.api._load_directory(self.cid)
if self._directory is None:
msg = 'No directory associated with this task: Task is %s.' % \
self.status_human.lower()
raise TaskError(msg)
return self._directory
@property
def parent(self):
"""Parent directory of the associated directory"""
return self.directory.parent
@property
def count(self):
"""Number of entries in the associated directory"""
return self.directory.count
def list(self, count=30, order='user_ptime', asc=False, show_dir=True,
natsort=True):
"""
List files of the associated directory to this task.
:param int count: number of entries to be listed
:param str order: originally named `o`
:param bool asc: whether in ascending order
:param bool show_dir: whether to show directories
"""
return self.directory.list(count, order, asc, show_dir, natsort)
def __unicode__(self):
return self.name
class Torrent(Base):
"""
Opened torrent before becoming a task
:ivar api: associated API object
:ivar str name: task name, originally named `torrent_name`
:ivar int size: task size, originally named `torrent_size`
:ivar str info_hash: hashed value
:ivar int file_count: number of files included
:ivar list files: files included (list of :class:`.TorrentFile`),
originally named `torrent_filelist_web`
"""
def __init__(self, api, name, size, info_hash, file_count, files=None,
*args, **kwargs):
self.api = api
self.name = name
self.size = size
self.size_human = humanize.naturalsize(size, binary=True)
self.info_hash = info_hash
self.file_count = file_count
self.files = files
self.submitted = False
def submit(self):
"""Submit this torrent and create a new task"""
if self.api._req_lixian_add_task_bt(self):
self.submitted = True
return True
return False
@property
def selected_files(self):
"""List of selected :class:`.TorrentFile` objects of this torrent"""
return [f for f in self.files if f.selected]
@property
def unselected_files(self):
"""List of unselected :class:`.TorrentFile` objects of this torrent"""
return [f for f in self.files if not f.selected]
def __unicode__(self):
return self.name
class TorrentFile(Base):
"""
File in the torrent file list
:param torrent: the torrent that holds this file
:type torrent: :class:`.Torrent`
:param str path: file path in the torrent
:param int size: file size
:param bool selected: whether this file is selected
"""
def __init__(self, torrent, path, size, selected, *args, **kwargs):
self.torrent = torrent
self.path = path
self.size = size
self.size_human = humanize.naturalsize(size, binary=True)
self.selected = selected
def select(self):
"""Select this file"""
self.selected = True
def unselect(self):
"""Unselect this file"""
self.selected = False
def __unicode__(self):
return '[%s] %s' % ('*' if self.selected else ' ', self.path)
def _instantiate_task(api, kwargs):
"""Create a Task object from raw kwargs"""
file_id = kwargs['file_id']
kwargs['file_id'] = file_id if str(file_id).strip() else None
kwargs['cid'] = kwargs['file_id'] or None
kwargs['rate_download'] = kwargs['rateDownload']
kwargs['percent_done'] = kwargs['percentDone']
kwargs['add_time'] = get_utcdatetime(kwargs['add_time'])
kwargs['last_update'] = get_utcdatetime(kwargs['last_update'])
is_transferred = (kwargs['status'] == 2 and kwargs['move'] == 1)
if is_transferred:
kwargs['pid'] = api.downloads_directory.cid
else:
kwargs['pid'] = None
del kwargs['rateDownload']
del kwargs['percentDone']
if 'url' in kwargs:
if not kwargs['url']:
kwargs['url'] = None
else:
kwargs['url'] = None
task = Task(api, **kwargs)
if is_transferred:
task._parent = api.downloads_directory
return task
def _instantiate_file(api, kwargs):
kwargs['file_type'] = kwargs['ico']
kwargs['date_created'] = string_to_datetime(kwargs['t'])
kwargs['pickcode'] = kwargs['pc']
kwargs['name'] = kwargs['n']
kwargs['thumbnail'] = kwargs.get('u')
kwargs['size'] = kwargs['s']
del kwargs['ico']
del kwargs['t']
del kwargs['pc']
del kwargs['s']
if 'u' in kwargs:
del kwargs['u']
return File(api, **kwargs)
def _instantiate_directory(api, kwargs):
kwargs['name'] = kwargs['n']
kwargs['date_created'] = get_utcdatetime(float(kwargs['t']))
kwargs['pickcode'] = kwargs.get('pc')
return Directory(api, **kwargs)
def _instantiate_uploaded_file(api, kwargs):
kwargs['fid'] = kwargs['file_id']
kwargs['name'] = kwargs['file_name']
kwargs['pickcode'] = kwargs['pick_code']
kwargs['size'] = kwargs['file_size']
kwargs['sha'] = kwargs['sha1']
kwargs['date_created'] = get_utcdatetime(kwargs['file_ptime'])
kwargs['thumbnail'] = None
_, ft = os.path.splitext(kwargs['name'])
kwargs['file_type'] = ft[1:]
return File(api, **kwargs)
def _instantiate_torrent(api, kwargs):
kwargs['size'] = kwargs['file_size']
kwargs['name'] = kwargs['torrent_name']
file_list = kwargs['torrent_filelist_web']
del kwargs['file_size']
del kwargs['torrent_name']
del kwargs['torrent_filelist_web']
torrent = Torrent(api, **kwargs)
torrent.files = [_instantiate_torrent_file(torrent, f) for f in file_list]
return torrent
def _instantiate_torrent_file(torrent, kwargs):
kwargs['selected'] = True if kwargs['wanted'] == 1 else False
del kwargs['wanted']
return TorrentFile(torrent, **kwargs)
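# The _instantiate_* helpers above translate the raw JSON dictionaries returned
# by the 115 API into the model objects defined earlier, renaming the API's
# short keys (`n`, `t`, `pc`, `s`, `u`, `ico`, ...) to readable attribute names
# and converting timestamps with get_utcdatetime()/string_to_datetime().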
class APIError(Exception):
"""General error related to API"""
def __init__(self, *args, **kwargs):
content = kwargs.pop('content', None)
self.content = content
super(APIError, self).__init__(*args, **kwargs)
class TaskError(APIError):
"""Task has unstable status or no directory operation"""
pass
class AuthenticationError(APIError):
"""Authentication error"""
pass
class InvalidAPIAccess(APIError):
"""Invalid and forbidden API access"""
pass
class RequestFailure(APIError):
"""Request failure"""
pass
class JobError(APIError):
"""Job running error (request multiple similar jobs simultaneously)"""
def __init__(self, *args, **kwargs):
content = kwargs.pop('content', None)
self.content = content
if not args:
msg = 'Your account has a similar job running. Try again later.'
args = (msg,)
super(JobError, self).__init__(*args, **kwargs)
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/u115/api.py | api.py |
# -*- coding: utf-8 -*-
# flake8: noqa
__version__ = '0.7.6'
from u115.api import (API, Passport, RequestHandler, Request, Response,
RequestsLWPCookieJar, RequestsMozillaCookieJar,
Torrent, Task, TorrentFile, File, Directory,
APIError, TaskError, AuthenticationError,
InvalidAPIAccess, RequestFailure, JobError)
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/u115/__init__.py | __init__.py |
# Example Package
This is a simple example package. You can use
[Github-flavored Markdown](https://guides.github.com/features/mastering-markdown/)
to write your content. | 11601160 | /11601160-0.0.1.tar.gz/11601160-0.0.1/README.md | README.md |
import setuptools
with open("README.md", "r") as fh:
long_description = fh.read()
setuptools.setup(
name="11601160",
version="0.0.1",
author="Example Author",
author_email="author@example.com",
description="A small example package",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/pypa/sampleproject",
packages=setuptools.find_packages(),
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
],
data_files = [ ('', ['focusIT/file3.txt']) ],
) | 11601160 | /11601160-0.0.1.tar.gz/11601160-0.0.1/setup.py | setup.py |
import pkg_resources
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # set before importing TensorFlow so the log level takes effect
import tensorflow as tf
tf.compat.v1.disable_eager_execution()  # tf.compat.v1.Session needs graph mode when running under TensorFlow 2.x
resource_package = "focusIT"
def print_res():
print(os.path.abspath("file3.txt"))
print("Hey")
file1 = open(os.path.abspath("file3.txt"), "r")
sess = tf.compat.v1.Session()
a = tf.constant(int(file1.readline(), 10))
b = tf.constant(int(file1.readline(), 10))
print(sess.run(a + b))
def print_the_other():
sess = tf.compat.v1.Session()
a = tf.constant(2)
b = tf.constant(3)
print(sess.run(a + b))
def print_it():
print("hi")
template = pkg_resources.resource_string(resource_package, 'file3.txt')
print(template)
| 11601160 | /11601160-0.0.1.tar.gz/11601160-0.0.1/focusIT/Foc.py | Foc.py |
from setuptools import setup
setup(
name='11Team_AssistantBot',
version='1.11',
description='AssistantBot include adressbook and notes',
url='https://github.com/osandrey/GoIt_Team_11_Project/tree/testmyadressbook',
author='Dima, Inna, Serhiy, Andrey',
author_email='dima63475@gmail.com',
license='UA',
packages=["11Team_AssistantBot"],
install_requires=[
# functools, subprocess and typing are in the standard library and are not PyPI dependencies
"color-it",
"prompt_toolkit",
"prettytable",
],
entry_points={'console_scripts': ['assist = 11Team_AssistantBot.main:main']}  # console_scripts need the module:function form
) | 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/setup.py | setup.py |
import functools
import os
import pickle
import subprocess
import re
from collections import UserDict
from typing import Callable
from colorit import *
from prompt_toolkit import prompt
from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.shortcuts import yes_no_dialog
from greeting import *
from help import *
colorit.init_colorit()
class MyException(Exception):
pass
class Notepad(UserDict):
def __getitem__(self, title):
if not title in self.data.keys():
raise MyException(color("This article isn't in the Notepad",Colors.red))
note = self.data[title]
return note
def add_note(self, note) -> str:
self.data.update({note.title.value:note})
return color('Done!',Colors.blue)
def delete_note(self, title):
try:
self.data.pop(title)
return color(f"{title} was removed",Colors.purple)
except KeyError:
return color("This note isn't in the Notepad",Colors.blue)
def get_notes(self, file_name):
with open(file_name, 'ab+') as fh:
fh.seek(0)
try:
self.data = pickle.load(fh)
except EOFError:
pass
def show_notes_titles(self):
res = "\n".join([note for note in notes])
return color(res,Colors.orange)
def write_notes(self, file_name):
with open(file_name, "wb") as fh:
pickle.dump(self, fh)
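# Persistence is handled with pickle: write_notes() dumps the whole Notepad to
# the given file, while get_notes() opens it in 'ab+' mode (creating it when
# missing), seeks back to the start and loads the saved data, silently keeping
# an empty notepad when the file is new or empty (EOFError).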
class Field:
def __init__(self, value):
self.__value = None
self.value = value
class NoteTag(Field):
pass
class NoteTitle(Field):
@property
def value(self):
return self.__value
@value.setter
def value(self, title):
if len(title) == 0:
raise ValueError(color("The title wasn't added. It should have at least 1 character.",Colors.red))
self.__value = title
class NoteBody(Field):
pass
class Note:
def __init__(self, title: NoteTitle, body: NoteBody, tags: list[NoteTag]=None) -> None:
self.title = title
self.body = body if body else ''
self.tags = tags if tags else ''
def edit_tags(self, tags: list[NoteTag]):
self.tags = tags
def edit_title(self, title: NoteTitle):
self.title = title
def edit_body(self, body: NoteBody):
self.body = body
def show_note(self):
return '\n'.join([f"Title: {self.title.value}", f"Body: {self.body.value}", f"Tags: {self.show_tags()}"])
def show_tags(self):
if self.tags == []:
return color("Tags: Empty", Colors.red)
return ', '.join([tag.value for tag in self.tags])
def decorator_input(func: Callable) -> Callable:
@functools.wraps(func)
def wrapper(*words):
try:
return func(*words)
except KeyError as err:
return err
except IndexError:
return color("You didn't enter the title or keywords",Colors.red)
except TypeError:
return color("Sorry, this command doesn't exist",Colors.red)
except Exception as err:
return err
return wrapper
@decorator_input
def add_note(*args) -> str:
title = NoteTitle(input(color("Enter the title: ",Colors.yellow)))
if title.value in notes.data.keys():
raise MyException(color('This title already exists',Colors.red))
body = NoteBody(input(color("Enter the note: ",Colors.yellow)))
tags = input(color("Enter tags (separate them with ',') or press Enter to skip this step: ",Colors.yellow))
tags = [NoteTag(t.strip()) for t in tags.split(',')]
note = Note(title, body, tags)
return notes.add_note(note)
@decorator_input
def delete_note(*args: str) -> str:
return notes.delete_note(args[0])
@decorator_input
def edit_note(*args) -> str:
title = args[0]
if title in notes.data.keys():
note = notes.data.get(title)
user_title = input(color("Enter new title or press 'enter' to skip this step: ",Colors.yellow))
if user_title:
if not user_title in notes.data.keys():
notes.data[user_title] = notes.data.pop(title)
note.edit_title(NoteTitle(user_title))
else:
raise MyException(color('This title already exists.',Colors.red))
try:
body = edit(note.body.value, 'body')
if body:
body = NoteBody(body)
note.edit_body(body)
except Exception as err:
print(err)
try:
tags = edit(note.show_tags(), 'tags')
if tags:
tags = [NoteTag(t.strip()) for t in tags.split(',')]
note.edit_tags(tags)
except Exception as err:
print(err)
return "Done!"
@decorator_input
def edit(text: str, part) -> str:
user_input = input(color(f"Enter any letter if you want to edit {part} or press 'enter' to skip this step. ",Colors.green))
if user_input:
with open('edit_note.txt', 'w') as fh:
fh.write(text)
run_app()
mes = ''
if part == 'tags':
mes = color("Separate tags with ','",Colors.green)
input(color(f'Press enter or any letter if you finished editing. Please, make sure you closed the text editor. {mes}',Colors.green))
with open('edit_note.txt', 'r') as fh:
edited_text = fh.read()
return edited_text
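# edit() round-trips text through 'edit_note.txt': the current value is written
# to the file, the system text editor is opened on it (run_app), and the
# possibly modified contents are read back once the user confirms editing is done.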
@decorator_input
def find(*args) -> str:
# join the words typed after the command into a search phrase, or prompt for one
args = ' '.join(args) if args and args[0] else input(color("Enter the phrase you want to find: ",Colors.yellow))
notes_list = []
for note in notes.data.values():
if re.search(args, note.body.value) or re.search(args, note.title.value, flags=re.IGNORECASE):
notes_list.append(note.title.value)
if len(notes_list) == 0:
return "No matches"
return '\n'.join([title for title in notes_list])
@decorator_input
def find_tags(*args: str) -> str:
if len(args) == 0:
return "You didn't enter any tags."
all_notes = [note for note in notes.data.values()]
notes_dict = {title:[] for title in notes.data.keys()}
for arg in args:
for note in all_notes:
if arg in [tag.value for tag in note.tags]:
notes_dict[note.title.value].append(arg)
sorted_dict = sorted(notes_dict, key=lambda k: len(notes_dict[k]), reverse=True)
return '\n'.join([f"{key}:{notes_dict[key]}" for key in sorted_dict if len(notes_dict[key]) > 0])
@decorator_input
def goodbye() -> str:
return 'Goodbye!'
def get_command(words: str) -> Callable:
if words[0] == '':
raise KeyError ("This command doesn't exist")
for key in commands_dict.keys():
try:
if re.search(fr'\b{words[0].lower()}\b', str(key)):
func = commands_dict[key]
return func
except (re.error):
break
raise KeyError ("This command doesn't exist")
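# Example: for the input "show mynote", get_command(["show", "mynote"]) matches
# the ('show', 'show_note') key with a word-boundary regex and returns the
# show_note function, which the main loop then calls as show_note("mynote").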
def run_app():
if os.name == "nt": # For Windows
os.startfile('edit_note.txt')
else: # For Mac
subprocess.call(["open", 'edit_note.txt'])
@decorator_input
def show_note(*args:str) -> str:
note = notes.data.get(args[0])
return note.show_note()
notes = Notepad()
notes.get_notes('notes.bin')
commands_dict = {('add', 'add_note'):add_note,
('edit', 'edit_note'):edit_note,
('show', 'show_note'):show_note,
('showall',):notes.show_notes_titles,
('find_tags',):find_tags,
('find',):find,
('delete',):delete_note,
('goodbye','close','exit','quit'):goodbye
}
word_completer = WordCompleter(["add", "add_note", "edit", "edit_note", "show", "show_note", "showall" ,"find_tags", "find", "delete" ,"."])
def main_notes():
print(color(greeting,Colors.green))
print(background(color("WRITE HELP TO SEE ALL COMMANDS ",Colors.yellow),Colors.blue))
print(background(color("WRITE 'exit', 'close' or 'bye' to close the bot ",Colors.blue),Colors.yellow))
while True:
words = prompt('Enter your command: ', completer=word_completer).split(" ")
if words[0].lower() == "help":
print(pers_assistant_help())
try:
func = get_command(words)
except KeyError as error:
print(error)
continue
print(func(*words[1:]))
if func.__name__ == 'goodbye':
exit = yes_no_dialog(
title='EXIT',
text='Do you want to close the bot?').run()
if exit:
notes.write_notes('notes.bin')
print(color("Bye, see you soon...",Colors.yellow))
break
else:
continue
| 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/Notepad.py | Notepad.py |
import pickle
import re
from datetime import datetime, timedelta
from colorit import *
from prompt_toolkit import prompt
from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.shortcuts import yes_no_dialog
from Notepad import *
from addressbook import *
from greeting import greeting
from help import *
from sort import *
colorit.init_colorit()
class Error(Exception):
pass
STOPLIST =[".", "end", "close","exit","bye","good bye"]
users = []
def verificate_email(text:str):
email_re = re.findall(r"\w+@\w+\.\w+", text)
email = "".join(email_re)
if bool(email) == True:
return email
else:
raise Error
def verificate_birthday(text:str):
date_re = re.findall(r"\d{4}\.\d{2}\.\d{2}", text)
date = "".join(date_re)
if bool(date) == True:
return date
else:
raise Error
def verificate_number(num): #Done
flag = True
try:
number = re.sub(r"[\+\(\)A-Za-z\ ]", "", num)
if len(number) == 12:
number = "+" + number
elif len(number) == 10:
number = "+38" + number
elif len(number) == 9:
number = "+380" + number
else:
flag = False
raise Error
except Error:
print(color(f"This number is not correct: {number}",Colors.red))
return number if flag else ""
def add_user(text:str): #Done
text = text.split()
name = text[0]
phone = text[1]
if name in ad:
return "this user already exist"
else:
name = Name(name)
phone = Phone(phone)
rec = Record(name, phone)
ad.add_record(rec)
return color("Done",Colors.blue)
def show_all(nothing= ""): # Done
if len(ad) == 0:
return (color("AddressBook is empty", Colors.red))
else:
number = len(ad)
ad.iterator(number)
return color("Done",Colors.blue)
def add_phone(text:str):
text = text.split()
name = text[0]
phone = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
phone = Phone(phone)
adding.add_phone(phone)
return color("Done",Colors.blue)
def add_email(text:str):
text = text.split()
name = text[0]
email = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
email = Email(email)
adding.add_email(email)
return color("Done",Colors.blue)
def add_birthday(text:str):
text = text.split()
name = text[0]
birthday = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
birthday = Birthday(birthday)
adding.add_birthday(birthday)
return color("Done",Colors.blue)
def add_tags(text:str):
text = text.split()
name = text[0]
tags = " ".join(text[1:])
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
tags = Tags(tags)
adding.add_tags(tags)
return color("Done",Colors.blue)
def add_adress(text:str):
text = text.split()
name = text[0]
adress = " ".join(text[1:])
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adress = Adress(adress)
adding.add_adress(adress)
return color("Done",Colors.blue)
def change_adress(text:str):
text = text.split()
name = text[0]
adress = " ".join(text[1:])
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adress = Adress(adress)
adding.change_adress(adress)
return color("Done",Colors.blue)
def change_phone(text:str):
text = text.split()
name = text[0]
oldphone = text[1]
newphone = text[2]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
# oldphone = Phone(oldphone)
# newphone = Phone(newphone)
adding.change_phone(oldphone,newphone)
return color("Done",Colors.blue)
def change_email(text:str):
text = text.split()
name = text[0]
newemail = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
newemail = Email(newemail)
adding.change_email(newemail)
return color("Done",Colors.blue)
def change_birthday(text:str):
text = text.split()
name = text[0]
birthday = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
birthday = Birthday(birthday)
adding.change_birthday(birthday)
return color("Done",Colors.blue)
def remove_phone(text:str):
text = text.split()
name = text[0]
phone = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
if phone == "-":
adding = ad[name]
adding.remove_phone(phone)
elif name in ad:
adding = ad[name]
phone = Phone(phone)
adding.remove_phone(phone)
return color("Done",Colors.blue)
def remove_email(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adding.remove_email()
return color("Done",Colors.blue)
def remove_birthday(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adding.remove_birthday()
return color("Done",Colors.blue)
def remove_tags(text:str):
text = text.split()
name = text[0]
tags = " ".join(text[1:]).strip()
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adding.remove_tags(tags)
return color("Done",Colors.blue)
def remove_user(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
del ad[name]
return color("Done",Colors.blue)
def remove_adress(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adding.remove_adress()
return color("Done",Colors.blue)
def find_name(text):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
print(ad.find_name(name))
return color("Done",Colors.blue)
def find_tags(text:str):
text = text.split()
tags = text[0:]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
print(ad.find_tags(tags))
return color("Done",Colors.blue)
def find_phone(text:str):
text = text.split()
phone = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
print(ad.find_phone(phone))
return color("Done",Colors.blue)
def when_birthday(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
print(adding.days_to_birthday())
return color("Done",Colors.blue)
def birthdays_within(text:str):
days = int(text.split()[0])
flag = False
current = datetime.now()
future = current + timedelta(days=days)
for name, record in ad.items():
if record.get_birthday() is None:
pass
else:
userdate = datetime.strptime(record.get_birthday(), "%Y.%m.%d").date()
userdate = userdate.replace(year=current.year)
if current.date() < userdate < future.date():
flag = True
print(color(f"\n{name.title()} has birthday {record.get_birthday()}",Colors.yellow))
return color("Done",Colors.blue) if flag == True else color("Nobody have birthday in this period",Colors.red)
def help(tst=""):
instruction = color("""
\nCOMMANDS\n
show all
add user <FirstName_LastName> <phone>
add phone <user> <phone>
add email <user> <email>
add birthday <user> <date>
add tags <user> <tags>
add adress <user> <adress>
change adress <user> <new_adress>
change email <user> <newEmail>
change birthday <user> <newBirthday>
remove phone <user> <phone>
remove email <user> <email>
remove birthday <user>
remove phone <user> <phone>
remove email <user> <email>
remove tags <user> <tags>
remove user <user>
remove adress <user>
find name <name>
find tags <tags>
find phone <phone>
sort directory <path to folder>
when birthday <name>
birthdays within <days-must be integer>
""",Colors.orange)
return instruction
commands = {
"help": pers_assistant_help,
"add phone": add_phone,
"add user": add_user,
"show all": show_all,
"add email": add_email,
"add birthday": add_birthday,
"add tags": add_tags,
"add adress": add_adress,
"change adress": change_adress,
"change phone": change_phone,
"change email": change_email,
"change birthday": change_birthday,
"remove phone": remove_phone,
"remove email" :remove_email,
"remove birthday": remove_birthday,
"remove tags": remove_tags,
"remove user": remove_user,
"remove adress": remove_adress,
"find name": find_name,
"find tags": find_tags,
"find phone": find_phone,
"sort directory": sorting,
"when birthday": when_birthday,
"birthdays within": birthdays_within,
}
word_completer = WordCompleter([comm for comm in commands.keys()])
def parser(userInput:str):
if len(userInput.split()) == 2:
return commands[userInput.strip()], "None"
for command in commands.keys():
if userInput.startswith(str(command)):
text = userInput.replace(command, "")
command = commands[command]
# print(text.strip().split())
return command, text.strip()
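# Example: the input "add phone Bob 0993796625" is matched by prefix against the
# command table, so parser() returns (add_phone, "Bob 0993796625"); two-word
# inputs without arguments, such as "show all", return the string "None" as a
# placeholder argument.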
def main():
print(color(greeting,Colors.green))
print(background(color("WRITE HELP TO SEE ALL COMMANDS ",Colors.yellow),Colors.blue))
print(background(color("WRITE 'exit', 'close' or 'bye' to close the bot ",Colors.blue),Colors.yellow))
ad.load_contacts_from_file()
while True:
# user_input = input(color("Enter your command: ",Colors.green)).strip().lower()
user_input = prompt('Enter your command: ', completer=word_completer)
if user_input in STOPLIST:
exit = yes_no_dialog(
title='EXIT',
text='Do you want to close the bot?').run()
if exit:
print(color("Bye, see you soon...",Colors.yellow))
break
else:
continue
elif user_input.startswith("help"):
print(color(pers_assistant_help(),Colors.green))
continue
elif (len(user_input.split())) == 1:
print(color("Please write full command", Colors.red))
continue
else:
try:
command, text = parser(user_input)
print(command(text))
ad.save_contacts_to_file()
except KeyError:
print(color("You entered a wrong command", Colors.red))
except Error:
print(color("You entered a wrong command (Error)", Colors.red))
except TypeError:
print(color("You entered a wrong command (TypeError)", Colors.red))
except IndexError:
print(color("You entered a wrong command or name", Colors.red))
except ValueError:
print(color("You entered wrong information", Colors.red))
if __name__ == "__main__":
choice = input(color(f"SELECT WHICH BOT YOU WANT TO USE \nEnter 'notes' to use Notes\nEnter 'contacts' to use the AddressBook\nEnter >>> ",Colors.green))
if choice == "notes":
main_notes()
elif choice == "contacts":
main()
else:
user_error = input(color("You chose a wrong option, press Enter to close the bot",Colors.red))
| 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/main.py | main.py |
import pickle
import re
from collections import UserDict
from datetime import datetime
from colorit import *
colorit.init_colorit()
class Error(Exception): # custom exception
pass
# def __str__(self) -> str:
# return "\n \nSomething went wrong\n Try again!\n"
class Field:
def __init__(self, value) -> None:
self._value = value
def __str__(self) -> str:
return self._value
@property
def value(self):
return self._value
@value.setter
def value(self, value):
self._value = value
class Name(Field): # class for the name field
def __str__(self) -> str:
self._value : str
return self._value.title()
class Phone(Field): # class for the phone field
@staticmethod # a static method, not tied to a particular instance
def verify(number): # phone number validation
number = re.sub(r"[\-\(\)\+\ a-zA-Zа-яА-я]", "", number)
try:
if len(number) == 12:
number = "+" + number
elif len(number) == 10:
number = "+38" + number
elif len(number) == 9:
number = "+380" + number
else:
number = False
raise Error
except Error:
print(color("\nYou enter wrong number\n Try again!\n", Colors.red))
if number:
return number
else:
return "-"
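# Normalisation examples of what verify() produces:
#     "099-379-66-25" -> "+380993796625"   (9- or 10-digit numbers get the +380/+38 prefix)
#     "380993796625"  -> "+380993796625"   (12-digit numbers get a leading '+')
# Anything that does not reduce to 9, 10 or 12 digits is reported and replaced with "-".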
def __init__(self, value) -> None:
self._value = Phone.verify(value)
@Field.value.setter
def value(self, value):
self._value =Phone.verify(value)
def __repr__(self) -> str:
return self._value
def __str__(self) -> str:
return self._value
class Birthday:
@staticmethod # a static method, not tied to a particular instance
def verify_date(birth_date: str):
try:
birthdate = re.findall(r"\d{4}\.\d{2}\.\d{2}", birth_date)
if bool(birthdate) == False:
raise Error
except Error:
print(color("\nYou enter wrong date.\nUse this format - YYYY.MM.DD \nTry again!\n", Colors.red))
if birthdate:
return birthdate[0]
else:
return "-"
def __init__(self, birthday) -> None:
self.__birthday = self.verify_date(birthday)
@property
def birthday(self):
return self.__birthday
@birthday.setter
def birthday(self,birthday):
self.__birthday = self.verify_date(birthday)
def __repr__(self) -> str:
return self.__birthday
def __str__(self) -> str:
return self.__birthday
class Email:
@staticmethod # a static method, not tied to a particular instance
def verificate_email(text:str):
email_re = re.findall(r"\w+@\w+\.\w+", text)
email = "".join(email_re)
try:
if bool(email) == True:
return email
else:
raise Error
except Error:
print(color("\nYou enter wrong email\n Try again!\n", Colors.red))
return "-"
def __init__(self, email) -> None:
self.__email = self.verificate_email(email)
@property
def email(self):
return self.__email
@email.setter
def email(self,email):
self.__email = self.verificate_email(email)
def __repr__(self) -> str:
return self.__email
def __str__(self) -> str:
return self.__email
class Adress:
def __init__(self, adress) -> None:
self.__adress = adress
@property
def adress(self):
return self.__adress
@adress.setter
def adress(self,adress):
self.__adress = adress
def __repr__(self) -> str:
return self.__adress
def __str__(self) -> str:
return self.__adress
class Tags:
def __init__(self, tags) -> None:
self.__tags = tags
@property
def tags(self):
return self.__tags
@tags.setter
def tags(self,tags):
self.__tags = tags
def __repr__(self) -> str:
return self.__tags
def __str__(self) -> str:
return self.__tags
class Record: # class that stores a contact's information
def __init__ (self, name : Name, phone: Phone = None, birthday: Birthday = None, email: Email = None, adress: Adress = None, tags :Tags = None):
self.name = name
self.phone = phone
self.birthday = birthday
self.email = email
self.adress = adress
self.tags = []
self.phones = []
if phone:
self.phones.append(phone)
def get_birthday(self):
if self.birthday is None:
return None
else:
return str(self.birthday)
def get_tags(self):
return self.tags
def get_phone(self):
return self.phones
def add_phone(self, phone: Phone): # add a phone
self.phones.append(phone)
def add_birthday(self, birthday: Birthday): # add a birthday date
if self.birthday is None:
self.birthday = birthday
else:
print(color("This user already have birthday date",Colors.red))
def add_email(self, email:Email): # add an email
if self.email is None:
self.email = email
else:
print(color("This user already have email",Colors.red))
def add_tags(self, tags:Tags): # add tags
self.tags.append(tags)
def add_adress(self, adress):
if self.adress is None:
self.adress = adress
else:
print(color("This user already have adress",Colors.red))
def change_adress(self,adress):
# adress = Adress(adress)
if self.adress is None:
print(color("This user doesnt have adress", Colors.red))
else:
self.adress = adress
def change_email(self,email):
# email = Email(email)
if self.email is None:
print(color("This user doesn't have an email", Colors.red))
else:
self.email = email
def change_birthday(self,birthday):
# birthday = Birthday(birthday)
if self.birthday is None:
print(color("This user doesnt have birthday", Colors.red))
else:
self.birthday = birthday
def remove_email(self):
if self.email is None:
print(color("This user doesnt have email",Colors.red))
else:
self.email = None
def remove_birthday(self):
if self.birthday is None:
print(color("This user doesnt have birthday date",Colors.red))
else:
self.birthday = None
def remove_phone(self, phone): # remove a phone
# phone = Phone(phone)
for ph in self.phones:
if str(ph) == str(phone):
self.phones.remove(ph)
else:
print(color("This user doesnt have this phone",Colors.red))
def remove_tags(self, tags):
for tag in self.tags:
if str(tag) == str(tags):
self.tags.remove(tag)
else:
print(color("This user doesnt have tags which you want to remove",Colors.red))
def remove_adress(self):
if self.adress is None:
print(color("This user doesnt have adress",Colors.red))
else:
self.adress = None
def change_phone(self, oldphone, newphone): # change a user's phone
oldphone = Phone(oldphone)
newphone = Phone(newphone)
for phone in self.phones:
if str(phone) == str(oldphone):
self.phones.remove(phone)
self.phones.append(newphone)
else:
print(color("This user doesnt have oldphone which you want to change",Colors.red))
def days_to_birthday(self): # shows how many days remain until the next birthday
# TODO: needs further work
try:
if self.birthday is None:
return None
current = datetime.now().date()
current : datetime
user_date = datetime.strptime(str(self.birthday), "%Y.%m.%d")
user_date: datetime
user_date = user_date.replace(year=current.year).date()
if user_date < current:
user_date = user_date.replace(year= current.year +1)
res = user_date - current
return color(f"{res.days} days before next birthday", Colors.purple)
else:
res = user_date - current
return color(f"{res.days} days before next birthday", Colors.purple)
except ValueError:
return (color("You set a wrong date or the user doesn't have a birthday date\nTry again and set a new date in the format YYYY.MM.DD", Colors.red))
def __repr__(self) -> str:
return f"\nPhone - {[str(i) for i in self.phones]},\nBirthday - {self.birthday},\nEmail - {self.email},\nAdress - {self.adress},\nTags - {self.tags}"
separator = "___________________________________________________________"
class AdressBook(UserDict): # address book
def add_record(self, record: Record):
self.data[record.name.value] = record
def generator(self): # generator using yield
for name, info in self.data.items():
print(color(separator,Colors.purple))
yield color(f"Name - {name.title()} : ",Colors.blue)+ color(f"{info}",Colors.yellow)
print(color(separator,Colors.purple))
def iterator(self, value): # shows as many contacts as the user requests
value = value
gen = self.generator()
try:
if value > len(self.data):
raise Error
except:
print(color("The value you set is too big, the list has fewer users. Try again.\n", Colors.red))
while value > 0:
try:
print(next(gen))
value -= 1
except StopIteration:
print(color(f"Try a value smaller by {value}. The AddressBook has {len(self.data)} contacts",Colors.purple))
return ""
return color("Thats all!",Colors.orange)
# def save(self): # saves the address book data to a csv file
# if len(self.data) == 0:
# print(color("Your AddressBook is empty",Colors.red))
# with open("savebook.csv", "w", newline="") as file:
# fields = ["Name", "Info"]
# writer = csv.DictWriter(file, fields)
# writer.writeheader()
# for name, info in self.data.items():
# name :str
# writer.writerow({"Name": name.title(), "Info": str(info)})
# return color("Succesfull save your AddressBook",Colors.green)
# def load(self): # loads contacts from a saved csv file; notifies the user if no such file exists
# try:
# with open("savebook.csv", "r", newline="") as file:
# reader = csv.DictReader(file)
# for row in reader:
# saved = {row["Name"]: row["Info"]}
# self.data.update(saved)
# print(color("\nSuccesfull load saved AddressBook", Colors.purple))
# except:
# print(color("\nDont exist file with saving contacts",Colors.blue))
# return ""
def find_tags(self,tags):
res = ""
finder = False
tags = tags[0]
for user, info in self.data.items():
for tag in info.get_tags():
if str(tag) == str(tags):
finder = True
print(color(f"\nFind tags\nUser - {user.title()}{info}",Colors.purple))
return color("Found users",Colors.green) if finder == True else color("Dont find any user",Colors.green)
def find_name(self, name: str): # search by name
res= ""
fail = color("Finder not find any matches in AddressBook",Colors.red)
for user, info in self.data.items():
if str(user) == name:
res += color(f"Find similar contacts:\n\nUser - {user.title()}{info}\n",Colors.purple)
return res if len(res)>0 else fail
def find_phone(self,phone):
finder = False
phone = Phone(phone)
for user, info in self.data.items():
for ph in info.get_phone():
if str(ph) == str(phone):
finder = True
print(color(f"\nFind phone\nUser - {user.title()}{info}",Colors.purple))
return color("Found users",Colors.green) if finder == True else color("Dont find any user",Colors.green)
def save_contacts_to_file(self):
with open('contacts.pickle', 'wb') as file:
pickle.dump(self.data, file)
def load_contacts_from_file(self):
try:
with open('contacts.pickle', 'rb') as file:
self.data = pickle.load(file)
except FileNotFoundError:
pass
ad = AdressBook()
# SCRIPT TEST
# name = Name("Dima")
# phone = Phone("0993796625")
# birth = Birthday("2001.08.12")
# rec = Record(name, phone, birth)
# ad = AdressBook()
# ad.add_record(rec)
# #=============================================================================
# name1 = Name("Benderovec")
# phone1 = Phone("0993790447")
# birth1 = Birthday("2001.08.12")
# rec1 = Record(name1, phone1, birth1)
# ad.add_record(rec1)
# #=============================================================================
# # print(rec.days_to_birthday())
# #=============================================================================
# name2 = Name("Diana")
# phone2 = Phone("099797484")
# birth2 = Birthday("2003.04.01")
# rec2 = Record(name2, phone2, birth2)
# #============================================================================
# ad.add_record(rec2)
# print(ad.data)
# print(ad.iterator(6))
# print(ad.find("test"))
# IGNORE EVERYTHING BELOW!!!
# result = button_dialog(
# title='Button dialog example',
# text='Do you want to confirm?',
# buttons=[
# ('Yes', True),
# ('No', False),
# ('Maybe...', None)
# ],
# ).run()
# print(result)
# html_completer = WordCompleter(['add user', 'add phone', 'add email', 'add adress'])
# text = prompt('Enter command: ', completer=html_completer)
# print('You said: %s' % text)
# my_completer = WordCompleter(['add phone', 'add user', 'add email', 'add adress'])
# text = prompt('Enter HTML: ', completer=my_completer, complete_while_typing=True,)
# print(text.split())
# for i in my_completer:
# print(i)
"""
from prompt_toolkit import prompt
from prompt_toolkit.completion import WordCompleter
html_completer = WordCompleter(['<html>', '<body>', '<head>', '<title>'])
text = prompt('Enter HTML: ', completer=html_completer)
print('You said: %s' % text)
from prompt_toolkit.shortcuts import yes_no_dialog
result = yes_no_dialog(
title='Yes/No dialog example',
text='Do you want to confirm?').run()
""" | 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/addressbook.py | addressbook.py |
greeting = """
@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@
#@@@@ @@@@@@@@@@@@@@@@@@@@
@@@@ @@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ #@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ #@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@& &@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@/
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@ ,@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@(
@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@ .@@@@@
@@@@@@@@@@@@@@@@@@@@ @@@%
@@@@@@@@@@@@@@@@@@@@% @@@@&
@@@@@@@@@@@@@@@@@@@@@
__ __ _ __
/ / /\ \ \___| | ___ ___ _ __ ___ ___ _ \ \
\ \/ \/ / _ \ |/ __/ _ \| '_ ` _ \ / _ \ (_) | |
\ /\ / __/ | (_| (_) | | | | | | __/ _ | |
\/ \/ \___|_|\___\___/|_| |_| |_|\___| (_) | |
/_/
""" | 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/greeting.py | greeting.py |
from pathlib import Path
import shutil
import os
from colorit import *
import sys
name_extensions = {
"images": (".jpeg", ".png", ".jpg", ".svg"),
"video": (".avi", ".mp4", ".mov", ".mkv"),
"documents": (".doc", ".docx", ".pdf", ".xlsx", ".pptx", ".txt"),
"music": (".mp3", ".ogg", ".wav", ".amr"),
"archives": (".zip", ".gz", ".tar"),
"unknown": ""
}
RUSS_SYMB = "абвгдеёжзийклмнопрстуфхцчшщъыьэюяєіїґ?<>,!@#[]#$%^&*()-=; "
ENG_SYMB = (
"a",
"b",
"v",
"g",
"d",
"e",
"e",
"j",
"z",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"r",
"s",
"t",
"u",
"f",
"h",
"ts",
"ch",
"sh",
"sch",
"",
"y",
"",
"e",
"yu",
"ya",
"je",
"i",
"ji",
"g",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
)
TRANS = {}
# current_path = Path("C:\\test_sorted")  # bad case
for c, t in zip(RUSS_SYMB, ENG_SYMB):
TRANS[ord(c)] = t
TRANS[ord(c.upper())] = t.upper()
def normalize(name: str) -> str:
return name.translate(TRANS)
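# Example: normalize("привіт світ.txt") -> "privit_svit.txt". Cyrillic letters are
# transliterated and the punctuation/space characters listed at the end of
# RUSS_SYMB are replaced with underscores; characters outside the table are kept as-is.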
def unpack_arch(
archive_path, current_path
):
shutil.unpack_archive(archive_path, rf"{current_path}\\archives")
def create_folder(folder: Path): # create the category folders used for sorting
for name in name_extensions.keys():
if not folder.joinpath(name).exists():
folder.joinpath(name).mkdir()
def bypass_files(path_folder):
create_folder(path_folder)
for item in path_folder.glob("**/*"):
if item.is_file():
sort_file(item, path_folder)
if item.is_dir() and item.name not in list(name_extensions):
if os.path.getsize(item) == 0:
shutil.rmtree(item)
if item.name in name_extensions:
continue
def sort_file(
file: Path, path_folder: Path
): # sort a single file into its category folder
if file.suffix in name_extensions["images"]:
file.replace(path_folder.joinpath("images", f"{normalize(file.stem)}{file.suffix}"))
elif file.suffix in name_extensions["documents"]:
file.replace(path_folder.joinpath("documents", f"{normalize(file.stem)}{file.suffix}"))
elif file.suffix in name_extensions["music"]:
file.replace(path_folder.joinpath("music", f"{normalize(file.stem)}{file.suffix}"))
elif file.suffix in name_extensions["video"]:
file.replace(path_folder.joinpath("video", f"{normalize(file.stem)}{file.suffix}"))
elif file.suffix in name_extensions["archives"]:
shutil.unpack_archive(file, path_folder)
os.remove(file)
else:
file.replace(path_folder.joinpath("unknown",f"{normalize(file.stem)}{file.suffix}"))
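# The destination for each file is chosen purely by its suffix: known extensions
# are moved (with transliterated names) into images/, documents/, music/ or
# video/, archives are unpacked into the target folder and the original archive
# is removed, and everything else lands in unknown/.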
def sorting(pathh):
flag = False
try:
current_path = Path(pathh)
except IndexError:
print("Type path to folder")
# return None
if not current_path.exists():
        print("Folder does not exist. Try again.")
        return color(f"Folder does not exist. Try again.",Colors.red)
result_list = list(current_path.iterdir())
bypass_files(current_path)
flag = True
for i in result_list:
print(i, "- sorted")
return color("Done",Colors.blue) if flag == True else color("Something went wrong",Colors.red)
# .\HW6m.py C:\test_sorted | 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/sort.py | sort.py |
from colorit import *
from prettytable import PrettyTable
def pers_assistant_help():
pah_com_list = {"tel_book":"TELEPHONE BOOK", "note_book": "NOTE BOOK", "sorted": "SORTED"}
all_commands = {
"1":[
["show all", "This command shows all contacts in your address book", "show all"],
["add user", "This command adds a new user in your address book", "add user <FirstName_LastName> <phone>"],
["add tags","This command add a new tags for an existing contact"," add tags <tag>"],
            ["add tags", "This command adds new tags for an existing contact", "add tags <user> <tag>"],
["add email", "This command adds an email for an existing contact", "add email <user> <email>"],
["add birthday", "This command adds a birthday for an existing contact", "add birthday <user> <date>"],
["add adress", "This command adds an address for an existing contact", "add adress <user> <address>"],
["change phone","This command changes an phone for an existing contact","change phone <OldPhone> <NewPhone>"],
            ["change phone", "This command changes a phone number for an existing contact", "change phone <user> <OldPhone> <NewPhone>"],
["change email", "This command changes an email address for an existing contact", "change email <user> <new_email>"],
["change birthday", "This command changes a birthday for an existing contact", "change birthday <user> <newBirthday>"],
["find name", "This command finds all existing contacts whose names match the search query", "find name <name>"],
["find phone", "This command finds existing contacts whose phone match the search query", "find phone <phone>"],
            ["find tags", "This command finds existing contacts whose tags match the search query", "find tags <tag>"],
            ["remove tags", "This command removes a tag for an existing contact", "remove tags <user> <tag>"],
["remove phone", "This command removes a phone number for an existing contact", "remove phone <user> <phone>"],
["remove birthday", "This command removes a birthday for an existing contact", "remove birthday <user>"],
["remove email", "This command removes an email address for an existing contact", "remove email <user> <email>"],
["remove user", "This command removes an existing contact and all the information about it", "remove user <user>"],
            ["remove adress", "This command removes the address of an existing contact", "remove adress <user>"],
["when birthday", "This command shows a birthday of an existing contact", "when birthday <user>"],
            ["birthdays within", "This command shows all users who have a birthday in the selected period", "birthdays within <days - must be an integer>"]
],
"2":[
["add or add_note", "This command adds a new note in your Notepad", "add(add_note) <title> <body> <tags>"],
["edit or edit_note", "This command changes an existing note in your Notepad", "edit(edit_note) <title>"],
["delete", "This command deletes an existing note in your Notepad", "delete <title>"],
["find_tags", "This command finds and sorts existing notes whose tags match the search query", "find_tags <tag>"],
["find", "This command finds existing notes whose note(body) matches the search query", "find <frase>"],
["show or show_note", "This command shows an existing note in your Notepad", "show(show_note) <title>"],
["showall", "This command shows all existing notes in your Notepad", "showall"],
],
"3": [[
"sort directory", "This command sorts all files in the given directory", "sort directory <path to folder>"
]]}
print(f'''I'm your personal assistant.
I have {pah_com_list['tel_book']}, {pah_com_list['note_book']} and the {pah_com_list['sorted']} function for the files in your folder.\n''')
while True:
print(f'''If you want to know how to work with:
"{pah_com_list['tel_book']}" press '1'
"{pah_com_list['note_book']}" press '2'
function "{pah_com_list['sorted']}" press '3'
SEE all commands press '4'
EXIT from HELP press any other key''')
user_input = input()
if user_input not in ["1", "2", "3", "4"]:
break
elif user_input in ["1", "2", "3"]:
my_table = PrettyTable(["Command Name", "Description", "Example"])
[my_table.add_row(i) for i in all_commands[user_input]]
my_table.add_row(["quit, close, goodbye, exit", "This command finish work with your assistant", "quit(close, goodbye, exit)"])
print(my_table)
else:
my_table = PrettyTable(["Command Name", "Description", "Example"])
all_commands_list = sorted([i for j in list(all_commands.values()) for i in j])
[my_table.add_row(i) for i in all_commands_list]
my_table.add_row(["quit, close, goodbye, exit", "This command finish work with your assistant", "quit(close, goodbye, exit)"])
print(my_table)
return color("Done",Colors.blue)
| 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/help.py | help.py |
import sys, platform, os, re
if not sys.version_info >= (3, 6):
sys.exit('Python 3.6 or higher is required!')
try:
import eldf
except ImportError:
sys.exit("Module eldf is not installed!\nPlease install it using this command:\n" + (sys.platform == 'win32')*(os.path.dirname(sys.executable) + '\\Scripts\\') + 'pip3 install eldf')
if len(sys.argv) < 2 or '-h' in sys.argv or '--help' in sys.argv:
print('''Usage: 11l py-or-11l-source-file [options]
Options:
--int64 use 64-bit integers
-d disable optimizations [makes compilation faster]
-t transpile only
-e expand includes
-v print version''')
sys.exit(1)
if '-v' in sys.argv:
print(open(os.path.join(os.path.dirname(sys.argv[0]), 'version.txt')).read())
sys.exit(0)
enopt = not '-d' in sys.argv
if not (sys.argv[1].endswith('.py') or sys.argv[1].endswith('.11l')):
sys.exit("source-file should have extension '.py' or '.11l'")
def show_error(fname, fcontents, e, syntax_error):
next_line_pos = fcontents.find("\n", e.pos)
if next_line_pos == -1:
next_line_pos = len(fcontents)
prev_line_pos = fcontents.rfind("\n", 0, e.pos) + 1
sys.exit(('Syntax' if syntax_error else 'Lexical') + ' error: ' + e.message + "\n in file '" + fname + "', line " + str(fcontents[:e.pos].count("\n") + 1) + "\n"
+ fcontents[prev_line_pos:next_line_pos] + "\n" + re.sub(r'[^\t]', ' ', fcontents[prev_line_pos:e.pos]) + '^'*max(1, e.end - e.pos))
import _11l_to_cpp.tokenizer, _11l_to_cpp.parse
if sys.argv[1].endswith('.py'):
import python_to_11l.tokenizer, python_to_11l.parse
py_source = open(sys.argv[1], encoding = 'utf-8-sig').read()
try:
_11l_code = python_to_11l.parse.parse_and_to_str(python_to_11l.tokenizer.tokenize(py_source), py_source, sys.argv[1])
except (python_to_11l.parse.Error, python_to_11l.tokenizer.Error) as e:
show_error(sys.argv[1], py_source, e, type(e) == python_to_11l.parse.Error)
_11l_fname = os.path.splitext(sys.argv[1])[0] + '.11l'
open(_11l_fname, 'w', encoding = 'utf-8', newline = "\n").write(_11l_code)
else:
_11l_fname = sys.argv[1]
_11l_code = open(sys.argv[1], encoding = 'utf-8-sig').read()
cpp_code = ''
if '--int64' in sys.argv:
cpp_code += "#define INT_IS_INT64\n"
_11l_to_cpp.parse.int_is_int64 = True
cpp_code += '#include "' + os.path.abspath(os.path.join(os.path.dirname(sys.argv[0]), '_11l_to_cpp', '11l.hpp')) + "\"\n\n" # replace("\\", "\\\\") is not necessary here (because MSVC for some reason treat backslashes in include path differently than in regular string literals)
try:
cpp_code += _11l_to_cpp.parse.parse_and_to_str(_11l_to_cpp.tokenizer.tokenize(_11l_code), _11l_code, _11l_fname, append_main = True)
except (_11l_to_cpp.parse.Error, _11l_to_cpp.tokenizer.Error) as e:
# open(_11l_fname, 'w', encoding = 'utf-8', newline = "\n").write(_11l_code)
show_error(_11l_fname, _11l_code, e, type(e) == _11l_to_cpp.parse.Error)
if '-e' in sys.argv:
included = set()
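# Used with the -e option: recursively inlines `#include "..."` directives in the generated C++,
# expanding each file only once and leaving commented-out includes untouched.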
def process_include_directives(src_code, dir = ''):
exp_code = ''
writepos = 0
while True:
i = src_code.find('#include "', writepos)
if i == -1:
break
exp_code += src_code[writepos:i]
if src_code[i-2:i] == '//': # skip commented includes
exp_code += '#'
writepos = i + 1
continue
fname_start = i + len('#include "')
fname_end = src_code.find('"', fname_start)
assert(src_code[fname_end + 1] == "\n") # [-TODO: Add support of comments after #include directives-]
fname = src_code[fname_start:fname_end]
if fname[1:3] == ':\\' or fname.startswith('/'): # this is an absolute pathname
pass
else: # this is a relative pathname
assert(dir != '')
fname = os.path.join(dir, fname)
if fname not in included:
included.add(fname)
exp_code += process_include_directives(open(fname, encoding = 'utf-8-sig').read(), os.path.dirname(fname))
writepos = fname_end + 1
exp_code += src_code[writepos:]
return exp_code
cpp_code = process_include_directives(cpp_code)
cpp_fname = os.path.splitext(sys.argv[1])[0] + '.cpp'
open(cpp_fname, 'w', encoding = 'utf-8-sig', newline = "\n").write(cpp_code) # utf-8-sig is for MSVC
if '-t' in sys.argv or \
'-e' in sys.argv:
sys.exit()
if sys.platform == 'win32':
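# Look for vcvarsall.bat from Visual Studio 2019/2017 (any edition) so that cl.exe can be run
# with the proper build environment.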
was_break = False
for version in ['2019', '2017']:
for edition in ['BuildTools', 'Community', 'Enterprise', 'Professional']:
vcvarsall = 'C:\\Program Files' + ' (x86)'*platform.machine().endswith('64') + '\\Microsoft Visual Studio\\' + version + '\\' + edition + R'\VC\Auxiliary\Build\vcvarsall.bat'
if os.path.isfile(vcvarsall):
was_break = True
#print('Using ' + version + '\\' + edition)
break # ^L.break
if was_break:
break
if not was_break:
sys.exit('''Unable to find vcvarsall.bat!
If you do not have Visual Studio 2017 or 2019 installed please install it or Build Tools for Visual Studio from here[https://visualstudio.microsoft.com/downloads/].''')
os.system('"' + vcvarsall + '" ' + ('x64' if platform.machine().endswith('64') else 'x86') + ' > nul && cl.exe /std:c++17 /MT /EHsc /nologo /W3 ' + '/O2 '*enopt + cpp_fname)
else:
if os.system('g++-8 --version > /dev/null') != 0:
sys.exit('GCC 8 is not found!')
os.system('g++-8 -std=c++17 -Wfatal-errors -DNDEBUG ' + '-O3 '*enopt + '-march=native -o "' + os.path.splitext(sys.argv[1])[0] + '" "' + cpp_fname + '" -lstdc++fs')
| 11l | /11l-2021.3-py3-none-any.whl/11l.py | 11l.py |
try:
from python_to_11l.tokenizer import Token
import python_to_11l.tokenizer as tokenizer
except ImportError:
from tokenizer import Token
import tokenizer
from typing import List, Tuple, Dict, Callable
from enum import IntEnum
import os, re, eldf
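# Chain of lexical scopes: maps names to their types/definitions and is used to choose the 11l
# scope prefix (e.g. '@', ':', ':::') when translating identifiers.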
class Scope:
parent : 'Scope'
class Var:
type : str
node : 'ASTNode'
def __init__(self, type, node):
assert(type is not None)
self.type = type
self.node = node
def serialize_to_dict(self):
node = None
if type(self.node) == ASTFunctionDefinition:
node = self.node.serialize_to_dict()
return {'type': self.type, 'node': node}
def deserialize_from_dict(self, d):
if d['node'] is not None:
self.node = ASTFunctionDefinition()
self.node.deserialize_from_dict(d['node'])
vars : Dict[str, Var]
nonlocals_copy : set
nonlocals : set
globals : set
is_function : bool
is_lambda_or_for = False
def __init__(self, func_args):
self.parent = None
if func_args is not None:
self.is_function = True
self.vars = dict(map(lambda x: (x[0], Scope.Var(x[1], None)), func_args))
else:
self.is_function = False
self.vars = {}
self.nonlocals_copy = set()
self.nonlocals = set()
self.globals = set()
def serialize_to_dict(self, imported_modules):
ids_dict = {'Imported modules': imported_modules}
for name, id in self.vars.items():
if name not in python_types_to_11l and not id.type.startswith('('): # )
ids_dict[name] = id.serialize_to_dict()
return ids_dict
def deserialize_from_dict(self, d):
for name, id_dict in d.items():
if name != 'Imported modules':
id = Scope.Var(id_dict['type'], None)
id.deserialize_from_dict(id_dict)
self.vars[name] = id
def add_var(self, name, error_if_already_defined = False, type = '', err_token = None, node = None):
s = self
while True:
if name in s.nonlocals_copy or name in s.nonlocals or name in s.globals:
return False
if s.is_function:
break
s = s.parent
if s is None:
break
if not (name in self.vars):
s = self
while True:
if name in s.vars:
return False
if s.is_function:
break
s = s.parent
if s is None:
break
self.vars[name] = Scope.Var(type, node)
return True
elif error_if_already_defined:
raise Error('redefinition of already defined variable is not allowed', err_token if err_token is not None else token)
return False
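# Resolves `name` to its 11l scope prefix: '' for locals and built-ins, '@'/'@=' for nonlocals,
# one '@' per capture level for variables of enclosing functions, ':' for globals and ':::' for
# imported modules; raises Error for undefined identifiers.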
def find_and_get_prefix(self, name, token):
if name == 'self':
return ''
if name in ('isinstance', 'len', 'super', 'print', 'input', 'ord', 'chr', 'range', 'zip', 'all', 'any', 'abs', 'pow', 'sum', 'product', 'open', 'min', 'max', 'divmod', 'hex', 'bin', 'map', 'list', 'tuple', 'dict', 'set', 'sorted', 'reversed', 'filter', 'reduce', 'round', 'enumerate', 'hash', 'copy', 'deepcopy', 'NotImplementedError', 'ValueError', 'IndexError'):
return ''
s = self
while True:
if name in s.nonlocals_copy:
return '@='
if name in s.nonlocals:
return '@'
if name in s.globals:
return ':'
if s.is_function and not s.is_lambda_or_for:
break
s = s.parent
if s is None:
break
capture_level = 0
s = self
while True:
if name in s.vars:
if s.parent is None: # variable is declared in the global scope
if s.vars[name].type == '(Module)':
return ':::'
return ':' if capture_level > 0 else ''
else:
return capture_level*'@'
if s.is_function:
capture_level += 1
s = s.parent
if s is None:
if name in ('id',):
return ''
raise Error('undefined identifier', token)
def find(self, name):
s = self
while True:
id = s.vars.get(name)
if id is not None:
return id
s = s.parent
if s is None:
return None
def var_type(self, name):
id = self.find(name)
return id.type if id is not None else None
scope : Scope
class Module:
scope : Scope
def __init__(self, scope):
self.scope = scope
modules : Dict[str, Module] = {}
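# Pratt-parser symbol descriptor: operator id, binding powers and the nud/led (null/left
# denotation) handlers used while parsing expressions.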
class SymbolBase:
id : str
lbp : int
nud_bp : int
led_bp : int
nud : Callable[['SymbolNode'], 'SymbolNode']
led : Callable[['SymbolNode', 'SymbolNode'], 'SymbolNode']
def set_nud_bp(self, nud_bp, nud):
self.nud_bp = nud_bp
self.nud = nud
def set_led_bp(self, led_bp, led):
self.led_bp = led_bp
self.led = led
def __init__(self):
def nud(s): raise Error('unknown unary operator', s.token)
self.nud = nud
def led(s, l): raise Error('unknown binary operator', s.token)
self.led = led
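# Expression-tree node built by the Pratt parser; to_str() translates the Python expression it
# represents into the corresponding 11l source.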
class SymbolNode:
token : Token
symbol : SymbolBase = None
children : List['SymbolNode']# = []
parent : 'SymbolNode' = None
ast_parent : 'ASTNode'
function_call = False
iterable_unpacking = False
tuple = False
is_list = False
is_set = False
def is_dict(self): return self.symbol.id == '{' and not self.is_set # }
slicing = False
is_not = False
skip_find_and_get_prefix = False
scope_prefix : str = ''
scope : Scope
token_str_override : str
def __init__(self, token, token_str_override = None):
self.token = token
self.children = []
self.scope = scope
self.token_str_override = token_str_override
def var_type(self):
if self.is_parentheses():
return self.children[0].var_type()
if self.symbol.id == '*' and self.children[0].var_type() == 'List':
return 'List'
if self.symbol.id == '+' and (self.children[0].var_type() == 'List' or self.children[1].var_type() == 'List'):
return 'List'
if self.is_list:
return 'List'
#if self.symbol.id == '[' and not self.is_list and self.children[0].var_type() == 'str': # ]
if self.symbol.id == '[' and self.children[0].var_type() == 'str': # ]
return 'str'
if self.symbol.id == '*' and self.children[1].var_type() == 'str':
return 'str'
if self.token.category == Token.Category.STRING_LITERAL:
return 'str'
if self.symbol.id == '.':
if self.children[0].token_str() == 'os' and self.children[1].token_str() == 'pathsep':
return 'str'
return None
if self.symbol.id == 'if':
t0 = self.children[0].var_type()
if t0 is not None:
return t0
return self.children[2].var_type()
if self.function_call and self.children[0].token_str() == 'str':
return 'str'
return self.scope.var_type(self.token.value(source))
def append_child(self, child):
child.parent = self
self.children.append(child)
def leftmost(self):
if self.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL, Token.Category.NAME, Token.Category.CONSTANT) or self.symbol.id == 'lambda':
return self.token.start
if self.symbol.id == '(': # )
if self.function_call:
return self.children[0].token.start
else:
return self.token.start
elif self.symbol.id == '[': # ]
if self.is_list:
return self.token.start
else:
return self.children[0].token.start
if len(self.children) in (2, 3):
return self.children[0].leftmost()
return self.token.start
def rightmost(self):
if self.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL, Token.Category.NAME, Token.Category.CONSTANT):
return self.token.end
if self.symbol.id in '([': # ])
if len(self.children) == 0:
return self.token.end + 1
return (self.children[-1] or self.children[-2]).rightmost() + 1
return self.children[-1].rightmost()
def left_to_right_token(self):
return Token(self.leftmost(), self.rightmost(), Token.Category.NAME)
def token_str(self):
return self.token.value(source) if not self.token_str_override else self.token_str_override
def is_parentheses(self):
return self.symbol.id == '(' and not self.tuple and not self.function_call # )
def to_str(self):
# r = ''
# prev_token_end = self.children[0].token.start
# for c in self.children:
# r += source[prev_token_end:c.token.start]
# if c.token.value(source) != 'self': # hack for a while
# r += c.token.value(source)
# prev_token_end = c.token.end
# return r
if self.token.category == Token.Category.NAME:
if self.scope_prefix == ':' and ((self.parent and self.parent.function_call and self is self.parent.children[0]) or (self.token_str()[0].isupper() and self.token_str() != self.token_str().upper()) or self.token_str() in python_types_to_11l): # global functions and types do not require prefix `:` because global functions and types are ok, but global variables are not so good and they should be marked with `:`
return self.token_str()
if self.token_str() == 'self' and (self.parent is None or (self.parent.symbol.id != '.' and self.parent.symbol.id != 'lambda')):
parent = self
while parent.parent is not None:
parent = parent.parent
ast_parent = parent.ast_parent
while ast_parent is not None:
if isinstance(ast_parent, ASTFunctionDefinition):
if len(ast_parent.function_arguments) and ast_parent.function_arguments[0][0] == 'self' and isinstance(ast_parent.parent, ASTClassDefinition):
return '(.)'
break
ast_parent = ast_parent.parent
return self.scope_prefix + self.token_str()
if self.token.category == Token.Category.NUMERIC_LITERAL:
n = self.token.value(source)
i = 0
# if n[0] in '-+':
# sign = n[0]
# i = 1
# else:
# sign = ''
sign = ''
is_hex = n[i:i+1] == '0' and n[i+1:i+2] in ('x', 'X')
is_oct = n[i:i+1] == '0' and n[i+1:i+2] in ('o', 'O')
is_bin = n[i:i+1] == '0' and n[i+1:i+2] in ('b', 'B')
if is_hex or is_oct or is_bin:
i += 2
if is_hex:
n = n[i:].replace('_', '')
if len(n) <= 2: # ultrashort hexadecimal number
n = '0'*(2-len(n)) + n
return n[:1] + "'" + n[1:]
elif len(n) <= 4: # short hexadecimal number
n = '0'*(4-len(n)) + n
return n[:2] + "'" + n[2:]
else:
number_with_separators = ''
j = len(n)
while j > 4:
number_with_separators = "'" + n[j-4:j] + number_with_separators
j -= 4
return sign + '0'*(4-j) + n[0:j] + number_with_separators
if n[-1] in 'jJ':
n = n[:-1] + 'i'
return sign + n[i:].replace('_', "'") + ('o' if is_oct else 'b' if is_bin else '')
if self.token.category == Token.Category.STRING_LITERAL:
def balance_pq_string(s):
min_nesting_level = 0
nesting_level = 0
for ch in s:
if ch == "‘":
nesting_level += 1
elif ch == "’":
nesting_level -= 1
min_nesting_level = min(min_nesting_level, nesting_level)
nesting_level -= min_nesting_level
return "'"*-min_nesting_level + "‘"*-min_nesting_level + "‘" + s + "’" + "’"*nesting_level + "'"*nesting_level
s = self.token.value(source)
if s[0] in 'rR':
l = 3 if s[1:4] in ('"""', "'''") else 1
return balance_pq_string(s[1+l:-l])
elif s[0] in 'bB':
return s[1:] + '.code'
else:
l = 3 if s[0:3] in ('"""', "'''") else 1
if '\\' in s or ('‘' in s and not '’' in s) or (not '‘' in s and '’' in s):
if s == R'"\\"' or s == R"'\\'":
return R'‘\’'
s = s.replace("\n", "\\n\\\n").replace("\\\\n\\\n", "\\\n")
if s[0] == '"':
return s if l == 1 else '"' + s[3:-3].replace('"', R'\"') + '"'
else:
return '"' + s[l:-l].replace('"', R'\"').replace(R"\'", "'") + '"'
else:
return balance_pq_string(s[l:-l])
if self.token.category == Token.Category.CONSTANT:
return {'None': 'N', 'False': '0B', 'True': '1B'}[self.token.value(source)]
def range_need_space(child1, child2):
return not((child1 is None or child1.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL))
and (child2 is None or child2.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL)))
if self.symbol.id == '(': # )
if self.function_call:
if self.children[0].symbol.id == '.':
c01 = self.children[0].children[1].token_str()
if self.children[0].children[0].symbol.id == '{' and c01 == 'get': # } # replace `{'and':'&', 'or':'|', 'in':'C'}.get(self.symbol.id, 'symbol-' + self.symbol.id)` with `(S .symbol.id {‘and’ {‘&’}; ‘or’ {‘|’}; ‘in’ {‘C’} E ‘symbol-’(.symbol.id)})`
parenthesis = ('(', ')') if self.parent is not None else ('', '')
return parenthesis[0] + self.children[0].to_str() + parenthesis[1]
if c01 == 'join' and not (self.children[0].children[0].symbol.id == '.' and self.children[0].children[0].children[0].token_str() == 'os'): # replace `', '.join(arr)` with `arr.join(‘, ’)`
assert(len(self.children) == 3)
return (self.children[1].to_str() if self.children[1].token.category == Token.Category.NAME or self.children[1].symbol.id == 'for' or self.children[1].function_call else '(' + self.children[1].to_str() + ')') + '.join(' + (self.children[0].children[0].children[0].to_str() if self.children[0].children[0].is_parentheses() else self.children[0].children[0].to_str()) + ')'
if c01 == 'split' and len(self.children) == 5 and not (self.children[0].children[0].token_str() == 're'): # split() second argument [limit] in 11l is similar to JavaScript, Ruby and PHP, but not Python
return self.children[0].to_str() + '(' + self.children[1].to_str() + ', ' + self.children[3].to_str() + ' + 1)'
if c01 == 'split' and len(self.children) == 1:
return self.children[0].to_str() + '_py()' # + '((‘ ’, "\\t", "\\r", "\\n"), group_delimiters\' 1B)'
if c01 == 'is_integer' and len(self.children) == 1: # `x.is_integer()` -> `fract(x) == 0`
return 'fract(' + self.children[0].children[0].to_str() + ') == 0'
if c01 == 'bit_length' and len(self.children) == 1: # `x.bit_length()` -> `bit_length(x)`
return 'bit_length(' + self.children[0].children[0].to_str() + ')'
repl = {'startswith':'starts_with', 'endswith':'ends_with', 'find':'findi', 'rfind':'rfindi',
'lower':'lowercase', 'islower':'is_lowercase', 'upper':'uppercase', 'isupper':'is_uppercase', 'isdigit':'is_digit', 'isalpha':'is_alpha',
'timestamp':'unix_time', 'lstrip':'ltrim', 'rstrip':'rtrim', 'strip':'trim',
'appendleft':'append_left', 'extendleft':'extend_left', 'popleft':'pop_left', 'issubset':'is_subset'}.get(c01, '')
if repl != '': # replace `startswith` with `starts_with`, `endswith` with `ends_with`, etc.
c00 = self.children[0].children[0].to_str()
if repl == 'uppercase' and c00.endswith('[2..]') and self.children[0].children[0].children[0].symbol.id == '(' and self.children[0].children[0].children[0].children[0].token_str() == 'hex': # ) # `hex(x)[2:].upper()` -> `hex(x)`
return 'hex(' + self.children[0].children[0].children[0].children[1].to_str() + ')'
#assert(len(self.children) == 3)
res = c00 + '.' + repl + '('
def is_char(child):
ts = child.token_str()
return child.token.category == Token.Category.STRING_LITERAL and (len(ts) == 3 or (ts[:2] == '"\\' and len(ts) == 4))
if repl.endswith('trim') and len(self.children) == 1: # `strip()` -> `trim((‘ ’, "\t", "\r", "\n"))`
res += '(‘ ’, "\\t", "\\r", "\\n")'
elif repl.endswith('trim') and not is_char(self.children[1]): # `"...".strip("\t ")` -> `"...".trim(Array[Char]("\t "))`
assert(len(self.children) == 3)
res += 'Array[Char](' + self.children[1].to_str() + ')'
else:
for i in range(1, len(self.children), 2):
assert(self.children[i+1] is None)
res += self.children[i].to_str()
if i < len(self.children)-2:
res += ', '
return res + ')'
if self.children[0].children[0].symbol.id == '(' and \
self.children[0].children[0].children[0].token_str() == 'open' and \
len(self.children[0].children[0].children) == 5 and \
self.children[0].children[0].children[4] is None and \
self.children[0].children[0].children[3].token_str() in ("'rb'", '"rb"') and \
self.children[0].children[1].token_str() == 'read': # ) # transform `open(fname, 'rb').read()` into `File(fname).read_bytes()`
assert(self.children[0].children[0].children[2] is None)
return 'File(' + self.children[0].children[0].children[1].to_str() + ').read_bytes()'
if c01 == 'total_seconds': # `delta.total_seconds()` -> `delta.seconds`
assert(len(self.children) == 1)
return self.children[0].children[0].to_str() + '.seconds'
if c01 == 'conjugate' and len(self.children) == 1: # `c.conjugate()` -> `conjugate(c)`
return 'conjugate(' + self.children[0].children[0].to_str() + ')'
if c01 == 'readlines': # `f.readlines()` -> `f.read_lines(1B)`
assert(len(self.children) == 1)
return self.children[0].children[0].to_str() + ".read_lines(1B)"
if c01 == 'readline': # `f.readline()` -> `f.read_line(1B)`
assert(len(self.children) == 1)
return self.children[0].children[0].to_str() + ".read_line(1B)"
if self.children[0].children[0].token_str() == 're' and self.children[0].children[1].token_str() != 'compile': # `re.search('pattern', 'string')` -> `re:‘pattern’.search(‘string’)`
c1_in_braces_if_needed = self.children[1].to_str()
if self.children[1].token.category != Token.Category.STRING_LITERAL:
c1_in_braces_if_needed = '(' + c1_in_braces_if_needed + ')'
if self.children[0].children[1].token_str() == 'split': # `re.split('pattern', 'string')` -> `‘string’.split(re:‘pattern’)`
return self.children[3].to_str() + '.split(re:' + c1_in_braces_if_needed + ')'
if self.children[0].children[1].token_str() == 'sub': # `re.sub('pattern', 'repl', 'string')` -> `‘string’.replace(re:‘pattern’, ‘repl’)`
return self.children[5].to_str() + '.replace(re:' + c1_in_braces_if_needed + ', ' + re.sub(R'\\(\d{1,2})', R'$\1', self.children[3].to_str()) + ')'
if self.children[0].children[1].token_str() == 'match':
assert c1_in_braces_if_needed[0] != '(', 'only string literal patterns supported in `match()` for a while' # )
if c1_in_braces_if_needed[-2] == '$': # `re.match('pattern$', 'string')` -> `re:‘pattern’.match(‘string’)`
return 're:' + c1_in_braces_if_needed[:-2] + c1_in_braces_if_needed[-1] + '.match(' + self.children[3].to_str() + ')'
else: # `re.match('pattern', 'string')` -> `re:‘^pattern’.search(‘string’)`
return 're:' + c1_in_braces_if_needed[0] + '^' + c1_in_braces_if_needed[1:] + '.search(' + self.children[3].to_str() + ')'
c0c1 = self.children[0].children[1].token_str()
return 're:' + c1_in_braces_if_needed + '.' + {'fullmatch': 'match', 'findall': 'find_strings', 'finditer': 'find_matches'}.get(c0c1, c0c1) + '(' + self.children[3].to_str() + ')'
if self.children[0].children[0].token_str() == 'collections' and self.children[0].children[1].token_str() == 'defaultdict': # `collections.defaultdict(ValueType) # KeyType` -> `DefaultDict[KeyType, ValueType]()`
assert(len(self.children) == 3)
if source[self.children[1].token.end + 2 : self.children[1].token.end + 3] != '#':
raise Error('to use `defaultdict` the type of dict keys must be specified in the comment', self.children[0].children[1].token)
sl = slice(self.children[1].token.end + 3, source.find("\n", self.children[1].token.end + 3))
return 'DefaultDict[' + trans_type(source[sl].lstrip(' '), self.scope, Token(sl.start, sl.stop, Token.Category.NAME)) + ', ' \
+ trans_type(self.children[1].token_str(), self.scope, self.children[1].token) + ']()'
if self.children[0].children[0].token_str() == 'collections' and self.children[0].children[1].token_str() == 'deque': # `collections.deque() # ValueType` -> `Deque[ValueType]()`
if len(self.children) == 3:
return 'Deque(' + self.children[1].to_str() + ')'
assert(len(self.children) == 1)
if source[self.token.end + 2 : self.token.end + 3] != '#':
raise Error('to use `deque` the type of deque values must be specified in the comment', self.children[0].children[1].token)
sl = slice(self.token.end + 3, source.find("\n", self.token.end + 3))
return 'Deque[' + trans_type(source[sl].lstrip(' '), self.scope, Token(sl.start, sl.stop, Token.Category.NAME)) + ']()'
if self.children[0].children[0].token_str() == 'int' and self.children[0].children[1].token_str() == 'from_bytes':
assert(len(self.children) == 5)
if not (self.children[3].token.category == Token.Category.STRING_LITERAL and self.children[3].token_str()[1:-1] == 'little'):
raise Error("only 'little' byteorder supported so far", self.children[3].token)
return "Int(bytes' " + self.children[1].to_str() + ')'
if self.children[0].children[0].token_str() == 'random' and self.children[0].children[1].token_str() == 'shuffle':
return 'random:shuffle(&' + self.children[1].to_str() + ')'
if self.children[0].children[0].token_str() == 'random' and self.children[0].children[1].token_str() == 'randint':
return 'random:(' + self.children[1].to_str() + ' .. ' + self.children[3].to_str() + ')'
if self.children[0].children[0].token_str() == 'random' and self.children[0].children[1].token_str() == 'randrange':
return 'random:(' + self.children[1].to_str() + (' .< ' + self.children[3].to_str() if len(self.children) == 5 else '') + ')'
if self.children[0].children[0].token_str() == 'heapq':
res = 'minheap:' + {'heappush':'push', 'heappop':'pop', 'heapify':'heapify'}[self.children[0].children[1].token_str()] + '(&'
for i in range(1, len(self.children), 2):
assert(self.children[i+1] is None)
res += self.children[i].to_str()
if i < len(self.children)-2:
res += ', '
return res + ')'
if self.children[0].children[0].token_str() == 'itertools' and self.children[0].children[1].token_str() == 'count': # `itertools.count(1)` -> `1..`
return self.children[1].to_str() + '..'
func_name = self.children[0].to_str()
if func_name == 'str':
func_name = 'String'
elif func_name in ('int', 'Int64'):
if func_name == 'int':
func_name = 'Int'
if len(self.children) == 5:
return func_name + '(' + self.children[1].to_str() + ", radix' " + self.children[3].to_str() + ')'
elif func_name == 'float':
if len(self.children) == 3 and self.children[1].token.category == Token.Category.STRING_LITERAL and self.children[1].token_str()[1:-1].lower() in ('infinity', 'inf'):
return 'Float.infinity'
func_name = 'Float'
elif func_name == 'complex':
func_name = 'Complex'
elif func_name == 'list': # `list(map(...))` -> `map(...)`
if len(self.children) == 3 and self.children[1].symbol.id == '(' and self.children[1].children[0].token_str() == 'range': # ) # `list(range(...))` -> `Array(...)`
parens = True#len(self.children[1].children) == 7 # if true, then this is a range with step
return 'Array' + '('*parens + self.children[1].to_str() + ')'*parens
assert(len(self.children) == 3)
if self.children[1].symbol.id == '(' and self.children[1].children[0].token_str() in ('map', 'product', 'zip'): # )
return self.children[1].to_str()
else:
return 'Array(' + self.children[1].to_str() + ')'
elif func_name == 'tuple': # `tuple(sorted(...))` -> `tuple_sorted(...)`
assert(len(self.children) == 3)
if self.children[1].function_call and self.children[1].children[0].token_str() == 'sorted':
return 'tuple_' + self.children[1].to_str()
elif func_name == 'dict':
func_name = 'Dict'
elif func_name == 'set': # `set() # KeyType` -> `Set[KeyType]()`
if len(self.children) == 3:
return 'Set(' + self.children[1].to_str() + ')'
assert(len(self.children) == 1)
if source[self.token.end + 2 : self.token.end + 3] != '#':
# if self.parent is None and type(self.ast_parent) == ASTExprAssignment \
# and self.ast_parent.dest_expression.symbol.id == '.' \
# and self.ast_parent.dest_expression.children[0].token_str() == 'self' \
# and type(self.ast_parent.parent) == ASTFunctionDefinition \
# and self.ast_parent.parent.function_name == '__init__':
# return 'Set()'
raise Error('to use `set` the type of set keys must be specified in the comment', self.children[0].token)
sl = slice(self.token.end + 3, source.find("\n", self.token.end + 3))
return 'Set[' + trans_type(source[sl].lstrip(' '), self.scope, Token(sl.start, sl.stop, Token.Category.NAME)) + ']()'
elif func_name == 'open':
func_name = 'File'
mode = '‘r’'
for i in range(1, len(self.children), 2):
if self.children[i+1] is None:
if i == 3:
mode = self.children[i].to_str()
else:
arg_name = self.children[i].to_str()
if arg_name == 'mode':
mode = self.children[i+1].to_str()
elif arg_name == 'newline':
if mode not in ('‘w’', '"w"'):
raise Error("`newline` argument is only supported in 'w' mode", self.children[i].token)
if self.children[i+1].to_str() != '"\\n"':
raise Error(R'the only allowed value for `newline` argument is `"\n"`', self.children[i+1].token)
self.children.pop(i+1)
self.children.pop(i)
break
elif func_name == 'product':
func_name = 'cart_product'
elif func_name == 'deepcopy':
func_name = 'copy'
elif func_name == 'print' and self.iterable_unpacking:
func_name = 'print_elements'
if func_name == 'len': # replace `len(container)` with `container.len`
assert(len(self.children) == 3)
if isinstance(self.ast_parent, (ASTIf, ASTWhile)) if self.parent is None else self.parent.symbol.id == 'if': # `if len(arr)` -> `I !arr.empty`
return '!' + self.children[1].to_str() + '.empty'
if len(self.children[1].children) == 2 and self.children[1].symbol.id not in ('.', '['): # ]
return '(' + self.children[1].to_str() + ')' + '.len'
return self.children[1].to_str() + '.len'
elif func_name == 'ord': # replace `ord(ch)` with `ch.code`
assert(len(self.children) == 3)
return self.children[1].to_str() + '.code'
elif func_name == 'chr': # replace `chr(code)` with `Char(code' code)`
assert(len(self.children) == 3)
return "Char(code' " + self.children[1].to_str() + ')'
elif func_name == 'isinstance': # replace `isinstance(obj, type)` with `T(obj) >= type`
assert(len(self.children) == 5)
return 'T(' + self.children[1].to_str() + ') >= ' + self.children[3].to_str()
elif func_name in ('map', 'filter'): # replace `map(function, iterable)` with `iterable.map(function)`
assert(len(self.children) == 5)
b = len(self.children[3].children) > 1 and self.children[3].symbol.id not in ('(', '[') # ])
c1 = self.children[1].to_str()
return '('*b + self.children[3].to_str() + ')'*b + '.' + func_name + '(' + {'int':'Int', 'float':'Float', 'str':'String'}.get(c1, c1) + ')'
elif func_name == 'reduce':
if len(self.children) == 5: # replace `reduce(function, iterable)` with `iterable.reduce(function)`
return self.children[3].to_str() + '.reduce(' + self.children[1].to_str() + ')'
else: # replace `reduce(function, iterable, initial)` with `iterable.reduce(initial, function)`
assert(len(self.children) == 7)
return self.children[3].to_str() + '.reduce(' + self.children[5].to_str() + ', ' + self.children[1].to_str() + ')'
elif func_name == 'super': # replace `super()` with `T.base`
assert(len(self.children) == 1)
return 'T.base'
elif func_name == 'range':
assert(3 <= len(self.children) <= 7)
parenthesis = ('(', ')') if self.parent is not None and (self.parent.symbol.id == 'for' or (self.parent.function_call and self.parent.children[0].token_str() in ('map', 'filter', 'reduce'))) else ('', '')
if len(self.children) == 3: # replace `range(e)` with `(0 .< e)`
space = ' ' * range_need_space(self.children[1], None)
c1 = self.children[1].to_str()
if c1.endswith(' + 1'): # `range(e + 1)` -> `0 .. e`
return parenthesis[0] + '0' + space + '..' + space + c1[:-4] + parenthesis[1]
return parenthesis[0] + '0' + space + '.<' + space + c1 + parenthesis[1]
else:
rangestr = ' .< ' if range_need_space(self.children[1], self.children[3]) else '.<'
if len(self.children) == 5: # replace `range(b, e)` with `(b .< e)`
if self.children[3].token.category == Token.Category.NUMERIC_LITERAL and self.children[3].token_str().replace('_', '').isdigit() and \
self.children[1].token.category == Token.Category.NUMERIC_LITERAL and self.children[1].token_str().replace('_', '').isdigit(): # if `b` and `e` are numeric literals, then ...
return parenthesis[0] + self.children[1].token_str().replace('_', '') + '..' + str(int(self.children[3].token_str().replace('_', '')) - 1) + parenthesis[1] # ... replace `range(b, e)` with `(b..e-1)`
c3 = self.children[3].to_str()
if c3.endswith(' + 1'): # `range(a, b + 1)` -> `a .. b`
return parenthesis[0] + self.children[1].to_str() + rangestr.replace('<', '.') + c3[:-4] + parenthesis[1]
return parenthesis[0] + self.children[1].to_str() + rangestr + c3 + parenthesis[1]
else: # replace `range(b, e, step)` with `(b .< e).step(step)`
return '(' + self.children[1].to_str() + rangestr + self.children[3].to_str() + ').step(' + self.children[5].to_str() + ')'
elif func_name == 'print':
first_named_argument = len(self.children)
for i in range(1, len(self.children), 2):
if self.children[i+1] is not None:
first_named_argument = i
break
sep = '‘ ’'
for i in range(first_named_argument, len(self.children), 2):
assert(self.children[i+1] is not None)
if self.children[i].to_str() == 'sep':
sep = self.children[i+1].to_str()
break
def surround_with_sep(s, before, after):
if (sep in ('‘ ’', '‘’') # special case for ‘ ’ and ‘’
or sep[0] == s[0]): # ‘`‘sep’‘str’‘sep’` -> `‘sepstrsep’`’|‘`"sep""str""sep"` -> `"sepstrsep"`’
return s[0] + sep[1:-1]*before + s[1:-1] + sep[1:-1]*after + s[-1]
else: # `"sep"‘str’"sep"`|`‘sep’"str"‘sep’`
return sep*before + s + sep*after
def parenthesize_if_needed(child):
#if child.token.category in (Token.Category.NAME, Token.Category.NUMERIC_LITERAL) or child.symbol.id == '[': # ] # `print(‘Result: ’3)` is currently not supported in 11l
if child.token.category == Token.Category.NAME or child.symbol.id in ('[', '('): # )]
return child.to_str()
else:
return '(' + child.to_str() + ')'
res = 'print('
for i in range(1, first_named_argument, 2):
if i == 1: # it's the first argument
if i == first_named_argument - 2: # it's the only argument — ‘no sep is required’/‘no parentheses are required’
res += self.children[i].to_str()
elif self.children[i].token.category == Token.Category.STRING_LITERAL:
res += surround_with_sep(self.children[i].to_str(), False, True)
else:
res += parenthesize_if_needed(self.children[i])
else:
if self.children[i].token.category == Token.Category.STRING_LITERAL:
if self.children[i-2].token.category == Token.Category.STRING_LITERAL:
raise Error('consecutive string literals in `print()` are not supported', self.children[i].token)
res += surround_with_sep(self.children[i].to_str(), True, i != first_named_argument - 2)
else:
if self.children[i-2].token.category != Token.Category.STRING_LITERAL:
res += sep
res += parenthesize_if_needed(self.children[i])
for i in range(first_named_argument, len(self.children), 2):
if self.children[i].to_str() != 'sep':
if len(res) > len('print('): # )
res += ', '
res += self.children[i].to_str() + "' " + self.children[i+1].to_str()
return res + ')'
else:
if ':' in func_name:
colon_pos = func_name.rfind(':')
module_name = func_name[:colon_pos].replace(':', '.')
if module_name in modules:
tid = modules[module_name].scope.find(func_name[colon_pos+1:])
else:
tid = None
elif func_name.startswith('.'):
s = self.scope
while True:
if s.is_function and not s.is_lambda_or_for:
tid = s.parent.vars.get(func_name[1:])
break
s = s.parent
if s is None:
tid = None
break
else:
tid = self.scope.find(func_name)
f_node = tid.node if tid is not None and type(tid.node) == ASTFunctionDefinition else None
res = func_name + '('
for i in range(1, len(self.children), 2):
if self.children[i+1] is None:
if f_node is not None:
fargs = f_node.function_arguments[i//2 + int(func_name.startswith('.'))]
arg_type_name = fargs[2]
if arg_type_name.startswith(('List[', 'Dict[', 'DefaultDict[')) or (arg_type_name != '' and trans_type(arg_type_name, self.scope, self.children[i].token).endswith('&')) or fargs[3] == '&': # ]]]
res += '&'
res += self.children[i].to_str()
else:
ci_str = self.children[i].to_str()
res += ci_str + "' "
if f_node is not None:
for farg in f_node.function_arguments:
if farg[0] == ci_str:
if farg[2].startswith(('List[', 'Dict[')): # ]]
res += '&'
break
res += self.children[i+1].to_str()
if i < len(self.children)-2:
res += ', '
return res + ')'
elif self.tuple:
res = '('
for i in range(len(self.children)):
res += self.children[i].to_str()
if i < len(self.children)-1:
res += ', '
if len(self.children) == 1:
res += ','
return res + ')'
else:
assert(len(self.children) == 1)
return '(' + self.children[0].to_str() + ')'
elif self.symbol.id == '[': # ]
if self.is_list:
if len(self.children) == 1 and self.children[0].symbol.id == 'for':
return self.children[0].to_str()
res = '['
for i in range(len(self.children)):
res += self.children[i].to_str()
if i < len(self.children)-1:
res += ', '
return res + ']'
elif self.children[0].symbol.id == '{': # }
parenthesis = ('(', ')') if self.parent is not None else ('', '')
res = parenthesis[0] + 'S ' + self.children[1].to_str() + ' {'
for i in range(0, len(self.children[0].children), 2):
res += self.children[0].children[i].to_str() + ' {' + self.children[0].children[i+1].to_str() + '}'
if i < len(self.children[0].children)-2:
res += '; '
return res + '}' + parenthesis[1]
else:
c0 = self.children[0].to_str()
if self.slicing:
if len(self.children) == 2: # `a = b[:]` -> `a = copy(b)`
assert(self.children[1] is None)
return 'copy(' + c0 + ')'
if c0.startswith('bin(') and len(self.children) == 3 and self.children[1].token_str() == '2' and self.children[2] is None: # ) # `bin(x)[2:]` -> `bin(x)`
return c0
if len(self.children) == 4 and self.children[1] is None and self.children[2] is None and self.children[3].symbol.id == '-' and len(self.children[3].children) == 1 and self.children[3].children[0].token_str() == '1': # replace `result[::-1]` with `reversed(result)`
return 'reversed(' + c0 + ')'
def for_negative_bound(c):
child = self.children[c]
if child is None:
return None
r = child.to_str()
if r[0] == '-': # hacky implementation of ‘this rule’[https://docs.python.org/3/reference/simple_stmts.html]:‘If either bound is negative, the sequence's length is added to it.’
r = '(len)' + r
return r
space = ' ' * range_need_space(self.children[1], self.children[2])
fnb2 = for_negative_bound(2)
s = (for_negative_bound(1) or '0') + space + '.' + ('<' + space + fnb2 if fnb2 else '.')
if len(self.children) == 4 and self.children[3] is not None:
s = '(' + s + ').step(' + self.children[3].to_str() + ')'
return c0 + '[' + s + ']'
elif self.children[1].to_str() == '-1':
return c0 + '.last'
else:
c1 = self.children[1].to_str()
return (c0 + '['
+ '(len)'*(c1[0] == '-') # hacky implementation of ‘this rule’[https://docs.python.org/3/reference/simple_stmts.html]:‘the subscript must yield an integer. If it is negative, the sequence's length is added to it.’
+ c1 + ']')
elif self.symbol.id == '{': # }
if len(self.children) == 0:
return 'Dict()'
if self.is_set:
is_not_for = self.children[0].symbol.id != 'for'
res = 'Set(' + '['*is_not_for
for i in range(len(self.children)):
res += self.children[i].to_str()
if i < len(self.children)-1:
res += ', '
return res + ']'*is_not_for + ')'
if self.children[-1].symbol.id == 'for':
assert(len(self.children) == 2)
c = self.children[1]
c2s = c.children[2].to_str()
return 'Dict(' + (c2s[1:-1] if c.children[2].function_call and c.children[2].children[0].token_str() == 'range' else c2s) + ', ' + c.children[1].to_str() + ' -> (' + self.children[0].to_str() + ', ' + c.children[0].to_str() + '))'
res = '['
for i in range(0, len(self.children), 2):
res += self.children[i].to_str() + ' = ' + self.children[i+1].to_str()
if i < len(self.children)-2:
res += ', '
return res + ']'
elif self.symbol.id == 'lambda':
r = '(' if len(self.children) != 3 else ''
for i in range(0, len(self.children)-1, 2):
r += self.children[i].token_str()
if self.children[i+1] is not None:
r += ' = ' + self.children[i+1].to_str()
if i < len(self.children)-3:
r += ', '
if len(self.children) != 3: r += ')'
return r + ' -> ' + self.children[-1].to_str()
elif self.symbol.id == 'for':
if self.children[2].token_str() == 'for': # this is a multiloop
if self.children[2].children[2].token_str() == 'for': # this is a multiloop3
filtered = len(self.children[2].children[2].children) == 4
res = 'multiloop' + '_filtered'*filtered + '(' + self.children[2].children[0].to_str() + ', ' + self.children[2].children[2].children[0].to_str() + ', ' + self.children[2].children[2].children[2].to_str()
fparams = ', (' + self.children[1].token_str() + ', ' + self.children[2].children[1].token_str() + ', ' + self.children[2].children[2].children[1].token_str() + ') -> '
if filtered:
res += fparams + self.children[2].children[2].children[3].to_str()
res += fparams + self.children[0].to_str() + ')'
return res
filtered = len(self.children[2].children) == 4
res = 'multiloop' + '_filtered'*filtered + '(' + self.children[2].children[0].to_str() + ', ' + self.children[2].children[2].to_str()
fparams = ', (' + self.children[1].token_str() + ', ' + self.children[2].children[1].token_str() + ') -> '
if filtered:
res += fparams + self.children[2].children[3].to_str()
res += fparams + self.children[0].to_str() + ')'
return res
res = self.children[2].children[0].children[0].to_str() if self.children[2].symbol.id == '(' and len(self.children[2].children) == 1 and self.children[2].children[0].symbol.id == '.' and len(self.children[2].children[0].children) == 2 and self.children[2].children[0].children[1].token_str() == 'items' else self.children[2].to_str() # )
if len(self.children) == 4:
res += '.filter(' + self.children[1].to_str() + ' -> ' + self.children[3].to_str() + ')'
if self.children[1].to_str() != self.children[0].to_str():
res += '.map(' + self.children[1].to_str() + ' -> ' + self.children[0].to_str() + ')'
return res
elif self.symbol.id == 'not':
if len(self.children) == 1:
if (self.children[0].token.category == Token.Category.OPERATOR_OR_DELIMITER or (self.children[0].token.category == Token.Category.KEYWORD and self.children[0].symbol.id == 'in')) and len(self.children[0].children) == 2:
return '!(' + self.children[0].to_str() + ')'
else:
return '!' + self.children[0].to_str()
else:
assert(len(self.children) == 2)
return self.children[0].to_str() + ' !C ' + self.children[1].to_str()
elif self.symbol.id == 'is':
if self.children[1].token_str() == 'None':
return self.children[0].to_str() + (' != ' if self.is_not else ' == ') + 'N'
return '&' + self.children[0].to_str() + (' != ' if self.is_not else ' == ') + '&' + self.children[1].to_str()
if len(self.children) == 1:
#return '(' + self.symbol.id + self.children[0].to_str() + ')'
return {'~':'(-)'}.get(self.symbol.id, self.symbol.id) + self.children[0].to_str()
elif len(self.children) == 2:
#return '(' + self.children[0].to_str() + ' ' + self.symbol.id + ' ' + self.children[1].to_str() + ')'
if self.symbol.id == '.':
if self.children[0].symbol.id == '{' and self.children[1].token.category == Token.Category.NAME and self.children[1].token.value(source) == 'get': # } # replace `{'and':'&', 'or':'|', 'in':'C'}.get(self.symbol.id, 'symbol-' + self.symbol.id)` with `(S .symbol.id {‘and’ {‘&’}; ‘or’ {‘|’}; ‘in’ {‘C’} E ‘symbol-’(.symbol.id)})`
res = 'S ' + self.parent.children[1].to_str() + ' {'
for i in range(0, len(self.children[0].children), 2):
res += self.children[0].children[i].to_str() + ' {' + self.children[0].children[i+1].to_str() + '}'
if i < len(self.children[0].children)-2:
res += '; '
return res + ' E ' + self.parent.children[3].to_str() + '}'
c1ts = self.children[1].token_str()
if self.children[0].token_str() == 'sys' and c1ts in ('argv', 'exit', 'stdin', 'stdout', 'stderr'):
return ':'*(c1ts != 'exit') + c1ts
if self.children[0].scope_prefix == ':::':
if self.children[0].token_str() in ('math', 'cmath'):
c1 = self.children[1].to_str()
if c1 not in ('e', 'pi'):
if c1 == 'fabs': c1 = 'abs'
return c1
r = self.children[0].token_str() + ':' + self.children[1].to_str()
return {'tempfile:gettempdir': 'fs:get_temp_dir', 'os:path': 'fs:path', 'os:pathsep': 'os:env_path_sep', 'os:sep': 'fs:path:sep', 'os:system': 'os:', 'os:listdir': 'fs:list_dir', 'os:walk': 'fs:walk_dir',
'os:mkdir': 'fs:create_dir', 'os:makedirs': 'fs:create_dirs', 'os:remove': 'fs:remove_file', 'os:rmdir': 'fs:remove_dir', 'os:rename': 'fs:rename',
'time:time': 'Time().unix_time', 'time:sleep': 'sleep', 'datetime:datetime': 'Time', 'datetime:date': 'Time', 'datetime:timedelta': 'TimeDelta', 're:compile': 're:',
'random:random': 'random:'}.get(r, r)
if self.children[0].symbol.id == '.' and self.children[0].children[0].scope_prefix == ':::':
if self.children[0].children[0].token_str() == 'datetime':
if self.children[0].children[1].token_str() == 'datetime':
if self.children[1].token_str() == 'now': # `datetime.datetime.now()` -> `Time()`
return 'Time'
if self.children[1].token_str() == 'fromtimestamp': # `datetime.datetime.fromtimestamp()` -> `time:from_unix_time()`
return 'time:from_unix_time'
if self.children[1].token_str() == 'strptime': # `datetime.datetime.strptime()` -> `time:strptime()`
return 'time:strptime'
if self.children[0].children[1].token_str() == 'date' and self.children[1].token_str() == 'today': # `datetime.date.today()` -> `time:today()`
return 'time:today'
if self.children[0].children[0].token_str() == 'os' and self.children[0].children[1].token_str() == 'path':
r = {'pathsep':'os:env_path_sep', 'isdir':'fs:is_dir', 'isfile':'fs:is_file', 'islink':'fs:is_symlink',
'dirname':'fs:path:dir_name', 'basename':'fs:path:base_name', 'abspath':'fs:path:absolute', 'relpath':'fs:path:relative',
'getsize':'fs:file_size', 'splitext':'fs:path:split_ext'}.get(self.children[1].token_str(), '')
if r != '':
return r
if len(self.children[0].children) == 2 and self.children[0].children[0].scope_prefix == ':::' and self.children[0].children[0].token_str() != 'sys': # for `os.path.join()` [and also take into account `sys.argv.index()`]
return self.children[0].to_str() + ':' + self.children[1].to_str()
if self.children[0].to_str() == 'self':
parent = self
while parent.parent:
if parent.parent.symbol.id == 'for' and id(parent.parent.children[0]) == id(parent):
return '@.' + self.children[1].to_str()
parent = parent.parent
if parent.symbol.id == 'lambda':
if len(parent.children) >= 3 and parent.children[0].token_str() == 'self':
return 'self.' + self.children[1].to_str()
return '@.' + self.children[1].to_str()
ast_parent = parent.ast_parent
function_nesting = 0
while type(ast_parent) != ASTProgram:
if type(ast_parent) == ASTFunctionDefinition:
if len(ast_parent.function_arguments) >= 1 and ast_parent.function_arguments[0][0] == 'self' and type(ast_parent.parent) != ASTClassDefinition:
return 'self.' + self.children[1].to_str()
function_nesting += 1
if function_nesting == 2:
break
elif type(ast_parent) == ASTClassDefinition:
break
ast_parent = ast_parent.parent
return ('@' if function_nesting == 2 else '') + '.' + self.children[1].to_str()
if c1ts == 'days':
return self.children[0].to_str() + '.' + c1ts + '()'
return self.children[0].to_str() + '.' + self.children[1].to_str()
elif self.symbol.id == '+=' and self.children[1].symbol.id == '[' and self.children[1].is_list: # ]
c1 = self.children[1].to_str()
return self.children[0].to_str() + ' [+]= ' + (c1[1:-1] if len(self.children[1].children) == 1 and c1.startswith('[') else c1) # ]
elif self.symbol.id == '+=' and self.children[1].token.value(source) == '1':
return self.children[0].to_str() + '++'
elif self.symbol.id == '-=' and self.children[1].token.value(source) == '1':
return '--' + self.children[0].to_str() if self.parent else self.children[0].to_str() + '--'
elif self.symbol.id == '+=' and ((self.children[0].token.category == Token.Category.NAME and self.children[0].var_type() == 'str')
or (self.children[1].symbol.id == '+' and len(self.children[1].children) == 2 and
(self.children[1].children[0].token.category == Token.Category.STRING_LITERAL
or self.children[1].children[1].token.category == Token.Category.STRING_LITERAL))
or self.children[1].token.category == Token.Category.STRING_LITERAL):
return self.children[0].to_str() + ' ‘’= ' + self.children[1].to_str()
elif self.symbol.id == '+=' and self.children[0].token.category == Token.Category.NAME and self.children[0].var_type() == 'List':
return self.children[0].to_str() + ' [+]= ' + self.children[1].to_str()
elif self.symbol.id == '+' and self.children[1].symbol.id == '*' and self.children[0].token.category == Token.Category.STRING_LITERAL \
and self.children[1].children[1].token.category == Token.Category.STRING_LITERAL: # for `outfile.write('<blockquote'+(ch=='<')*' class="re"'+'>')`
return self.children[0].to_str() + '(' + self.children[1].to_str() + ')'
elif self.symbol.id == '+' and self.children[1].symbol.id == '*' and self.children[1].children[0].token.category == Token.Category.STRING_LITERAL \
and (self.children[0].token.category == Token.Category.STRING_LITERAL
or (self.children[0].symbol.id == '+'
and self.children[0].children[1].token.category == Token.Category.STRING_LITERAL)): # for `outfile.write("<table"+' style="display: inline"'*(prevci != 0 and instr[prevci-1] != "\n")+...)` and `outfile.write('<pre>' + ins + '</pre>' + "\n"*(not self.habr_html))`
return self.children[0].to_str() + '(' + self.children[1].to_str() + ')'
elif self.symbol.id == '+' and self.children[1].token.category == Token.Category.STRING_LITERAL and ((self.children[0].symbol.id == '+'
and self.children[0].children[1].token.category == Token.Category.STRING_LITERAL) # for `outfile.write(... + '<br /></span>' # ... \n + '<div class="spoiler_text" ...')`
or self.children[0].token.category == Token.Category.STRING_LITERAL): # for `pre {margin: 0;}''' + # ... \n '''...`
c0 = self.children[0].to_str()
c1 = self.children[1].to_str()
return c0 + {('"','"'):'‘’', ('"','‘'):'', ('’','‘'):'""', ('’','"'):''}[(c0[-1], c1[0])] + c1
elif self.symbol.id == '+' and (self.children[0].token.category == Token.Category.STRING_LITERAL
or self.children[1].token.category == Token.Category.STRING_LITERAL
or (self.children[0].symbol.id == '+' and self.children[0].children[1].token.category == Token.Category.STRING_LITERAL)):
c1 = self.children[1].to_str()
return self.children[0].to_str() + ('(' + c1 + ')' if c1[0] == '.' else c1)
elif self.symbol.id == '+' and self.children[1].symbol.id == '*' and (self.children[1].children[0].token.category == Token.Category.STRING_LITERAL # for `self.newlines() + ' ' * (indent*3) + 'F ' + ...`
or self.children[1].children[1].token.category == Token.Category.STRING_LITERAL): # for `(... + self.ohd*'</span>')`
p = self.children[0].symbol.id == '*'
return '('*p + self.children[0].to_str() + ')'*p + '‘’(' + self.children[1].to_str() + ')'
elif self.symbol.id == '+' and self.children[0].symbol.id == '*' and self.children[0].children[0].token.category == Token.Category.STRING_LITERAL: # for `' ' * (indent*3) + self.expression.to_str() + "\n"`
c1 = self.children[1].to_str()
return '(' + self.children[0].to_str() + ')‘’' + ('(' + c1 + ')' if c1[0] == '.' else c1)
elif self.symbol.id == '+' and (self.children[0].var_type() == 'str' or self.children[1].var_type() == 'str'):
return self.children[0].to_str() + '‘’' + self.children[1].to_str()
elif self.symbol.id == '+' and (self.children[0].var_type() == 'List' or self.children[1].var_type() == 'List'):
return self.children[0].to_str() + ' [+] ' + self.children[1].to_str()
elif self.symbol.id == '<=' and self.children[0].symbol.id == '<=': # replace `'0' <= ch <= '9'` with `ch C ‘0’..‘9’`
return self.children[0].children[1].to_str() + ' C ' + self.children[0].children[0].to_str() + (' .. ' if range_need_space(self.children[0].children[0], self.children[1]) else '..') + self.children[1].to_str()
elif self.symbol.id == '<' and self.children[0].symbol.id == '<=': # replace `'0' <= ch < '9'` with `ch C ‘0’.<‘9’`
return self.children[0].children[1].to_str() + ' C ' + self.children[0].children[0].to_str() + (' .< ' if range_need_space(self.children[0].children[0], self.children[1]) else '.<') + self.children[1].to_str()
elif self.symbol.id == '<=' and self.children[0].symbol.id == '<' : # replace `'0' < ch <= '9'` with `ch C ‘0’<.‘9’`
return self.children[0].children[1].to_str() + ' C ' + self.children[0].children[0].to_str() + (' <. ' if range_need_space(self.children[0].children[0], self.children[1]) else '<.') + self.children[1].to_str()
elif self.symbol.id == '<' and self.children[0].symbol.id == '<' : # replace `'0' < ch < '9'` with `ch C ‘0’<.<‘9’`
return self.children[0].children[1].to_str() + ' C ' + self.children[0].children[0].to_str() + (' <.< ' if range_need_space(self.children[0].children[0], self.children[1]) else '<.<') + self.children[1].to_str()
elif self.symbol.id == '==' and self.children[0].symbol.id == '(' and self.children[0].children[0].to_str() == 'len' and self.children[1].token.value(source) == '0': # ) # replace `len(arr) == 0` with `arr.empty`
return self.children[0].children[1].to_str() + '.empty'
elif self.symbol.id == '!=' and self.children[0].symbol.id == '(' and self.children[0].children[0].to_str() == 'len' and self.children[1].token.value(source) == '0': # ) # replace `len(arr) != 0` with `!arr.empty`
return '!' + self.children[0].children[1].to_str() + '.empty'
elif self.symbol.id in ('==', '!=') and self.children[1].symbol.id == '.' and len(self.children[1].children) == 2 and self.children[1].children[1].token_str().isupper(): # replace `token.category == Token.Category.NAME` with `token.category == NAME`
#self.skip_find_and_get_prefix = True # this is not needed here because in AST there is still `Token.Category.NAME`, not just `NAME`
return self.children[0].to_str() + ' ' + self.symbol.id + ' ' + self.children[1].children[1].token_str()
elif self.symbol.id in ('==', '!=') and self.children[0].function_call and self.children[0].children[0].token_str() == 'id' and self.children[1].function_call and self.children[1].children[0].token_str() == 'id': # replace `id(a) == id(b)` with `&a == &b`
return '&' + self.children[0].children[1].token_str() + ' ' + self.symbol.id + ' &' + self.children[1].children[1].token_str()
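# Translate printf-style '%' string formatting into an 11l `.format(...)` call with '#'-based
# format specifiers.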
elif self.symbol.id == '%' and self.children[0].token.category == Token.Category.STRING_LITERAL:
add_parentheses = self.children[1].symbol.id != '(' or self.children[1].function_call # )
fmtstr = self.children[0].to_str()
nfmtstr = ''
i = 0
while i < len(fmtstr):
if fmtstr[i] == '#':
nfmtstr += '##'
i += 1
continue
fmtchr = fmtstr[i+1:i+2]
if fmtstr[i] == '%':
if fmtchr == '%':
nfmtstr += '%'
i += 2
elif fmtchr == 'g':
nfmtstr += '#.'
i += 2
else:
nfmtstr += '#'
before_period = 0
after_period = 6
period_pos = 0
i += 1
if fmtstr[i] == '-': # left align
nfmtstr += '<'
i += 1
if fmtstr[i:i+1] == '0' and fmtstr[i+1:i+2].isdigit(): # zero padding
nfmtstr += '0'
while i < len(fmtstr) and fmtstr[i].isdigit():
before_period = before_period*10 + ord(fmtstr[i]) - ord('0')
i += 1
if fmtstr[i:i+1] == '.':
period_pos = i
i += 1
after_period = 0
while i < len(fmtstr) and fmtstr[i].isdigit():
after_period = after_period*10 + ord(fmtstr[i]) - ord('0')
i += 1
if fmtstr[i:i+1] in ('d', 'i'):
if before_period != 0:
nfmtstr += str(before_period)
else:
nfmtstr += '.'#'.0' # `#.0` corresponds to `%.0f` rather than `%i` or `%d`, and `'%i' % (1.7)` = `1`, but `‘#.0’.format(1.7)` = `2`
elif fmtstr[i:i+1] == 's':
if before_period != 0:
nfmtstr += str(before_period)
else:
nfmtstr += '.'
elif fmtstr[i:i+1] == 'f':
if before_period != 0:
b = before_period
if after_period != 0:
b -= after_period + 1
if b > 1:
nfmtstr += str(b)
nfmtstr += '.' + str(after_period)
elif fmtstr[i:i+1] == 'g':
nfmtstr += str(before_period)
if period_pos != 0:
raise Error('precision in %g conversion type is not supported', Token(self.children[0].token.start + period_pos, self.children[0].token.start + i, Token.Category.STRING_LITERAL))
else:
tpos = self.children[0].token.start + i
raise Error('unsupported format character `' + fmtstr[i:i+1] + '`', Token(tpos, tpos, Token.Category.STRING_LITERAL))
i += 1
continue
nfmtstr += fmtstr[i]
i += 1
return nfmtstr + '.format' + '('*add_parentheses + self.children[1].to_str() + ')'*add_parentheses
else:
return self.children[0].to_str() + ' ' + {'and':'&', 'or':'|', 'in':'C', '//':'I/', '//=':'I/=', '**':'^', '**=':'^=', '^':'(+)', '^=':'(+)=', '|':'[|]', '|=':'[|]=', '&':'[&]', '&=':'[&]='}.get(self.symbol.id, self.symbol.id) + ' ' + self.children[1].to_str()
elif len(self.children) == 3:
assert(self.symbol.id == 'if')
c0 = self.children[0].to_str()
if self.children[1].symbol.id == 'is' and self.children[1].is_not and self.children[1].children[1].token.value(source) == 'None' and self.children[1].children[0].to_str() == c0: # replace `a if a is not None else b` with `a ? b`
return c0 + ' ? ' + self.children[2].to_str()
return 'I ' + self.children[1].to_str() + ' {' + c0 + '} E ' + self.children[2].to_str()
return ''
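# Pratt (top-down operator precedence) parser symbols
# symbol() below registers a token id together with its left binding power in symbol_table; ids that look like
# identifiers (e.g. 'lambda', 'if', 'in') are also recorded in allowed_keywords_in_expressions so that next_token()
# allows them to appear inside expressions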
symbol_table : Dict[str, SymbolBase] = {}
allowed_keywords_in_expressions : List[str] = []
def symbol(id, bp = 0):
try:
s = symbol_table[id]
except KeyError:
s = SymbolBase()
s.id = id
s.lbp = bp
symbol_table[id] = s
if id[0].isalpha(): # this is keyword-in-expression
assert(id.isalpha())
allowed_keywords_in_expressions.append(id)
else:
s.lbp = max(bp, s.lbp)
return s
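# AST statement nodes
# each ASTNode subclass below represents one kind of Python statement and renders itself as 11l source via to_str(indent);
# ASTNodeWithChildren manages nested statement blocks (emitted with 3-space indentation), ASTNodeWithExpression wraps a parsed SymbolNode expression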
class ASTNode:
parent : 'ASTNode'
def walk_expressions(self, f):
pass
def walk_children(self, f):
pass
class ASTNodeWithChildren(ASTNode):
# children : List['ASTNode'] = [] # OMFG! This actually means a static variable (common to all objects of type ASTNode), not a default value of the member variable; that was unexpected to me as it contradicts C++11 behavior
children : List['ASTNode']
tokeni : int
def __init__(self):
self.children = []
self.tokeni = tokeni
def walk_children(self, f):
for child in self.children:
f(child)
def children_to_str(self, indent, t):
r = ''
if self.tokeni > 0:
ti = self.tokeni - 1
while ti > 0 and tokens[ti].category in (Token.Category.DEDENT, Token.Category.STATEMENT_SEPARATOR):
ti -= 1
r = (min(source[tokens[ti].end:tokens[self.tokeni].start].count("\n"), 2) - 1) * "\n"
r += ' ' * (indent*3) + t + "\n"
for c in self.children:
r += c.to_str(indent+1)
return r
class ASTNodeWithExpression(ASTNode):
expression : SymbolNode
def set_expression(self, expression):
self.expression = expression
self.expression.ast_parent = self
def walk_expressions(self, f):
f(self.expression)
class ASTProgram(ASTNodeWithChildren):
imported_modules : List[str] = None
def to_str(self):
r = ''
for c in self.children:
r += c.to_str(0)
return r
class ASTImport(ASTNode):
def __init__(self):
self.modules = []
def to_str(self, indent):
return ' ' * (indent*3) + '//import ' + ', '.join(self.modules) + "\n" # this is easier than avoiding an extra empty line here: `import sys\n\ndef f()` -> `\nF f()`
class ASTExpression(ASTNodeWithExpression):
def to_str(self, indent):
return ' ' * (indent*3) + self.expression.to_str() + "\n"
class ASTExprAssignment(ASTNodeWithExpression):
add_vars : List[bool]
drop_list = False
is_tuple_assign_expression = False
dest_expression : SymbolNode
additional_dest_expressions : List[SymbolNode]
def __init__(self):
# self.add_vars = [] # this is not necessary
self.additional_dest_expressions = []
def set_dest_expression(self, dest_expression):
self.dest_expression = dest_expression
self.dest_expression.ast_parent = self
def to_str(self, indent):
if type(self.parent) == ASTClassDefinition:
assert(len(self.add_vars) == 1 and self.add_vars[0] and not self.is_tuple_assign_expression)
return ' ' * (indent*3) + self.dest_expression.to_str() + ' = ' + self.expression.to_str() + "\n"
if self.dest_expression.slicing:
s = self.dest_expression.to_str() # [
if s.endswith(']') and self.expression.function_call and self.expression.children[0].token_str() == 'reversed' and self.expression.children[1].to_str() == s:
l = len(self.dest_expression.children[0].to_str())
return ' ' * (indent*3) + s[:l] + '.reverse_range(' + s[l+1:-1] + ")\n"
raise Error('slice assignment is not supported', self.dest_expression.left_to_right_token())
if self.drop_list:
return ' ' * (indent*3) + self.dest_expression.to_str() + ".drop()\n"
if self.dest_expression.tuple and len(self.dest_expression.children) == 2 and \
self. expression.tuple and len(self. expression.children) == 2 and \
self.dest_expression.children[0].to_str() == self.expression.children[1].to_str() and \
self.dest_expression.children[1].to_str() == self.expression.children[0].to_str():
return ' ' * (indent*3) + 'swap(&' + self.dest_expression.children[0].to_str() + ', &' + self.dest_expression.children[1].to_str() + ")\n"
if self.is_tuple_assign_expression or not any(self.add_vars):
r = ' ' * (indent*3) + self.dest_expression.to_str()
for ade in self.additional_dest_expressions:
r += ' = ' + ade.to_str()
return r + ' = ' + self.expression.to_str() + "\n"
if all(self.add_vars):
if self.expression.function_call and self.expression.children[0].token_str() == 'ref':
assert(len(self.expression.children) == 3)
return ' ' * (indent*3) + 'V& ' + self.dest_expression.to_str() + ' = ' + self.expression.children[1].to_str() + "\n"
return ' ' * (indent*3) + 'V ' + self.dest_expression.to_str() + ' = ' + self.expression.to_str() + "\n"
assert(self.dest_expression.tuple and len(self.dest_expression.children) == len(self.add_vars))
r = ' ' * (indent*3) + '('
for i in range(len(self.add_vars)):
if self.add_vars[i]:
r += 'V '
assert(self.dest_expression.children[i].token.category == Token.Category.NAME)
r += self.dest_expression.children[i].token_str()
if i < len(self.add_vars)-1:
r += ', '
return r + ') = ' + self.expression.to_str() + "\n"
def walk_expressions(self, f):
f(self.dest_expression)
super().walk_expressions(f)
class ASTAssert(ASTNodeWithExpression):
expression2 : SymbolNode = None
def set_expression2(self, expression2):
self.expression2 = expression2
self.expression2.ast_parent = self
def to_str(self, indent):
return ' ' * (indent*3) + 'assert(' + (self.expression.children[0].to_str() if self.expression.symbol.id == '(' and not self.expression.tuple and not self.expression.function_call # )
else self.expression.to_str()) + (', ' + self.expression2.to_str() if self.expression2 is not None else '') + ")\n"
def walk_expressions(self, f):
if self.expression2 is not None: f(self.expression2)
super().walk_expressions(f)
python_types_to_11l = {'&':'&', 'int':'Int', 'float':'Float', 'complex':'Complex', 'str':'String', 'Char':'Char', 'Int64':'Int64', 'UInt32':'UInt32', 'Byte':'Byte', 'bool':'Bool', 'None':'N', 'List':'', 'Tuple':'Tuple', 'Dict':'Dict', 'DefaultDict':'DefaultDict', 'Set':'Set', 'IO[str]': 'File',
'datetime.date':'Time', 'datetime.datetime':'Time'}
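# trans_type() converts a Python type annotation (given as a string) into its 11l spelling using the table above,
# e.g. 'List[int]' -> '[Int]', 'Dict[str, int]' -> '[String = Int]', 'Tuple[int, float]' -> '(Int, Float)',
# 'Callable[[], str]' -> '(() -> String)'; names not in the table are looked up in the current scope and must refer to a class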
def trans_type(ty, scope, type_token):
if ty[0] in '\'"':
assert(ty[-1] == ty[0])
ty = ty[1:-1]
t = python_types_to_11l.get(ty)
if t is not None:
return t
else:
p = ty.find('[')
if p != -1:
assert(ty[-1] == ']')
i = p + 1
s = i
nesting_level = 0
types = []
while True:
if ty[i] == '[':
nesting_level += 1
elif ty[i] == ']':
if nesting_level == 0:
assert(i == len(ty)-1)
types.append(trans_type(ty[s:i], scope, type_token))
break
nesting_level -= 1
elif ty[i] == ',':
if nesting_level == 0: # ignore inner commas
if ty[s:i] == '[]' and ty.startswith('Callable['): # ] # for `Callable[[], str]`
types.append('()')
else:
types.append(trans_type(ty[s:i], scope, type_token))
i += 1
while ty[i] == ' ':
i += 1
s = i
#continue # this is not necessary here
i += 1
if ty.startswith('Tuple['): # ]
return '(' + ', '.join(types) + ')'
if ty.startswith('Dict['): # ]
assert(len(types) == 2)
return '[' + types[0] + ' = ' + types[1] + ']'
if ty.startswith('Callable['): # ]
assert(len(types) == 2)
return '(' + types[0] + ' -> ' + types[1] + ')'
if p == 0: # for `Callable`
assert(len(types) != 0)
parens = len(types) > 1
return '('*parens + ', '.join(types) + ')'*parens
return trans_type(ty[:p], scope, type_token) + '[' + ', '.join(types) + ']'
assert(ty.find(',') == -1)
if '.' in ty: # for `category : Token.Category`
return ty # [-TODO: generalize-]
id = scope.find(ty)
if id is None:
raise Error('class `' + ty + '` is not defined', type_token)
if id.type != '(Class)':
raise Error('`' + ty + '`: expected a class name (got variable' + (' of type `' + id.type + '`' if id.type != '' else '') + ')', type_token)
return ty + '&'*id.node.is_inout
class ASTTypeHint(ASTNode):
var : str
type : str
type_args : List[str]
scope : Scope
type_token : Token
is_reference = False
def __init__(self):
self.scope = scope
def trans_type(self, ty):
return trans_type(ty, self.scope, self.type_token)
def to_str_(self, indent, nullable = False):
if self.type == 'Callable':
if self.type_args[0] == '':
args = '()'
else:
tt = self.type_args[0].split(',')
args = ', '.join(self.trans_type(ty) for ty in tt)
if len(tt) > 1:
args = '(' + args + ')'
return ' ' * (indent*3) + '(' + args + ' -> ' + self.trans_type(self.type_args[1]) + ') ' + self.var
elif self.type == 'Optional':
assert(len(self.type_args) == 1)
return ' ' * (indent*3) + self.trans_type(self.type_args[0]) + ('& ' if self.is_reference else '? ') + self.var
return ' ' * (indent*3) + self.trans_type(self.type + ('[' + ', '.join(self.type_args) + ']' if len(self.type_args) else '')) + '?'*nullable + '&'*self.is_reference + ' ' + self.var
def to_str(self, indent):
return self.to_str_(indent) + "\n"
class ASTAssignmentWithTypeHint(ASTTypeHint, ASTNodeWithExpression):
def to_str(self, indent):
if self.type == 'DefaultDict':
assert(self.expression.function_call and self.expression.children[0].to_str() == 'collections:defaultdict')
return super().to_str(indent)
expression_str = self.expression.to_str()
if expression_str == 'N':
return super().to_str_(indent, True) + "\n"
return super().to_str_(indent) + (' = ' + expression_str if expression_str not in ('[]', 'Dict()') else '') + "\n"
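# ASTFunctionDefinition translates a Python `def` into an 11l `F`; function_arguments stores (name, default, type, qualifier)
# tuples, and VirtualCategory records how a method relates to a base-class method (detected while parsing class bodies below)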
class ASTFunctionDefinition(ASTNodeWithChildren):
function_name : str
function_return_type : str = ''
is_const = False
function_arguments : List[Tuple[str, str, str, str]]# = [] # (arg_name, default_value, type_name, qualifier)
first_named_only_argument = None
class VirtualCategory(IntEnum):
NO = 0
NEW = 1
OVERRIDE = 2
ABSTRACT = 3
ASSIGN = 4
virtual_category = VirtualCategory.NO
scope : Scope
def __init__(self):
super().__init__()
self.function_arguments = []
self.scope = scope
def serialize_to_dict(self):
return {'function_arguments': ['; '.join(arg) for arg in self.function_arguments]}
def deserialize_from_dict(self, d):
self.function_arguments = [arg.split('; ') for arg in d['function_arguments']]
def to_str(self, indent):
if self.function_name in ('move', 'copy', 'ref') and type(self.parent) == ASTProgram:
assert(len(self.function_arguments) == 1)
return ''
fargs = []
for arg in self.function_arguments:
farg = ''
default_value = arg[1]
if arg[2] != '':
ty = trans_type(arg[2], self.scope, tokens[self.tokeni])
# if ty.endswith('&'): # fix error ‘expected function's argument name’ at `F trazar(Rayo& =r; prof)` (when there was `r = ...` instead of `rr = ...`)
# arg = (arg[0].lstrip('='), arg[1], arg[2])
farg += ty
if default_value == 'N':
farg += '?'
assert(arg[3] == '')
farg += ' '
if ty.startswith(('Array[', '[', 'Dict[', 'DefaultDict[')) or arg[3] == '&': # ]]]]
farg += '&'
else:
if arg[3] == '&':
farg += '&'
farg += arg[0] + ('' if default_value == '' else ' = ' + default_value)
fargs.append((farg, arg[2] != ''))
if self.first_named_only_argument is not None:
fargs.insert(self.first_named_only_argument, ("'", fargs[self.first_named_only_argument][1]))
if len(self.function_arguments) and self.function_arguments[0][0] == 'self' and type(self.parent) == ASTClassDefinition:
fargs.pop(0)
fargs_str = ''
if len(fargs):
fargs_str = fargs[0][0]
prev_type = fargs[0][1]
for farg in fargs[1:]:
fargs_str += ('; ' if prev_type and not farg[1] else ', ') + farg[0]
prev_type = farg[1]
if self.virtual_category == self.VirtualCategory.ABSTRACT:
return ' ' * (indent*3) + 'F.virtual.abstract ' + self.function_name + '(' + fargs_str + ') -> ' + trans_type(self.function_return_type, self.scope, tokens[self.tokeni]) + "\n"
return self.children_to_str(indent, ('F', 'F.virtual.new', 'F.virtual.override', '', 'F.virtual.assign')[self.virtual_category] + '.const'*self.is_const + ' ' +
{'__init__':'', '__call__':'()', '__and__':'[&]', '__lt__':'<', '__eq__':'==', '__add__':'+', '__sub__':'-', '__mul__':'*', '__str__':'String'}.get(self.function_name, self.function_name)
+ '(' + fargs_str + ')'
+ ('' if self.function_return_type == '' else ' -> ' + trans_type(self.function_return_type, self.scope, tokens[self.tokeni])))
class ASTIf(ASTNodeWithChildren, ASTNodeWithExpression):
else_or_elif : ASTNode = None
def walk_expressions(self, f):
super().walk_expressions(f)
if self.else_or_elif is not None and isinstance(self.else_or_elif, ASTElseIf):
self.else_or_elif.walk_expressions(f)
def walk_children(self, f):
super().walk_children(f)
if self.else_or_elif is not None:
self.else_or_elif.walk_children(f)
def to_str(self, indent):
return self.children_to_str(indent, 'I ' + self.expression.to_str()) + (self.else_or_elif.to_str(indent) if self.else_or_elif is not None else '')
class ASTElse(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str(indent, 'E')
class ASTElseIf(ASTNodeWithChildren, ASTNodeWithExpression):
else_or_elif : ASTNode = None
def walk_expressions(self, f):
super().walk_expressions(f)
if self.else_or_elif is not None and isinstance(self.else_or_elif, ASTElseIf):
self.else_or_elif.walk_expressions(f)
def walk_children(self, f):
super().walk_children(f)
if self.else_or_elif is not None:
self.else_or_elif.walk_children(f)
def to_str(self, indent):
return self.children_to_str(indent, 'E I ' + self.expression.to_str()) + (self.else_or_elif.to_str(indent) if self.else_or_elif is not None else '')
class ASTSwitch(ASTNodeWithExpression):
class Case(ASTNodeWithChildren, ASTNodeWithExpression):
def __init__(self):
super().__init__()
self.tokeni = 0
cases : List[Case]
def __init__(self):
self.cases = []
def walk_children(self, f):
for case in self.cases:
f(case)
def to_str(self, indent):
r = ' ' * (indent*3) + 'S ' + self.expression.to_str() + "\n"
for case in self.cases:
r += case.children_to_str(indent + 1, 'E' if case.expression.token_str() == 'E' else case.expression.to_str())
return r
class ASTWhile(ASTNodeWithChildren, ASTNodeWithExpression):
def to_str(self, indent):
return self.children_to_str(indent, 'L' if self.expression.token.category == Token.Category.CONSTANT and self.expression.token.value(source) == 'True' else 'L ' + self.expression.to_str())
class ASTFor(ASTNodeWithChildren, ASTNodeWithExpression):
was_no_break : ASTNodeWithChildren = None
loop_variables : List[str]
os_walk = False
dir_filter = None
def walk_children(self, f):
super().walk_children(f)
if self.was_no_break is not None:
self.was_no_break.walk_children(f)
def to_str(self, indent):
if self.os_walk:
dir_filter = ''
if self.dir_filter is not None:
dir_filter = ", dir_filter' " + self.dir_filter # (
return self.children_to_str(indent, 'L(_fname) ' + self.expression.to_str()[:-1] + dir_filter + ", files_only' 0B)\n"
+ ' ' * ((indent+1)*3) + 'V ' + self.loop_variables[0] + " = fs:path:dir_name(_fname)\n"
+ ' ' * ((indent+1)*3) + '[String] ' + self.loop_variables[1] + ', ' + self.loop_variables[2] + "\n"
+ ' ' * ((indent+1)*3) + 'I fs:is_dir(_fname) {' + self.loop_variables[1] + ' [+]= fs:path:base_name(_fname)} E ' + self.loop_variables[2] + ' [+]= fs:path:base_name(_fname)')
if len(self.loop_variables) == 1:
r = 'L(' + self.loop_variables[0] + ') ' + (self.expression.children[1].to_str()
if self.expression.function_call and self.expression.children[0].token_str() == 'range' and # `L(i) 100` instead of `L(i) 0.<100`
len(self.expression.children) == 3 and self.expression.children[1].token.category == Token.Category.NUMERIC_LITERAL else self.expression.to_str())
if self.expression.token.category == Token.Category.NAME:
sid = self.expression.scope.find(self.expression.token_str())
if sid.type in ('Dict', 'DefaultDict'):
r += '.keys()'
elif self.expression.symbol.id == '(' and len(self.expression.children) == 1 and self.expression.children[0].symbol.id == '.' and len(self.expression.children[0].children) == 2 and self.expression.children[0].children[1].token_str() == 'items': # )
r = 'L(' + ', '.join(self.loop_variables) + ') ' + self.expression.children[0].children[0].to_str()
else:
r = 'L(' + ', '.join(self.loop_variables) + ') ' + self.expression.to_str()
# r = 'L(' + ''.join(self.loop_variables) + ') ' + self.expression.to_str()
# for index, loop_var in enumerate(self.loop_variables):
# r += "\n" + ' ' * ((indent+1)*3) + 'V ' + loop_var + ' = ' + ''.join(self.loop_variables) + '[' + str(index) + ']'
r = self.children_to_str(indent, r)
if self.was_no_break is not None:
r += self.was_no_break.children_to_str(indent, 'L.was_no_break')
return r
class ASTContinue(ASTNode):
def to_str(self, indent):
return ' ' * (indent*3) + "L.continue\n"
class ASTBreak(ASTNode):
def to_str(self, indent):
return ' ' * (indent*3) + "L.break\n"
class ASTReturn(ASTNodeWithExpression):
def to_str(self, indent):
return ' ' * (indent*3) + 'R' + (' ' + self.expression.to_str() if self.expression is not None else '') + "\n"
def walk_expressions(self, f):
if self.expression is not None: f(self.expression)
class ASTException(ASTNodeWithExpression):
def to_str(self, indent):
return ' ' * (indent*3) + 'X ' + self.expression.to_str() + "\n"
class ASTExceptionTry(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str(indent, 'X.try')
class ASTExceptionCatch(ASTNodeWithChildren):
exception_object_type : str
exception_object_name : str = ''
def to_str(self, indent):
return self.children_to_str(indent, 'X.catch' + (' ' + self.exception_object_type if self.exception_object_type != '' else '')
+ (' ' + self.exception_object_name if self.exception_object_name != '' else ''))
class ASTDel(ASTNodeWithExpression):
def to_str(self, indent):
assert(self.expression.slicing and len(self.expression.children) == 3)
return ' ' * (indent*3) + self.expression.children[0].to_str() + '.del(' + self.expression.children[1].to_str() + ' .< ' + self.expression.children[2].to_str() + ")\n"
class ASTClassDefinition(ASTNodeWithChildren):
base_class_name : str = None
base_class_node : 'ASTClassDefinition' = None
class_name : str
is_inout = False
def find_member_including_base_classes(self, name):
for child in self.children:
if isinstance(child, ASTTypeHint) and child.var == name:
return True
if self.base_class_node is not None:
return self.base_class_node.find_member_including_base_classes(name)
return False
def to_str(self, indent):
if self.base_class_name == 'IntEnum':
r = ' ' * (indent*3) + 'T.enum ' + self.class_name + "\n"
current_index = 0
for c in self.children:
assert(type(c) == ASTExprAssignment and c.expression.token.category == Token.Category.NUMERIC_LITERAL)
r += ' ' * ((indent+1)*3) + c.dest_expression.to_str()
if current_index != int(c.expression.token_str()):
current_index = int(c.expression.token_str())
r += ' = ' + c.expression.token_str()
current_index += 1
r += "\n"
return r
return self.children_to_str(indent, 'T ' + self.class_name + ('(' + self.base_class_name + ')' if self.base_class_name and self.base_class_name != 'Exception' else ''))
class ASTPass(ASTNode):
def to_str(self, indent):
return ' ' * ((indent-1)*3) + "{\n"\
+ ' ' * ((indent-1)*3) + "}\n"
class ASTStart(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str(indent-1, ':start:')
class Error(Exception):
def __init__(self, message, token):
self.message = message
self.pos = token.start
self.end = token.end
def next_token(): # why ‘next_token’: >[https://youtu.be/Nlqv6NtBXcA?t=1203]:‘we'll have an advance method which will fetch the next token’
global token, tokeni, tokensn
if token is None and tokeni != -1:
raise Error('no more tokens', Token(len(source), len(source), Token.Category.STATEMENT_SEPARATOR))
tokeni += 1
if tokeni == len(tokens):
token = None
tokensn = None
else:
token = tokens[tokeni]
tokensn = SymbolNode(token)
if token.category != Token.Category.INDENT:
if token.category != Token.Category.KEYWORD or token.value(source) in allowed_keywords_in_expressions:
key : str
if token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL):
key = '(literal)'
elif token.category == Token.Category.NAME:
key = '(name)'
if token.value(source) in ('V', 'C', 'I', 'E', 'F', 'L', 'N', 'R', 'S', 'T', 'X', 'var', 'fn', 'loop', 'null', 'switch', 'type', 'exception', 'sign'):
tokensn.token_str_override = '_' + token.value(source).lower() + '_'
elif token.category == Token.Category.CONSTANT:
key = '(constant)'
elif token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT):
key = ';'
else:
key = token.value(source)
tokensn.symbol = symbol_table[key]
def advance(value):
if token.value(source) != value:
raise Error('expected `' + value + '`', token)
next_token()
def peek_token(how_much = 1):
return tokens[tokeni+how_much] if tokeni+how_much < len(tokens) else Token()
# This implementation is based on [http://svn.effbot.org/public/stuff/sandbox/topdown/tdop-4.py]
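# expression(rbp) is the classic TDOP loop: the current token's nud() produces the left operand, then, while the next
# token's lbp exceeds rbp, its led() extends the tree; e.g. in `a + b * c` the `*` (lbp 120) binds tighter than `+` (lbp 110)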
def expression(rbp = 0):
def check_tokensn():
if tokensn.symbol is None:
raise Error('no symbol corresponding to token `' + token.value(source) + '` (belonging to ' + str(token.category) +') found while parsing expression', token)
check_tokensn()
t = tokensn
next_token()
check_tokensn()
left = t.symbol.nud(t)
while rbp < tokensn.symbol.lbp:
t = tokensn
next_token()
left = t.symbol.led(t, left)
check_tokensn()
return left
def infix(id, bp):
def led(self, left):
self.append_child(left)
self.append_child(expression(self.symbol.led_bp))
return self
symbol(id, bp).set_led_bp(bp, led)
def infix_r(id, bp):
def led(self, left):
self.append_child(left)
self.append_child(expression(self.symbol.led_bp - 1))
return self
symbol(id, bp).set_led_bp(bp, led)
def prefix(id, bp):
def nud(self):
self.append_child(expression(self.symbol.nud_bp))
return self
symbol(id).set_nud_bp(bp, nud)
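# operator registrations; binding powers follow Python's precedence: augmented assignments 10, lambda/ternary 20,
# `or` 30, `and` 40, `not` 50, comparisons/`in`/`is` 60, `|` 70, `^` 80, `&` 90, shifts 100, `+`/`-` 110,
# `*`/`/`/`//`/`%` 120, unary operators 130, `**` 140, attribute access/call/subscript 150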
symbol("lambda", 20)
symbol("if", 20); symbol("else") # ternary form
infix_r("or", 30); infix_r("and", 40); prefix("not", 50)
infix("in", 60); infix("not", 60) # not in
infix("is", 60);
infix("<", 60); infix("<=", 60)
infix(">", 60); infix(">=", 60)
infix("<>", 60); infix("!=", 60); infix("==", 60)
infix("|", 70); infix("^", 80); infix("&", 90)
infix("<<", 100); infix(">>", 100)
infix("+", 110); infix("-", 110)
infix("*", 120); infix("/", 120); infix("//", 120)
infix("%", 120)
prefix("-", 130); prefix("+", 130); prefix("~", 130)
infix_r("**", 140)
symbol(".", 150); symbol("[", 150); symbol("(", 150); symbol(")"); symbol("]")
infix_r('+=', 10); infix_r('-=', 10); infix_r('*=', 10); infix_r('/=', 10); infix_r('//=', 10); infix_r('%=', 10); infix_r('>>=', 10); infix_r('<<=', 10); infix_r('**=', 10); infix_r('|=', 10); infix_r('^=', 10); infix_r('&=', 10)
symbol("(name)").nud = lambda self: self
symbol("(literal)").nud = lambda self: self
symbol('(constant)').nud = lambda self: self
#symbol("(end)")
symbol(';')
symbol(',')
def led(self, left):
if token.category != Token.Category.NAME:
raise Error('expected an attribute name', token)
self.append_child(left)
self.append_child(tokensn)
next_token()
return self
symbol('.').led = led
def led(self, left):
self.function_call = True
self.append_child(left) # (
if token.value(source) != ')':
while True:
if token.value(source) == '*': # >[https://stackoverflow.com/a/19525681/2692494 <- google:‘python iterable unpacking precedence’]:‘The unpacking `*` is not an operator; it's part of the call syntax.’
if len(self.children) != 1:
raise Error('iterable unpacking is supported only in first argument', token)
if not (left.token.category == Token.Category.NAME and left.token_str() == 'print'):
raise Error('iterable unpacking is supported only for `print()` function', token)
self.iterable_unpacking = True
next_token()
self.append_child(expression())
if token.value(source) == '=':
next_token()
self.append_child(expression())
else:
self.children.append(None)
if token.value(source) != ',':
break
advance(',') # (
advance(')')
return self
symbol('(').led = led
def nud(self):
comma = False # ((
if token.value(source) != ')':
while True:
if token.value(source) == ')':
break
self.append_child(expression())
if token.value(source) != ',':
break
comma = True
advance(',')
advance(')')
if len(self.children) == 0 or comma:
self.tuple = True
return self
symbol('(').nud = nud # )
def led(self, left):
self.append_child(left)
if token.value(source) == ':':
self.slicing = True
self.children.append(None)
next_token() # [
if token.value(source) != ']': # for `arr[:]`
if token.value(source) == ':':
self.children.append(None)
next_token()
self.append_child(expression())
else:
self.append_child(expression())
if token.value(source) == ':':
next_token()
self.append_child(expression())
else:
self.append_child(expression())
if token.value(source) == ':':
self.slicing = True
next_token() # [[
if token.value(source) != ']':
if token.value(source) == ':':
self.children.append(None)
next_token()
self.append_child(expression())
else:
self.append_child(expression())
if token.value(source) == ':':
next_token()
self.append_child(expression())
else:
self.children.append(None)
advance(']')
return self
symbol('[').led = led
def nud(self):
self.is_list = True
while True: # [
if token.value(source) == ']':
break
self.append_child(expression())
if token.value(source) != ',':
break
advance(',')
advance(']')
return self
symbol('[').nud = nud # ]
def nud(self): # {{{{
if token.value(source) != '}':
while True:
if token.value(source) == '}':
break
self.append_child(expression())
if token.value(source) != ':':
self.is_set = True
while True:
if token.value(source) != ',':
break
advance(',')
if token.value(source) == '}':
break
self.append_child(expression())
break
advance(':')
self.append_child(expression())
if self.children[-1].symbol.id == 'for':
for_scope = self.children[-1].children[0].scope
def set_scope_recursive(sn):
assert(sn.scope == scope)
sn.scope = for_scope
for child in sn.children:
if child is not None:
set_scope_recursive(child)
set_scope_recursive(self.children[0])
break
if token.value(source) != ',':
break
advance(',')
advance('}')
return self
symbol('{').nud = nud
symbol('}')
def led(self, left):
self.append_child(left)
self.append_child(expression())
advance('else')
self.append_child(expression())
return self
symbol('if').led = led
symbol(':'); symbol('='); symbol('->')
def nud(self):
global scope
prev_scope = scope
scope = Scope([])
scope.is_lambda_or_for = True
scope.parent = prev_scope
if token.value(source) != ':':
while True:
if token.category != Token.Category.NAME:
raise Error('expected an argument name', token)
tokensn.scope = scope
scope.add_var(tokensn.token_str())
self.append_child(tokensn)
next_token()
if token.value(source) == '=':
next_token()
self.append_child(expression())
else:
self.children.append(None)
if token.value(source) != ',':
break
advance(',')
advance(':')
self.append_child(expression())
scope = prev_scope
return self
symbol('lambda').nud = nud
def led(self, left):
global scope
prev_scope = scope
scope = for_scope = Scope([])
scope.is_lambda_or_for = True
scope.parent = prev_scope
def set_scope_recursive(sn):
if sn.scope == prev_scope:
sn.scope = scope
elif sn.scope.parent == prev_scope: # for nested list comprehensions
sn.scope.parent = scope
else: # this `sn.scope` was already processed
assert(sn.scope.parent == scope)
for child in sn.children:
if child is not None:
set_scope_recursive(child)
set_scope_recursive(left)
tokensn.scope = scope
scope.add_var(tokensn.token_str())
self.append_child(left)
self.append_child(tokensn)
next_token()
if token.value(source) == ',':
sn = SymbolNode(Token(token.start, token.start, Token.Category.OPERATOR_OR_DELIMITER))
sn.symbol = symbol_table['('] # )
sn.tuple = True
sn.append_child(self.children.pop())
self.append_child(sn)
next_token()
scope.add_var(tokensn.token_str())
sn.append_child(tokensn)
next_token()
if token.value(source) == ',':
next_token()
scope.add_var(tokensn.token_str())
sn.append_child(tokensn)
next_token()
scope = prev_scope
advance('in')
if_lbp = symbol('if').lbp
symbol('if').lbp = 0
self.append_child(expression())
symbol('if').lbp = if_lbp
if token.value(source) == 'if':
scope = for_scope
next_token()
self.append_child(expression())
scope = prev_scope
if self.children[2].token_str() == 'for': # this is a multiloop
for_scope.add_var(self.children[2].children[1].token_str())
def set_scope_recursive(sn):
sn.scope = scope
for child in sn.children:
if child is not None:
set_scope_recursive(child)
set_scope_recursive(self.children[2].children[0])
def set_for_scope_recursive(sn):
sn.scope = for_scope
for child in sn.children:
if child is not None:
set_for_scope_recursive(child)
if self.children[2].children[2].token_str() == 'for': # this is a multiloop3
for_scope.add_var(self.children[2].children[2].children[1].token_str())
if len(self.children[2].children[2].children) == 4:
set_for_scope_recursive(self.children[2].children[2].children[3])
else:
if len(self.children[2].children) == 4:
set_for_scope_recursive(self.children[2].children[3])
return self
symbol('for', 20).led = led
# multitoken operators
def led(self, left):
if token.value(source) != 'in':
raise Error('invalid syntax', token)
next_token()
self.append_child(left)
self.append_child(expression(60))
return self
symbol('not').led = led
def led(self, left):
if token.value(source) == 'not':
next_token()
self.is_not = True
self.append_child(left)
self.append_child(expression(60))
return self
symbol('is').led = led
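# parse_internal() is the statement-level recursive-descent parser: it dispatches on the leading keyword
# (import/def/class/if/while/for/try/...), builds the matching AST* node, opens a nested Scope for each indented block
# via new_scope(), and appends the node to this_node.children; one_line_scope handles forms like `if ...: break`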
def parse_internal(this_node, one_line_scope = False):
global token
def new_scope(node, func_args = None):
if token.value(source) != ':':
raise Error('expected `:`', Token(tokens[tokeni-1].end, tokens[tokeni-1].end, tokens[tokeni-1].category))
next_token()
global scope
prev_scope = scope
scope = Scope(func_args)
scope.parent = prev_scope
if token.category != Token.Category.INDENT: # handling of `if ...: break`, `def ...(...): return ...`, etc.
if one_line_scope:
raise Error('unexpected `:` (only one `:` in one line is allowed)', tokens[tokeni-1])
tokensn.scope = scope # for `if ...: new_var = ...` (though code `if ...: new_var = ...` has no real application, this line is needed to output a correct error message)
parse_internal(node, True)
else:
next_token()
parse_internal(node)
scope = prev_scope
if token is not None:
tokensn.scope = scope
def expected(ch):
if token.value(source) != ch:
raise Error('expected `'+ch+'`', token)
next_token()
def expected_name(what_name):
next_token()
if token.category != Token.Category.NAME:
raise Error('expected ' + what_name, token)
token_value = tokensn.token_str()
next_token()
return token_value
def check_vars_defined(sn : SymbolNode):
if sn.token.category == Token.Category.NAME:
if sn.parent is None or sn.parent.symbol.id != '.' or sn is sn.parent.children[0]: # in `a.b` only `a` [first child] is checked
if not sn.skip_find_and_get_prefix:
sn.scope_prefix = sn.scope.find_and_get_prefix(sn.token_str(), sn.token)
else:
if sn.function_call:
check_vars_defined(sn.children[0])
for i in range(1, len(sn.children), 2):
if sn.children[i+1] is None:
check_vars_defined(sn.children[i])
else:
check_vars_defined(sn.children[i+1]) # checking of named arguments (sn.children[i]) is skipped
else:
for child in sn.children:
if child is not None:
check_vars_defined(child)
while token is not None:
if token.category == Token.Category.KEYWORD:
global scope
if token.value(source) == 'import':
if type(this_node) != ASTProgram:
raise Error('only global import statements are supported', token)
node = ASTImport()
next_token()
while True:
if token.category != Token.Category.NAME:
raise Error('expected module name', token)
module_name = token.value(source)
while peek_token().value(source) == '.':
next_token()
next_token()
if token.category != Token.Category.NAME:
raise Error('expected module name', token)
module_name += '.' + token.value(source)
node.modules.append(module_name)
# Process module [transpile it if necessary]
if module_name not in ('sys', 'tempfile', 'os', 'time', 'datetime', 'math', 'cmath', 're', 'random', 'collections', 'heapq', 'itertools', 'eldf'):
if this_node.imported_modules is not None:
this_node.imported_modules.append(module_name)
module_file_name = os.path.join(os.path.dirname(file_name), module_name.replace('.', '/')).replace('\\', '/') # `os.path.join()` is needed for case when `os.path.dirname(file_name)` is empty string, `replace('\\', '/')` is needed for passing 'tests/parser/errors.txt'
try:
modulefstat = os.stat(module_file_name + '.py')
except FileNotFoundError:
raise Error('can not import module `' + module_name + "`: file '" + module_file_name + ".py' is not found", token)
_11l_file_mtime = 0
if os.path.isfile(module_file_name + '.11l'):
_11l_file_mtime = os.stat(module_file_name + '.11l').st_mtime
modified = _11l_file_mtime == 0 \
or modulefstat.st_mtime > _11l_file_mtime \
or os.stat(__file__).st_mtime > _11l_file_mtime \
or os.stat(os.path.dirname(__file__) + '/tokenizer.py').st_mtime > _11l_file_mtime \
or not os.path.isfile(module_file_name + '.py_global_scope')
if not modified: # check for dependent modules modifications
py_global_scope = eldf.parse(open(module_file_name + '.py_global_scope', encoding = 'utf-8-sig').read())
py_imported_modules = py_global_scope['Imported modules']
for m in py_imported_modules:
if os.stat(os.path.join(os.path.dirname(module_file_name), m.replace('.', '/') + '.py')).st_mtime > _11l_file_mtime:
modified = True
break
if modified:
module_source = open(module_file_name + '.py', encoding = 'utf-8-sig').read()
imported_modules = []
prev_scope = scope
s = parse_and_to_str(tokenizer.tokenize(module_source), module_source, module_file_name + '.py', imported_modules)
modules[module_name] = Module(scope)
open(module_file_name + '.11l', 'w', encoding = 'utf-8', newline = "\n").write(s)
open(module_file_name + '.py_global_scope', 'w', encoding = 'utf-8', newline = "\n").write(eldf.to_eldf(scope.serialize_to_dict(imported_modules)))
scope = prev_scope
if this_node.imported_modules is not None:
this_node.imported_modules.extend(imported_modules)
else:
module_scope = Scope(None)
module_scope.deserialize_from_dict(py_global_scope)
modules[module_name] = Module(module_scope)
if this_node.imported_modules is not None:
this_node.imported_modules.extend(py_imported_modules)
if '.' in module_name:
scope.add_var(module_name.split('.')[0], True, '(Module)')
scope.add_var(module_name, True, '(Module)')
next_token()
if token.value(source) != ',':
break
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'from':
next_token()
assert(token.value(source) in ('typing', 'functools', 'itertools', 'enum', 'copy'))
next_token()
advance('import')
while True:
if token.category != Token.Category.NAME:
raise Error('expected name', token)
next_token()
if token.value(source) != ',':
break
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
continue
elif token.value(source) == 'def':
node = ASTFunctionDefinition()
node.function_name = expected_name('function name')
scope.add_var(node.function_name, True, node = node)
if token.value(source) != '(': # )
raise Error('expected `(` after function name', token) # )(
next_token()
was_default_argument = False
def advance_type():
type_ = token.value(source)
next_token()
if token.value(source) == '[': # ]
nesting_level = 0
while True:
type_ += token.value(source)
if token.value(source) == '[':
next_token()
nesting_level += 1
elif token.value(source) == ']':
next_token()
nesting_level -= 1
if nesting_level == 0:
break
elif token.value(source) == ',':
type_ += ' '
next_token()
else:
if token.category != Token.Category.NAME:
raise Error('expected subtype name', token)
next_token()
return type_
while token.value(source) != ')':
if token.value(source) == '*':
assert(node.first_named_only_argument is None)
node.first_named_only_argument = len(node.function_arguments)
next_token()
advance(',')
continue
if token.category != Token.Category.NAME:
raise Error('expected function\'s argument name', token)
func_arg_name = tokensn.token_str()
next_token()
type_ = ''
qualifier = ''
if token.value(source) == ':': # this is a type hint
next_token()
if token.category == Token.Category.STRING_LITERAL:
type_ = token.value(source)[1:-1]
if token.value(source)[0] == '"': # `def insert(i, n : "Node"):` -> `F insert(i, Node &n)`
qualifier = '&'
next_token()
else:
type_ = advance_type()
if type_ == 'list':
type_ = ''
qualifier = '&'
if token.value(source) == '=':
next_token()
expr = expression()
check_vars_defined(expr)
default = expr.to_str()
was_default_argument = True
else:
if was_default_argument and node.first_named_only_argument is None:
raise Error('non-default argument follows default argument', tokens[tokeni-1])
default = ''
node.function_arguments.append((func_arg_name, default, type_, qualifier)) # ((
if token.value(source) not in ',)':
raise Error('expected `,` or `)` in function\'s arguments list', token)
if token.value(source) == ',':
next_token()
next_token()
if token.value(source) == '->':
next_token()
if token.value(source) == 'None':
node.function_return_type = 'None'
next_token()
else:
node.function_return_type = advance_type()
if source[token.end:token.end+7] == ' # -> &':
node.function_return_type += '&'
elif source[token.end:token.end+8] == ' # const':
node.is_const = True
node.parent = this_node
new_scope(node, map(lambda arg: (arg[0], arg[2]), node.function_arguments))
if len(node.children) == 0: # needed for:
n = ASTPass() # class FileToStringProxy:
n.parent = node # def __init__(self):
node.children.append(n) # self.result = []
# Detect virtual functions and assign `virtual_category`
if type(this_node) == ASTClassDefinition and node.function_name != '__init__':
if this_node.base_class_node is not None:
for child in this_node.base_class_node.children:
if type(child) == ASTFunctionDefinition and child.function_name == node.function_name:
if child.virtual_category == ASTFunctionDefinition.VirtualCategory.NO:
if child.function_return_type == '':
raise Error('please specify return type of virtual function', tokens[child.tokeni])
if len(child.children) and type(child.children[0]) == ASTException and child.children[0].expression.symbol.id == '(' and child.children[0].expression.children[0].token.value(source) == 'NotImplementedError': # )
child.virtual_category = ASTFunctionDefinition.VirtualCategory.ABSTRACT
else:
child.virtual_category = ASTFunctionDefinition.VirtualCategory.NEW
node.virtual_category = ASTFunctionDefinition.VirtualCategory.ASSIGN if child.virtual_category == ASTFunctionDefinition.VirtualCategory.ABSTRACT else ASTFunctionDefinition.VirtualCategory.OVERRIDE
if node.function_return_type == '': # specifying the return type of overridden virtual functions is not necessary; it can be taken from the original virtual function definition
node.function_return_type = child.function_return_type
break
elif token.value(source) == 'class':
node = ASTClassDefinition()
node.class_name = expected_name('class name')
scope.add_var(node.class_name, True, '(Class)', node = node)
if token.value(source) == '(':
node.base_class_name = expected_name('base class name')
if node.base_class_name != 'Exception':
base_class = scope.find(node.base_class_name)
if base_class is None:
raise Error('class `' + node.base_class_name + '` is not defined', tokens[tokeni-1])
if base_class.type != '(Class)':
raise Error('expected a class name', tokens[tokeni-1])
assert(type(base_class.node) == ASTClassDefinition)
node.base_class_node = base_class.node
expected(')')
if source[token.end:token.end+4] == ' # &':
node.is_inout = True
new_scope(node)
elif token.value(source) == 'pass':
node = ASTPass()
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'if':
if peek_token().value(source) == '__name__':
node = ASTStart()
next_token()
next_token()
assert(token.value(source) == '==')
next_token()
assert(token.value(source) in ("'__main__'", '"__main__"'))
next_token()
new_scope(node)
else:
node = ASTIf()
next_token()
node.set_expression(expression())
new_scope(node)
n = node
while token is not None and token.value(source) in ('elif', 'else'):
if token.value(source) == 'elif':
n.else_or_elif = ASTElseIf()
n.else_or_elif.parent = n
n = n.else_or_elif
next_token()
n.set_expression(expression())
new_scope(n)
if token is not None and token.value(source) == 'else':
n.else_or_elif = ASTElse()
n.else_or_elif.parent = n
next_token()
new_scope(n.else_or_elif)
break
elif token.value(source) == 'while':
node = ASTWhile()
next_token()
node.set_expression(expression())
if node.expression.token.category in (Token.Category.CONSTANT, Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL) and node.expression.token.value(source) != 'True':
raise Error('do you mean `while True`?', node.expression.token) # forbid `while 1:`
new_scope(node)
elif token.value(source) == 'for':
node = ASTFor()
next_token()
prev_scope = scope
scope = Scope(None)
scope.parent = prev_scope
node.loop_variables = [tokensn.token_str()]
scope.add_var(node.loop_variables[0], True)
next_token()
while token.value(source) == ',':
next_token()
node.loop_variables.append(tokensn.token_str())
scope.add_var(tokensn.token_str(), True)
next_token()
advance('in')
node.set_expression(expression())
new_scope(node)
scope = prev_scope
if token is not None and token.value(source) == 'else':
node.was_no_break = ASTNodeWithChildren()
node.was_no_break.parent = node
next_token()
new_scope(node.was_no_break)
elif token.value(source) == 'continue':
node = ASTContinue()
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'break':
node = ASTBreak()
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'return':
node = ASTReturn()
next_token()
if token.category in (Token.Category.DEDENT, Token.Category.STATEMENT_SEPARATOR):
node.expression = None
else:
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('nonlocal', 'global'):
nonlocal_or_global = token.value(source)
next_token()
while True:
if token.category != Token.Category.NAME:
raise Error('expected ' + nonlocal_or_global + ' variable name', token)
if nonlocal_or_global == 'nonlocal':
if source[token.end + 1 : token.end + 5] == "# =\n":
scope.nonlocals_copy.add(token.value(source))
else:
scope.nonlocals.add(token.value(source))
else:
scope.globals.add(token.value(source))
next_token()
if token.value(source) == ',':
next_token()
else:
break
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
continue
elif token.value(source) == 'assert':
node = ASTAssert()
next_token()
node.set_expression(expression())
if token.value(source) == ',':
next_token()
node.set_expression2(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'raise':
node = ASTException()
next_token()
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'try':
node = ASTExceptionTry()
next_token()
new_scope(node)
elif token.value(source) == 'except':
node = ASTExceptionCatch()
prev_scope = scope
scope = Scope(None)
scope.parent = prev_scope
if peek_token().value(source) != ':':
node.exception_object_type = expected_name('exception object type name')
while token.value(source) == '.':
node.exception_object_type += ':' + expected_name('type name')
if node.exception_object_type.startswith('self:'):
node.exception_object_type = '.' + node.exception_object_type[5:]
if token.value(source) != ':':
advance('as')
if token.category != Token.Category.NAME:
raise Error('expected exception object name', token)
node.exception_object_name = tokensn.token_str()
scope.add_var(node.exception_object_name, True)
next_token()
else:
next_token()
node.exception_object_type = ''
new_scope(node)
scope = prev_scope
elif token.value(source) == 'del':
node = ASTDel()
next_token()
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
else:
raise Error('unrecognized statement started with keyword', token)
elif token.category == Token.Category.NAME and peek_token().value(source) == '=':
name_token = token
name_token_str = tokensn.token_str()
node = ASTExprAssignment()
node.set_dest_expression(tokensn)
next_token()
next_token()
node.set_expression(expression())
if node.expression.symbol.id == '.' and len(node.expression.children) == 2 and node.expression.children[1].token_str().isupper(): # replace `category = Token.Category.NAME` with `category = NAME`
node.set_expression(node.expression.children[1])
node.expression.parent = None
node.expression.skip_find_and_get_prefix = True # this cannot be replaced with an `isupper()` check before the `find_and_get_prefix()` call because it would conflict with uppercase [constant] variables like `WIDTH` or `HEIGHT` (those variables would not be checked, but they should be)
type_name = ''
if node.expression.token.category == Token.Category.STRING_LITERAL or (node.expression.function_call and node.expression.children[0].token_str() == 'str') \
or (node.expression.symbol.id == '+' and len(node.expression.children) == 2 and (node.expression.children[0].token.category == Token.Category.STRING_LITERAL
or node.expression.children[1].token.category == Token.Category.STRING_LITERAL)):
type_name = 'str'
elif node.expression.var_type() == 'List':
type_name = 'List'
elif node.expression.is_dict():
type_name = 'Dict'
elif node.expression.function_call and node.expression.children[0].symbol.id == '.' and \
node.expression.children[0].children[0].token_str() == 'collections' and \
node.expression.children[0].children[1].token_str() == 'defaultdict':
type_name = 'DefaultDict'
node.add_vars = [scope.add_var(name_token_str, False, type_name, name_token)]
if node.expression.symbol.id == '[' and len(node.expression.children) == 0: # ]
if node.add_vars[0]:
raise Error('please specify type of empty list', Token(node.dest_expression.token.start, node.expression.token.end + 1, Token.Category.NAME))
node.drop_list = True
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)): # `poss_nbors = (x-1,y),(x-1,y+1)`
raise Error('expected end of statement', token) # ^
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
if ((node.dest_expression.token_str() == 'Char' and node.expression.token_str() == 'str') # skip `Char = str` statement
or (node.dest_expression.token_str() == 'Byte' and node.expression.token_str() == 'int') # skip `Byte = int` statement
or (node.dest_expression.token_str() == 'Int64' and node.expression.token_str() == 'int') # skip `Int64 = int` statement
or (node.dest_expression.token_str() == 'UInt64' and node.expression.token_str() == 'int') # skip `UInt64 = int` statement
or (node.dest_expression.token_str() == 'UInt32' and node.expression.token_str() == 'int')): # skip `UInt32 = int` statement
continue
elif token.category == Token.Category.NAME and (peek_token().value(source) == ':' # this is type hint
or (token.value(source) == 'self' and peek_token().value(source) == '.' and peek_token(2).category == Token.Category.NAME)
and peek_token(3).value(source) == ':'):
is_self = peek_token().value(source) == '.'
if is_self:
if not (type(this_node) == ASTFunctionDefinition and this_node.function_name == '__init__'):
raise Error('type annotation for `self.*` is permitted only inside `__init__`', token)
next_token()
next_token()
name_token = token
var = tokensn.token_str()
next_token()
advance(':')
if token.category not in (Token.Category.NAME, Token.Category.STRING_LITERAL):
raise Error('expected type name', token)
type_ = token.value(source) if token.category == Token.Category.NAME else token.value(source)[1:-1]
type_token = token
next_token()
while token.value(source) == '.': # for `category : Token.Category`
type_ += '.' + expected_name('type name')
if is_self:
scope.parent.add_var(var, True, type_, name_token)
else:
scope.add_var(var, True, type_, name_token)
type_args = []
if token.value(source) == '[':
next_token()
while token.value(source) != ']':
if token.value(source) == '[': # for `Callable[[str, int], str]`
next_token()
if token.value(source) == ']': # for `Callable[[], str]`
type_arg = ''
else:
type_arg = token.value(source)
next_token()
while token.value(source) == ',':
next_token()
type_arg += ',' + token.value(source)
next_token() # [
advance(']')
type_args.append(type_arg)
elif peek_token().value(source) == '[': # ] # for `table : List[List[List[str]]] = []` and `empty_list : List[List[str]] = []`
type_arg = token.value(source)
next_token()
nesting_level = 0
while True:
type_arg += token.value(source)
if token.value(source) == '[':
next_token()
nesting_level += 1
elif token.value(source) == ']':
next_token()
nesting_level -= 1
if nesting_level == 0:
break
elif token.value(source) == ',':
type_arg += ' '
next_token()
else:
assert(token.category == Token.Category.NAME)
next_token()
type_args.append(type_arg)
else:
type_args.append(token.value(source))
next_token()
while token.value(source) == '.': # for `datetime.date` in `dates : List[datetime.date] = []`
type_args[-1] += '.' + expected_name('subtype name') # [[
if token.value(source) not in ',]':
raise Error('expected `,` or `]` in type\'s arguments list', token)
if token.value(source) == ',':
next_token()
next_token()
if token is not None and token.value(source) == '=':
node = ASTAssignmentWithTypeHint()
next_token()
node.set_expression(expression())
else:
node = ASTTypeHint()
if source[tokens[tokeni-1].end:tokens[tokeni-1].end+4] == ' # &':
node.is_reference = True
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)):
raise Error('expected end of statement', token)
node.type_token = type_token
node.var = var
node.type = type_
node.type_args = type_args
assert(token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)) # [-replace with `raise Error` and a meaningful error message after the first time this assert is triggered-]
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
if is_self:
node.parent = this_node.parent
this_node.parent.children.append(node)
node.walk_expressions(check_vars_defined)
continue
elif token.category == Token.Category.DEDENT:
next_token()
if token.category == Token.Category.STATEMENT_SEPARATOR: # Token.Category.EOF
next_token()
assert(token is None)
return
else:
node_expression = expression()
if token is not None and token.value(source) == '=':
node = ASTExprAssignment()
if node_expression.token.category == Token.Category.NAME:
assert(False) #node.add_var = scope.add_var(node_expression.token.value(source))
if node_expression.tuple:
node.add_vars = []
for v in node_expression.children:
if v.token.category != Token.Category.NAME:
node.is_tuple_assign_expression = True
break
node.add_vars.append(scope.add_var(v.token_str()))
else:
node.add_vars = [False]
node.set_dest_expression(node_expression)
next_token()
while True:
expr = expression()
if token is not None and token.value(source) == '=':
expr.ast_parent = node
node.additional_dest_expressions.append(expr)
next_token()
else:
node.set_expression(expr)
break
else:
node = ASTExpression()
node.set_expression(node_expression)
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)):
raise Error('expected end of statement', token)
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)): # `(w, h) = int(w1), int(h1)`
raise Error('expected end of statement', token) # ^
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
if (type(node) == ASTExprAssignment and node_expression.token_str() == '.' and node_expression.children[0].token_str() == 'self'
and type(this_node) == ASTFunctionDefinition and this_node.function_name == '__init__'): # only in constructors
assert(type(this_node.parent) == ASTClassDefinition)
found_in_base_class = False
if this_node.parent.base_class_node is not None:
found_in_base_class = this_node.parent.base_class_node.find_member_including_base_classes(node_expression.children[1].token_str())
if not found_in_base_class and scope.parent.add_var(node_expression.children[1].token_str()):
if node.expression.symbol.id == '[' and len(node.expression.children) == 0: # ]
raise Error('please specify type of empty list', Token(node.dest_expression.leftmost(), node.expression.rightmost(), Token.Category.NAME))
node.add_vars = [True]
node.set_dest_expression(node_expression.children[1])
node.parent = this_node.parent
this_node.parent.children.append(node)
node.walk_expressions(check_vars_defined)
continue
elif ((node.expression.symbol.id == '[' and len(node.expression.children) == 0) # ] # skip `self.* = []` because `create_array({})` is meaningless
or (node.expression.symbol.id == '(' and len(node.expression.children) == 1 and node.expression.children[0].token_str() == 'set')): # ) # skip `self.* = set()`
continue
node.walk_expressions(check_vars_defined)
node.parent = this_node
this_node.children.append(node)
if one_line_scope and tokens[tokeni-1].value(source) != ';':
return
return
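# module-level parser state: the current token stream, source text, position, token and active Scope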
tokens = []
source = ''
tokeni = -1
token = Token(0, 0, Token.Category.STATEMENT_SEPARATOR)
scope = Scope(None)
tokensn = SymbolNode(token)
file_name = ''
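# parse_and_to_str() is the transpiler entry point (also called recursively for imported modules, see the `import`
# handling above): it saves and restores the globals above, parses the tokens into an ASTProgram, runs the checks
# and AST transformations defined below, and returns the generated 11l source as a string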
def parse_and_to_str(tokens_, source_, file_name_, imported_modules = None):
if len(tokens_) == 0: return ASTProgram().to_str()
global tokens, source, tokeni, token, scope, tokensn, file_name
prev_tokens = tokens
prev_source = source
prev_tokeni = tokeni
prev_token = token
# prev_scope = scope
prev_tokensn = tokensn
prev_file_name = file_name
tokens = tokens_ + [Token(len(source_), len(source_), Token.Category.STATEMENT_SEPARATOR)]
source = source_
tokeni = -1
token = None
scope = Scope(None)
for pytype in python_types_to_11l:
scope.add_var(pytype)
scope.add_var('IntEnum', True, '(Class)', node = ASTClassDefinition())
file_name = file_name_
next_token()
p = ASTProgram()
p.imported_modules = imported_modules
parse_internal(p)
def check_for_and_or(node):
def f(e : SymbolNode):
if e.symbol.id == 'or' and \
(e.children[0].symbol.id == 'and' or e.children[1].symbol.id == 'and'):
if e.children[0].symbol.id == 'and':
start = e.children[0].children[0].leftmost()
end = e.children[1].rightmost()
midend = e.children[0].children[1].rightmost()
midstart = e.children[0].children[1].leftmost()
else:
start = e.children[0].leftmost()
end = e.children[1].children[1].rightmost()
midend = e.children[1].children[0].rightmost()
midstart = e.children[1].children[0].leftmost()
raise Error("relative precedence of operators `and` and `or` is undetermined; please add parentheses this way:\n`("
+ source[start:midend ] + ')' + source[midend :end] + "`\nor this way:\n`"
+ source[start:midstart] + '(' + source[midstart:end] + ')`', Token(start, end, Token.Category.OPERATOR_OR_DELIMITER))
for child in e.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(check_for_and_or)
check_for_and_or(p)
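# transformations() applies peephole rewrites to the finished AST: an if/elif chain comparing one variable against
# literals becomes an ASTSwitch, `x -= 1` followed by `if x == 0:` becomes `if --x == 0`, loop variables that are
# modified inside the loop body get the `=` qualifier, and `for ... in os.walk(...)` loops get specialized handling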
def transformations(node):
if isinstance(node, ASTNodeWithChildren):
index = 0
while index < len(node.children):
child = node.children[index]
if index < len(node.children) - 1 and type(child) == ASTExprAssignment and child.dest_expression.token.category == Token.Category.NAME and type(node.children[index+1]) == ASTIf and type(node.children[index+1].else_or_elif) == ASTElseIf: # transform if-elif-else chain into switch
if_node = node.children[index+1]
var_name = child.dest_expression.token.value(source)
transformation_possible = True
while True:
if not (if_node.expression.symbol.id == '==' and if_node.expression.children[0].token.category == Token.Category.NAME and if_node.expression.children[0].token.value(source) == var_name
and if_node.expression.children[1].token.category in (Token.Category.STRING_LITERAL, Token.Category.NUMERIC_LITERAL)):
transformation_possible = False
break
if_node = if_node.else_or_elif
if if_node is None or type(if_node) == ASTElse:
break
if transformation_possible:
tid = child.dest_expression.scope.find(var_name)
assert(tid is not None)
found_reference_to_var_name = False
def find_reference_to_var_name(node):
def f(e : SymbolNode):
if e.token.category == Token.Category.NAME and e.token_str() == var_name and id(e.scope.find(var_name)) == id(tid):
nonlocal found_reference_to_var_name
found_reference_to_var_name = True
return
for child in e.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(find_reference_to_var_name)
if_node = node.children[index+1]
while True:
if_node.walk_children(find_reference_to_var_name) # looking for switch variable inside switch statements
if found_reference_to_var_name:
break
if type(if_node) == ASTElse:
break
if_node = if_node.else_or_elif
if if_node is None:
break
if not found_reference_to_var_name:
i = index + 2
while i < len(node.children):
find_reference_to_var_name(node.children[i]) # looking for switch variable after switch
if found_reference_to_var_name:
break
i += 1
switch_node = ASTSwitch()
switch_node.set_expression(child.dest_expression if found_reference_to_var_name else child.expression)
if_node = node.children[index+1]
while True:
case = ASTSwitch.Case()
case.parent = switch_node
case.set_expression(SymbolNode(Token(0, 0, Token.Category.KEYWORD), 'E') if type(if_node) == ASTElse else if_node.expression.children[1])
case.children = if_node.children
for child in case.children:
child.parent = case
switch_node.cases.append(case)
if type(if_node) == ASTElse:
break
if_node = if_node.else_or_elif
if if_node is None:
break
if found_reference_to_var_name:
index += 1
else:
node.children.pop(index)
node.children.pop(index)
node.children.insert(index, switch_node)
switch_node.parent = node
continue # to update child = node.children[index]
if index < len(node.children) - 1 and type(child) == ASTExpression and child.expression.symbol.id == '-=' and child.expression.children[1].token.value(source) == '1' \
and type(node.children[index+1]) == ASTIf and len(node.children[index+1].expression.children) == 2 \
and node.children[index+1].expression.children[0].token.value(source) == child.expression.children[0].token.value(source): # transform `nesting_level -= 1 \n if nesting_level == 0:` into `if --nesting_level == 0`
child.expression.parent = node.children[index+1].expression#.children[0].parent
node.children[index+1].expression.children[0] = child.expression
node.children.pop(index)
continue
if type(child) == ASTFor:
                    if len(child.loop_variables): # detect modification of loop variables, and add the qualifier `=` to the modified ones
lvars = child.loop_variables
found = set()
def detect_lvars_modification(node):
if type(node) == ASTExprAssignment:
nonlocal found
if node.dest_expression.token_str() in lvars:
found.add(node.dest_expression.token_str())
if len(lvars) == 1:
return
elif node.dest_expression.tuple:
for t in node.dest_expression.children:
if t.token_str() in lvars:
found.add(t.token_str())
if len(lvars) == 1:
return
def f(e : SymbolNode):
if e.symbol.id[-1] == '=' and e.symbol.id not in ('==', '!=', '<=', '>=') and e.children[0].token_str() in lvars: # +=, -=, *=, /=, etc.
nonlocal found
found.add(e.children[0].token_str())
node.walk_expressions(f)
node.walk_children(detect_lvars_modification)
detect_lvars_modification(child)
for lvar in found:
lvari = lvars.index(lvar)
child.loop_variables[lvari] = '=' + child.loop_variables[lvari]
if child.expression.symbol.id == '(' and child.expression.children[0].symbol.id == '.' \
and child.expression.children[0].children[0].token_str() == 'os' \
and child.expression.children[0].children[1].token_str() == 'walk': # ) # detect `for ... in os.walk(...)` and remove `dirs[:] = ...` statement
child.os_walk = True
assert(len(child.loop_variables) == 3)
c0 = child.children[0]
if (type(c0) == ASTExprAssignment and c0.dest_expression.symbol.id == '[' # ]
and len(c0.dest_expression.children) == 2
and c0.dest_expression.children[1] is None
and c0.dest_expression.children[0].token_str() == child.loop_variables[1]
and c0.expression.symbol.id == '[' # ]
and len(c0.expression.children) == 1
and c0.expression.children[0].symbol.id == 'for'
and len(c0.expression.children[0].children) == 4
and c0.expression.children[0].children[1].to_str()
== c0.expression.children[0].children[0].to_str()):
child.dir_filter = c0.expression.children[0].children[1].to_str() + ' -> ' + c0.expression.children[0].children[3].to_str()
child.children.pop(0)
elif child.expression.symbol.id == '(' and child.expression.children[0].token_str() == 'enumerate': # )
assert(len(child.loop_variables) == 2)
set_index_node = ASTExprAssignment()
set_index_node.set_dest_expression(SymbolNode(Token(0, 0, Token.Category.NAME), child.loop_variables[0].lstrip('=')))
child.loop_variables.pop(0)
start = ''
if len(child.expression.children) >= 5:
if child.expression.children[4] is not None:
assert(child.expression.children[3].to_str() == 'start')
start = child.expression.children[4].to_str()
else:
start = child.expression.children[3].to_str()
set_index_node.set_expression(SymbolNode(Token(0, 0, Token.Category.NAME), 'L.index' + (' + ' + start if start != '' else '')))
set_index_node.add_vars = [True]
set_index_node.parent = child
child.children.insert(0, set_index_node)
child.expression.children[0].parent = child.expression.parent
child.expression.children[0].ast_parent = child.expression.ast_parent
child.expression = child.expression.children[1]
                elif type(child) == ASTFunctionDefinition: # detect modification of the function's arguments inside this function, and add the qualifier `=` to the modified ones
if len(child.function_arguments):
fargs = [farg[0] for farg in child.function_arguments]
found = set()
def detect_arguments_modification(node):
if type(node) == ASTExprAssignment:
nonlocal found
if node.dest_expression.token_str() in fargs:
found.add(node.dest_expression.token_str())
if len(fargs) == 1:
return
elif node.dest_expression.tuple:
for t in node.dest_expression.children:
if t.token_str() in fargs:
found.add(t.token_str())
if len(fargs) == 1:
return
def f(e : SymbolNode):
if e.symbol.id[-1] == '=' and e.symbol.id not in ('==', '!=', '<=', '>=') and e.children[0].token_str() in fargs: # +=, -=, *=, /=, etc.
nonlocal found
found.add(e.children[0].token_str())
node.walk_expressions(f)
node.walk_children(detect_arguments_modification)
detect_arguments_modification(child)
for farg in found:
fargi = fargs.index(farg)
if child.function_arguments[fargi][3] != '&': # if argument already has `&` qualifier, then qualifier `=` is not needed
child.function_arguments[fargi] = ('=' + child.function_arguments[fargi][0], child.function_arguments[fargi][1], child.function_arguments[fargi][2], child.function_arguments[fargi][3])
index += 1
node.walk_children(transformations)
transformations(p)
    s = p.to_str() # the call to `to_str()` was moved here [from outside] because it accesses the global variables `source` (via `token.value(source)`) and `tokens` (via `tokens[ti]`)
tokens = prev_tokens
source = prev_source
tokeni = prev_tokeni
token = prev_token
# scope = prev_scope
tokensn = prev_tokensn
file_name = prev_file_name
return s
| 11l | /11l-2021.3-py3-none-any.whl/python_to_11l/parse.py | parse.py |
from typing import List, Tuple
Char = str
from enum import IntEnum
keywords = [ # https://docs.python.org/3/reference/lexical_analysis.html#keywords
'False', 'await', 'else', 'import', 'pass',
'None', 'break', 'except', 'in', 'raise',
'True', 'class', 'finally', 'is', 'return',
'and', 'continue', 'for', 'lambda', 'try',
'as', 'def', 'from', 'nonlocal', 'while',
'assert', 'del', 'global', 'not', 'with',
'async', 'elif', 'if', 'or', 'yield',]
operators = [ # https://docs.python.org/3/reference/lexical_analysis.html#operators
'+', '-', '*', '**', '/', '//', '%', '@',
'<<', '>>', '&', '|', '^', '~',
'<', '>', '<=', '>=', '==', '!=',]
#operators.sort(key = lambda x: len(x), reverse = True)
delimiters = [ # https://docs.python.org/3/reference/lexical_analysis.html#delimiters
'(', ')', '[', ']', '{', '}',
',', ':', '.', ';', '@', '=', '->',
'+=', '-=', '*=', '/=', '//=', '%=', '@=',
'&=', '|=', '^=', '>>=', '<<=', '**=',]
#delimiters.sort(key = lambda x: len(x), reverse = True)
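# sort operators and delimiters by length (longest first) so that the tokenizer always matches the longest one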
operators_and_delimiters = sorted(operators + delimiters, key = lambda x: len(x), reverse = True)
class Error(Exception):
message : str
pos : int
end : int
def __init__(self, message, pos):
self.message = message
self.pos = pos
self.end = pos
class Token:
class Category(IntEnum): # why ‘Category’: >[https://docs.python.org/3/reference/lexical_analysis.html#other-tokens]:‘the following categories of tokens exist’
NAME = 0 # or IDENTIFIER
KEYWORD = 1
CONSTANT = 2
OPERATOR_OR_DELIMITER = 3
NUMERIC_LITERAL = 4
STRING_LITERAL = 5
INDENT = 6 # [https://docs.python.org/3/reference/lexical_analysis.html#indentation][-1]
DEDENT = 7
STATEMENT_SEPARATOR = 8
start : int
end : int
category : Category
def __init__(self, start, end, category):
self.start = start
self.end = end
self.category = category
def __repr__(self):
return str(self.start)
def value(self, source):
return source[self.start:self.end]
def to_str(self, source):
return 'Token('+str(self.category)+', "'+self.value(source)+'")'
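# Splits the source text into a list of Tokens. Besides names, keywords, literals, operators and delimiters,
# it emits INDENT/DEDENT tokens by tracking indentation levels (a tab counts as 8 spaces) and
# STATEMENT_SEPARATOR tokens for `;` and for new lines at the same indentation level; newlines inside
# parentheses/brackets/braces are joined implicitly. Positions of newlines and comments are optionally
# collected into the `newline_chars` and `comments` arguments.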
def tokenize(source, newline_chars : List[int] = None, comments : List[Tuple[int, int]] = None):
tokens : List[Token] = []
indentation_levels : List[int] = []
nesting_elements : List[Tuple[Char, int]] = [] # parentheses, square brackets or curly braces
begin_of_line = True
expected_an_indented_block = False
i = 0
while i < len(source):
if begin_of_line: # at the beginning of each line, the line's indentation level is compared to the last of the indentation_levels [:1]
begin_of_line = False
linestart = i
indentation_level = 0
while i < len(source):
if source[i] == ' ':
indentation_level += 1
elif source[i] == "\t":
                    indentation_level += 8 # consider a tab as just 8 spaces (I know that Python 3 uses different rules, but I disagree with the Python 3 approach ([-1]:‘Tabs are replaced (from left to right) by one to eight spaces’), so I decided to use this simpler solution)
else:
break
i += 1
if i == len(source): # end of source
break
if source[i] in "\r\n#": # lines with only whitespace and/or comments do not affect the indentation
continue
prev_indentation_level = indentation_levels[-1] if len(indentation_levels) else 0
if expected_an_indented_block:
if not indentation_level > prev_indentation_level:
raise Error('expected an indented block', i)
if indentation_level == prev_indentation_level: # [1:] [-1]:‘If it is equal, nothing happens.’ [:2]
if len(tokens):
tokens.append(Token(linestart-1, linestart, Token.Category.STATEMENT_SEPARATOR))
elif indentation_level > prev_indentation_level: # [2:] [-1]:‘If it is larger, it is pushed on the stack, and one INDENT token is generated.’ [:3]
if not expected_an_indented_block:
raise Error('unexpected indent', i)
expected_an_indented_block = False
indentation_levels.append(indentation_level)
tokens.append(Token(linestart, i, Token.Category.INDENT))
else: # [3:] [-1]:‘If it is smaller, it ~‘must’ be one of the numbers occurring on the stack; all numbers on the stack that are larger are popped off, and for each number popped off a DEDENT token is generated.’ [:4]
while True:
indentation_levels.pop()
tokens.append(Token(i, i, Token.Category.DEDENT))
level = indentation_levels[-1] if len(indentation_levels) else 0
if level == indentation_level:
break
if level < indentation_level:
raise Error('unindent does not match any outer indentation level', i)
ch = source[i]
if ch in " \t":
i += 1 # just skip whitespace characters
elif ch in "\r\n":
if newline_chars is not None:
newline_chars.append(i)
i += 1
if ch == "\r" and source[i:i+1] == "\n":
i += 1
if len(nesting_elements) == 0: # [https://docs.python.org/3/reference/lexical_analysis.html#implicit-line-joining ‘Implicit line joining’]:‘Expressions in parentheses, square brackets or curly braces can be split over more than one physical line without using backslashes.’
begin_of_line = True
elif ch == '#':
comment_start = i
i += 1
while i < len(source) and source[i] not in "\r\n":
i += 1
if comments is not None:
comments.append((comment_start, i))
else:
expected_an_indented_block = ch == ':'
operator_or_delimiter = ''
for op in operators_and_delimiters:
if source[i:i+len(op)] == op:
operator_or_delimiter = op
break
lexem_start = i
i += 1
category : Token.Category
if operator_or_delimiter != '':
i = lexem_start + len(operator_or_delimiter)
category = Token.Category.OPERATOR_OR_DELIMITER
if ch in '([{':
nesting_elements.append((ch, lexem_start))
elif ch in ')]}': # ([{
if len(nesting_elements) == 0 or nesting_elements[-1][0] != {')':'(', ']':'[', '}':'{'}[ch]: # }])
raise Error('there is no corresponding opening parenthesis/bracket/brace for `' + ch + '`', lexem_start)
nesting_elements.pop()
elif ch == ';':
category = Token.Category.STATEMENT_SEPARATOR
elif ch in ('"', "'") or (ch in 'rRbB' and source[i:i+1] in ('"', "'")):
ends : str
if ch in 'rRbB':
ends = source[i:i+3] if source[i:i+3] in ('"""', "'''") else source[i]
else:
i -= 1
ends = source[i:i+3] if source[i:i+3] in ('"""', "'''") else ch
i += len(ends)
while True:
if i == len(source):
raise Error('unclosed string literal', lexem_start)
if source[i] == '\\':
i += 1
if i == len(source):
continue
elif source[i:i+len(ends)] == ends:
i += len(ends)
break
i += 1
category = Token.Category.STRING_LITERAL
elif ch.isalpha() or ch == '_': # this is NAME/IDENTIFIER or KEYWORD
while i < len(source):
ch = source[i]
if not (ch.isalpha() or ch == '_' or '0' <= ch <= '9' or ch == '?'):
break
i += 1
if source[lexem_start:i] in keywords:
if source[lexem_start:i] in ('None', 'False', 'True'):
category = Token.Category.CONSTANT
else:
category = Token.Category.KEYWORD
else:
category = Token.Category.NAME
elif (ch in '-+' and '0' <= source[i:i+1] <= '9') or '0' <= ch <= '9': # this is NUMERIC_LITERAL
if ch in '-+':
                assert(False) # considering the sign as a part of a numeric literal is a bad idea: expressions like `j-3` cease to parse correctly
#sign = ch
ch = source[i+1]
else:
i -= 1
is_hex = ch == '0' and source[i+1:i+2] in ('x', 'X')
is_oct = ch == '0' and source[i+1:i+2] in ('o', 'O')
is_bin = ch == '0' and source[i+1:i+2] in ('b', 'B')
if is_hex or is_oct or is_bin:
i += 2
# if not '0' <= source[i:i+1] <= '9':
# raise Error('expected digit', i)
start = i
i += 1
if is_hex:
while i < len(source) and ('0' <= source[i] <= '9' or 'a' <= source[i] <= 'z' or 'A' <= source[i] <= 'Z' or source[i] == '_'):
i += 1
elif is_oct:
while i < len(source) and ('0' <= source[i] <= '7' or source[i] == '_'):
i += 1
elif is_bin:
while i < len(source) and source[i] in '01_':
i += 1
else:
while i < len(source) and ('0' <= source[i] <= '9' or source[i] in '_.eE'):
if source[i] in 'eE':
if source[i+1:i+2] in '-+':
i += 1
i += 1
if source[i:i+1] in ('j', 'J'):
i += 1
            if '_' in source[start:i] and not '.' in source[start:i]: # float numbers are not checked for now
number = source[start:i].replace('_', '')
number_with_separators = ''
j = len(number)
while j > 3:
number_with_separators = '_' + number[j-3:j] + number_with_separators
j -= 3
number_with_separators = number[0:j] + number_with_separators
if source[start:i] != number_with_separators:
raise Error('digit separator in this number is located in the wrong place (should be: '+ number_with_separators +')', start)
category = Token.Category.NUMERIC_LITERAL
elif ch == '\\':
if source[i] not in "\r\n":
raise Error('only new line character allowed after backslash', i)
if source[i] == "\r":
i += 1
if source[i] == "\n":
i += 1
continue
else:
raise Error('unexpected character ' + ch, lexem_start)
tokens.append(Token(lexem_start, i, category))
if len(nesting_elements):
raise Error('there is no corresponding closing parenthesis/bracket/brace for `' + nesting_elements[-1][0] + '`', nesting_elements[-1][1])
if expected_an_indented_block:
raise Error('expected an indented block', i)
while len(indentation_levels): # [4:] [-1]:‘At the end of the file, a DEDENT token is generated for each number remaining on the stack that is larger than zero.’
tokens.append(Token(i, i, Token.Category.DEDENT))
indentation_levels.pop()
return tokens
| 11l | /11l-2021.3-py3-none-any.whl/python_to_11l/tokenizer.py | tokenizer.py |
try:
from tokenizer import Token
import tokenizer
except ImportError:
from .tokenizer import Token
from . import tokenizer
from typing import List, Tuple, Dict, Callable, Set
from enum import IntEnum
import os, eldf
class Error(Exception):
def __init__(self, message, token):
self.message = message
self.pos = token.start
self.end = token.end
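# A lexical scope: scopes form a chain via `parent`, and `ids` maps identifier names to Scope.Id records
# holding the declared type string and the AST nodes that define the identifier. Used for name lookup during
# translation; serialize_to_dict()/deserialize_from_dict() store and restore the function and type
# definitions of a scope.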
class Scope:
parent : 'Scope'
node : 'ASTNode' = None
class Id:
type : str
type_node : 'ASTTypeDefinition' = None
ast_nodes : List['ASTNodeWithChildren']
last_occurrence : 'SymbolNode' = None
def __init__(self, type, ast_node = None):
assert(type is not None)
self.type = type
self.ast_nodes = []
if ast_node is not None:
self.ast_nodes.append(ast_node)
def init_type_node(self, scope):
if self.type != '':
tid = scope.find(self.type)
if tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition:
self.type_node = tid.ast_nodes[0]
def serialize_to_dict(self):
ast_nodes = []
for ast_node in self.ast_nodes:
if type(ast_node) in (ASTFunctionDefinition, ASTTypeDefinition):
ast_nodes.append(ast_node.serialize_to_dict())
return {'type': self.type, 'ast_nodes': ast_nodes}
def deserialize_from_dict(self, d):
#self.type = d['type']
for ast_node_dict in d['ast_nodes']:
ast_node = ASTFunctionDefinition() if ast_node_dict['node_type'] == 'function' else ASTTypeDefinition()
ast_node.deserialize_from_dict(ast_node_dict)
self.ast_nodes.append(ast_node)
ids : Dict[str, Id]
is_function : bool
is_lambda = False
def __init__(self, func_args):
self.parent = None
if func_args is not None:
self.is_function = True
self.ids = dict(map(lambda x: (x[0], Scope.Id(x[1])), func_args))
else:
self.is_function = False
self.ids = {}
def init_ids_type_node(self):
for id in self.ids.values():
id.init_type_node(self.parent)
def serialize_to_dict(self):
ids_dict = {}
for name, id in self.ids.items():
ids_dict[name] = id.serialize_to_dict()
return ids_dict
def deserialize_from_dict(self, d):
for name, id_dict in d.items():
id = Scope.Id(id_dict['type'])
id.deserialize_from_dict(id_dict)
self.ids[name] = id
def find_in_current_function(self, name):
s = self
while True:
if name in s.ids:
return True
if s.is_function:
return False
s = s.parent
if s is None:
return False
def find_in_current_type_function(self, name):
s = self
while True:
if name in s.ids:
return True
if s.is_function and type(s.node) == ASTFunctionDefinition and type(s.node.parent) == ASTTypeDefinition:
return False
s = s.parent
if s is None:
return False
def find(self, name):
s = self
while True:
id = s.ids.get(name)
if id is not None:
return id
s = s.parent
if s is None:
return None
def find_and_return_scope(self, name):
s = self
if type(s.node) == ASTTypeDefinition:
id = s.ids.get(name)
if id is not None:
return id, s
while True:
if type(s.node) != ASTTypeDefinition:
id = s.ids.get(name)
if id is not None:
return id, s
s = s.parent
if s is None:
return None, None
def add_function(self, name, ast_node):
if name in self.ids: # V &id = .ids.set_if_not_present(name, Id(N)) // [[[or `put_if_absent` as in Java, or `add_if_absent`]]] note that this is an error: `V id = .ids.set_if_not_present(...)`, but you can do this: `V id = copy(.ids.set_if_not_present(...))`
assert(type(self.ids[name].ast_nodes[0]) == ASTFunctionDefinition) # assert(id.ast_nodes.empty | T(id.ast_nodes[0]) == ASTFunctionDefinition)
self.ids[name].ast_nodes.append(ast_node) # id.ast_nodes [+]= ast_node
else:
self.ids[name] = Scope.Id('', ast_node)
def add_name(self, name, ast_node):
if name in self.ids: # I !.ids.set(name, Id(N, ast_node))
if isinstance(ast_node, ASTVariableDeclaration):
t = ast_node.type_token
elif isinstance(ast_node, ASTNodeWithChildren):
t = tokens[ast_node.tokeni + 1]
else:
t = token
raise Error('redefinition of already defined identifier is not allowed', t) # X Error(‘redefinition ...’, ...)
self.ids[name] = Scope.Id('', ast_node)
scope : Scope
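# Descriptor of a symbol for the top-down operator precedence (Pratt) parser: `lbp` is the left binding
# power, `nud` parses the symbol in prefix position and `led` in infix position.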
class SymbolBase:
id : str
lbp : int
nud_bp : int
led_bp : int
nud : Callable[['SymbolNode'], 'SymbolNode']
led : Callable[['SymbolNode', 'SymbolNode'], 'SymbolNode']
def set_nud_bp(self, nud_bp, nud):
self.nud_bp = nud_bp
self.nud = nud
def set_led_bp(self, led_bp, led):
self.led_bp = led_bp
self.led = led
def __init__(self):
def nud(s): raise Error('unknown unary operator', s.token)
self.nud = nud
def led(s, l): raise Error('unknown binary operator', s.token)
self.led = led
int_is_int64 = False
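# A node of the expression tree built by the Pratt parser; to_str() renders the expression as C++ code,
# using scope information to choose between `.`, `->` and `::` access, to insert `std::move()` on the last
# use of a movable variable, and so on.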
class SymbolNode:
token : Token
symbol : SymbolBase = None
children : List['SymbolNode']# = []
parent : 'SymbolNode' = None
ast_parent : 'ASTNode'
function_call : bool = False
tuple : bool = False
is_list : bool = False
is_dict : bool = False
is_type : bool = False
postfix : bool = False
scope : Scope
token_str_override : str
def __init__(self, token, token_str_override = None, symbol = None):
self.token = token
self.children = []
self.scope = scope
self.token_str_override = token_str_override
self.symbol = symbol
def append_child(self, child):
child.parent = self
self.children.append(child)
def leftmost(self):
if self.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL, Token.Category.NAME, Token.Category.CONSTANT):
return self.token.start
if self.symbol.id == '(': # )
if self.function_call:
return self.children[0].token.start
else:
return self.token.start
elif self.symbol.id == '[': # ]
if self.is_list or self.is_dict:
return self.token.start
else:
return self.children[0].token.start
if len(self.children) in (2, 3):
return self.children[0].leftmost()
return self.token.start
def rightmost(self):
if self.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL, Token.Category.NAME, Token.Category.CONSTANT):
return self.token.end
if self.symbol.id in '([': # ])
if len(self.children) == 0:
return self.token.end + 1
return (self.children[-1] or self.children[-2]).rightmost() + 1
return self.children[-1].rightmost()
def left_to_right_token(self):
return Token(self.leftmost(), self.rightmost(), Token.Category.NAME)
def token_str(self):
return self.token.value(source) if not self.token_str_override else self.token_str_override
def to_type_str(self):
if self.symbol.id == '[': # ]
if self.is_list:
assert(len(self.children) == 1)
return 'Array[' + self.children[0].to_type_str() + ']'
elif self.is_dict:
assert(len(self.children) == 1 and self.children[0].symbol.id == '=')
return 'Dict[' + self.children[0].children[0].to_type_str() + ', ' \
+ self.children[0].children[1].to_type_str() + ']'
else:
assert(self.is_type)
r = self.children[0].token.value(source) + '['
for i in range(1, len(self.children)):
r += self.children[i].to_type_str()
if i < len(self.children) - 1:
r += ', '
return r + ']'
elif self.symbol.id == '(': # )
if len(self.children) == 1 and self.children[0].symbol.id == '->':
r = 'Callable['
c0 = self.children[0]
if c0.children[0].symbol.id == '(': # )
for child in c0.children[0].children:
r += child.to_type_str() + ', '
else:
r += c0.children[0].to_type_str() + ', '
return r + c0.children[1].to_type_str() + ']'
else:
assert(self.tuple)
r = '('
for i in range(len(self.children)):
assert(self.children[i].symbol.id != '->')
r += self.children[i].to_type_str()
if i < len(self.children) - 1:
r += ', '
return r + ')'
assert(self.token.category == Token.Category.NAME)
return self.token_str()
def to_str(self):
if self.token.category == Token.Category.NAME:
if self.token_str() in ('L.index', 'Ц.индекс', 'loop.index', 'цикл.индекс'):
parent = self
while parent.parent:
parent = parent.parent
ast_parent = parent.ast_parent
while True:
if type(ast_parent) == ASTLoop:
ast_parent.has_L_index = True
break
ast_parent = ast_parent.parent
return 'Lindex'
if self.token_str() == '(.)':
# if self.parent is not None and self.parent.symbol.id == '=' and self is self.parent.children[1]: # `... = (.)` -> `... = this;`
# return 'this'
return '*this'
tid = self.scope.find(self.token_str())
if tid is not None and ((len(tid.ast_nodes) and isinstance(tid.ast_nodes[0], ASTVariableDeclaration) and tid.ast_nodes[0].is_ptr and not tid.ast_nodes[0].nullable) # `animals [+]= animal` -> `animals.append(std::move(animal));`
or (tid.type_node is not None and (tid.type_node.has_virtual_functions or tid.type_node.has_pointers_to_the_same_type))) \
and (self.parent is None or self.parent.symbol.id not in ('.', ':')):
if tid.last_occurrence is None:
last_reference = None
var_name = self.token_str()
def find_last_reference_to_identifier(node):
def f(e : SymbolNode):
if e.token.category == Token.Category.NAME and e.token_str() == var_name and id(e.scope.find(var_name)) == id(tid):
nonlocal last_reference
last_reference = e
for child in e.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(find_last_reference_to_identifier)
if tid.type_node is not None:
find_last_reference_to_identifier(self.scope.node)
tid.last_occurrence = last_reference
else:
for index in range(len(tid.ast_nodes[0].parent.children)):
if id(tid.ast_nodes[0].parent.children[index]) == id(tid.ast_nodes[0]):
for index in range(index + 1, len(tid.ast_nodes[0].parent.children)):
find_last_reference_to_identifier(tid.ast_nodes[0].parent.children[index])
tid.last_occurrence = last_reference
break
if id(tid.last_occurrence) == id(self):
return 'std::move(' + self.token_str() + ')'
if tid is not None and len(tid.ast_nodes) and isinstance(tid.ast_nodes[0], ASTVariableDeclaration) and tid.ast_nodes[0].is_ptr and tid.ast_nodes[0].nullable:
if self.parent is None or (not (self.parent.symbol.id in ('==', '!=') and self.parent.children[1].token_str() in ('N', 'Н', 'null', 'нуль'))
and not (self.parent.symbol.id == '.')
and not (self.parent.symbol.id == '?')
and not (self.parent.symbol.id == '=' and self is self.parent.children[0])):
return '*' + self.token_str()
return self.token_str().lstrip('@=').replace(':', '::')
if self.token.category == Token.Category.KEYWORD and self.token_str() in ('L.last_iteration', 'Ц.последняя_итерация', 'loop.last_iteration', 'цикл.последняя_итерация'):
parent = self
while parent.parent:
parent = parent.parent
ast_parent = parent.ast_parent
while True:
if type(ast_parent) == ASTLoop:
ast_parent.has_L_last_iteration = True
break
ast_parent = ast_parent.parent
return '(__begin == __end)'
if self.token.category == Token.Category.NUMERIC_LITERAL:
n = self.token_str()
if n[-1] in 'oо':
return '0' + n[:-1] + 'LL'*int_is_int64
if n[-1] in 'bд':
return '0b' + n[:-1] + 'LL'*int_is_int64
if n[-1] == 's':
return n[:-1] + 'f'
if n[4:5] == "'" or n[-3:-2] == "'" or n[-2:-1] == "'":
nn = ''
for c in n:
nn += {'А':'A','Б':'B','С':'C','Д':'D','Е':'E','Ф':'F'}.get(c, c)
if n[-2:-1] == "'":
nn = nn.replace("'", '')
return '0x' + nn
if '.' in n or 'e' in n:
return n
return n + 'LL'*int_is_int64
if self.token.category == Token.Category.STRING_LITERAL:
s = self.token_str()
if s[0] == '"':
return 'u' + s + '_S'
eat_left = 0
while s[eat_left] == "'":
eat_left += 1
eat_right = 0
while s[-1-eat_right] == "'":
eat_right += 1
s = s[1+eat_left*2:-1-eat_right*2]
if '\\' in s or "\n" in s:
delimiter = '' # (
while ')' + delimiter + '"' in s:
delimiter += "'"
return 'uR"' + delimiter + '(' + s + ')' + delimiter + '"_S'
return 'u"' + repr(s)[1:-1].replace('"', R'\"').replace(R"\'", "'") + '"_S'
if self.token.category == Token.Category.CONSTANT:
return {'N': 'nullptr', 'Н': 'nullptr', 'null': 'nullptr', 'нуль': 'nullptr', '0B': 'false', '0В': 'false', '1B': 'true', '1В': 'true'}[self.token_str()]
def is_char(child):
ts = child.token_str()
return child.token.category == Token.Category.STRING_LITERAL and (len(ts) == 3 or (ts[:2] == '"\\' and len(ts) == 4))
def char_or_str(child, is_char):
if is_char:
if child.token_str()[1:-1] == "\\":
return R"u'\\'_C"
return "u'" + child.token_str()[1:-1].replace("'", R"\'") + "'_C"
return child.to_str()
if self.symbol.id == '(': # )
if self.function_call:
func_name = self.children[0].to_str()
f_node = None
if self.children[0].symbol.id == '.':
if len(self.children[0].children) == 1:
s = self.scope
while True:
if s.is_function:
if type(s.node) != ASTFunctionDefinition:
assert(s.is_lambda)
raise Error('probably `@` is missing (before this dot)', self.children[0].token)
if type(s.node.parent) == ASTTypeDefinition:
assert(s.node.parent.scope == s.parent)
fid = s.node.parent.find_id_including_base_types(self.children[0].children[0].to_str())
if fid is None:
raise Error('call of undefined method `' + func_name + '`', self.children[0].children[0].token)
if len(fid.ast_nodes) > 1:
raise Error('methods\' overloading is not supported for now', self.children[0].children[0].token)
f_node = fid.ast_nodes[0]
if type(f_node) == ASTTypeDefinition:
if len(f_node.constructors) == 0:
f_node = ASTFunctionDefinition()
else:
if len(f_node.constructors) > 1:
raise Error('constructors\' overloading is not supported for now (see type `' + f_node.type_name + '`)', self.children[0].left_to_right_token())
f_node = f_node.constructors[0]
break
s = s.parent
assert(s)
elif func_name.endswith('.map') and self.children[2].token.category == Token.Category.NAME and self.children[2].token_str()[0].isupper():
c2 = self.children[2].to_str()
return func_name + '([](const auto &x){return ' + {'Int':'to_int', 'Int64':'to_int64', 'UInt64':'to_uint64', 'UInt32':'to_uint32', 'Float':'to_float'}.get(c2, c2) + '(x);})'
elif func_name.endswith('.split'):
f_node = type_of(self.children[0])
if f_node is None: # assume this is String method
f_node = builtins_scope.find('String').ast_nodes[0].scope.ids.get('split').ast_nodes[0]
elif self.children[0].children[1].token.value(source) == 'union':
func_name = self.children[0].children[0].to_str() + '.set_union'
else:
f_node = type_of(self.children[0])
elif func_name == 'Int':
if self.children[1] is not None and self.children[1].token_str() == "bytes'":
return 'int_from_bytes(' + self.children[2].to_str() + ')'
func_name = 'to_int'
f_node = builtins_scope.find('Int').ast_nodes[0].constructors[0]
elif func_name == 'Int64':
func_name = 'to_int64'
f_node = builtins_scope.find('Int').ast_nodes[0].constructors[0]
elif func_name == 'UInt64':
func_name = 'to_uint64'
f_node = builtins_scope.find('Int').ast_nodes[0].constructors[0]
elif func_name == 'UInt32':
func_name = 'to_uint32'
f_node = builtins_scope.find('Int').ast_nodes[0].constructors[0]
elif func_name == 'Float':
func_name = 'to_float'
elif func_name == 'Char' and self.children[2].token.category == Token.Category.STRING_LITERAL:
assert(self.children[1] is None) # [-TODO: write a good error message-]
if not is_char(self.children[2]):
raise Error('Char can be constructed only from single character string literals', self.children[2].token)
return char_or_str(self.children[2], True)
elif func_name.startswith('Array['): # ]
func_name = 'Array<' + func_name[6:-1] + '>'
elif func_name == 'Array': # `list(range(1,10))` -> `Array(1.<10)` -> `create_array(range_el(1, 10))`
func_name = 'create_array'
elif self.children[0].symbol.id == '[' and self.children[0].is_list: # ] # `[Type]()` -> `Array<Type>()`
func_name = trans_type(self.children[0].to_type_str(), self.children[0].scope, self.children[0].token)
elif func_name == 'Dict':
func_name = 'create_dict'
elif func_name.startswith('DefaultDict['): # ]
func_name = 'DefaultDict<' + ', '.join(trans_type(c.to_type_str(), c.scope, c.token) for c in self.children[0].children[1:]) + '>'
elif func_name in ('Set', 'Deque'):
func_name = 'create_set' if func_name == 'Set' else 'create_deque'
if self.children[2].is_list:
c = self.children[2].children
res = func_name + ('<' + trans_type(c[0].children[0].token_str(), self.scope, c[0].children[0].token)
+ '>' if len(c) > 1 and c[0].function_call and c[0].children[0].token_str()[0].isupper() else '') + '({'
for i in range(len(c)):
res += c[i].to_str()
if i < len(c)-1:
res += ', '
return res + '})'
elif func_name.startswith(('Set[', 'Deque[')): # ]]
c = self.children[0].children[1]
func_name = func_name[:func_name.find('[')] + '<' + trans_type(c.to_type_str(), c.scope, c.token) + '>' # ]
elif func_name == 'sum' and self.children[2].function_call and self.children[2].children[0].symbol.id == '.' and self.children[2].children[0].children[1].token_str() == 'map':
assert(len(self.children) == 3)
return 'sum_map(' + self.children[2].children[0].children[0].to_str() + ', ' + self.children[2].children[2].to_str() + ')'
elif func_name in ('min', 'max') and len(self.children) == 5 and self.children[3] is not None and self.children[3].token_str() == "key'":
return func_name + '_with_key(' + self.children[2].to_str() + ', ' + self.children[4].to_str() + ')'
elif func_name == 'copy':
s = self.scope
while True:
if s.is_function:
if type(s.node.parent) == ASTTypeDefinition:
fid = s.parent.ids.get('copy')
if fid is not None:
func_name = '::copy'
break
s = s.parent
assert(s)
elif func_name == 'move':
func_name = 'std::move'
elif func_name == '*this':
func_name = '(*this)' # function call has higher precedence than dereference in C++, so `*this(...)` is equivalent to `*(this(...))`
elif self.children[0].symbol.id == '[': # ]
pass
elif self.children[0].function_call: # for `enumFromTo(0)(1000)`
pass
else:
if self.children[0].symbol.id == ':':
fid, sc = find_module(self.children[0].children[0].to_str()).scope.find_and_return_scope(self.children[0].children[1].token_str())
else:
fid, sc = self.scope.find_and_return_scope(func_name)
if fid is None:
raise Error('call of undefined function `' + func_name + '`', self.children[0].left_to_right_token())
if len(fid.ast_nodes) > 1:
raise Error('functions\' overloading is not supported for now', self.children[0].left_to_right_token())
if len(fid.ast_nodes) == 0:
if sc.is_function: # for calling of function arguments, e.g. `F amb(comp, ...)...comp(prev, opt)`
f_node = None
else:
raise Error('node of function `' + func_name + '` is not found', self.children[0].left_to_right_token())
else:
f_node = fid.ast_nodes[0]
if type(f_node) == ASTLoop: # for `L(justify) [(s, w) -> ...]...justify(...)`
f_node = None
else:
#assert(type(f_node) in (ASTFunctionDefinition, ASTTypeDefinition) or (type(f_node) in (ASTVariableInitialization, ASTVariableDeclaration) and f_node.function_pointer)
# or (type(f_node) == ASTVariableInitialization and f_node.expression.symbol.id == '->'))
if type(f_node) == ASTTypeDefinition:
if f_node.has_virtual_functions or f_node.has_pointers_to_the_same_type:
func_name = 'std::make_unique<' + func_name + '>'
# elif f_node.has_pointers_to_the_same_type:
# func_name = 'make_SharedPtr<' + func_name + '>'
if len(f_node.constructors) == 0:
f_node = ASTFunctionDefinition()
else:
if len(f_node.constructors) > 1:
raise Error('constructors\' overloading is not supported for now (see type `' + f_node.type_name + '`)', self.children[0].left_to_right_token())
f_node = f_node.constructors[0]
last_function_arg = 0
res = func_name + '('
for i in range(1, len(self.children), 2):
if self.children[i] is None:
cstr = self.children[i+1].to_str()
if f_node is not None and type(f_node) == ASTFunctionDefinition:
if last_function_arg >= len(f_node.function_arguments):
raise Error('too many arguments for function `' + func_name + '`', self.children[0].left_to_right_token())
if f_node.first_named_only_argument is not None and last_function_arg >= f_node.first_named_only_argument:
raise Error('argument `' + f_node.function_arguments[last_function_arg][0] + '` of function `' + func_name + '` is named-only', self.children[i+1].token)
if len(f_node.function_arguments[last_function_arg]) > 3 and '&' in f_node.function_arguments[last_function_arg][3] and not (self.children[i+1].symbol.id == '&' and len(self.children[i+1].children) == 1):
raise Error('argument `' + f_node.function_arguments[last_function_arg][0] + '` of function `' + func_name + '` is in-out, but there is no `&` prefix', self.children[i+1].token)
if f_node.function_arguments[last_function_arg][2] == 'File?':
tid = self.scope.find(self.children[i+1].token_str())
if tid is None or tid.type != 'File?':
res += '&'
elif f_node.function_arguments[last_function_arg][2].endswith('?') and f_node.function_arguments[last_function_arg][2] != 'Int?' and not cstr.startswith(('std::make_unique<', 'make_SharedPtr<')):
res += '&'
res += cstr
last_function_arg += 1
else:
if f_node is None or type(f_node) != ASTFunctionDefinition:
raise Error('function `' + func_name + '` is not found (you can remove named arguments in function call to suppress this error)', self.children[0].left_to_right_token())
argument_name = self.children[i].token_str()[:-1]
while True:
if last_function_arg == len(f_node.function_arguments):
raise Error('argument `' + argument_name + '` is not found in function `' + func_name + '`', self.children[i].token)
if f_node.function_arguments[last_function_arg][0] == argument_name:
last_function_arg += 1
break
if f_node.function_arguments[last_function_arg][1] == '':
raise Error('argument `' + f_node.function_arguments[last_function_arg][0] + '` of function `' + func_name + '` has no default value, please specify its value here', self.children[i].token)
res += f_node.function_arguments[last_function_arg][1] + ', '
last_function_arg += 1
if f_node.function_arguments[last_function_arg-1][2].endswith('?') and not '->' in f_node.function_arguments[last_function_arg-1][2]:
res += '&'
res += self.children[i+1].to_str()
if i < len(self.children)-2:
res += ', '
if f_node is not None:
if type(f_node) == ASTFunctionDefinition:
while last_function_arg < len(f_node.function_arguments):
if f_node.function_arguments[last_function_arg][1] == '':
t = self.children[len(self.children)-1].token
raise Error('missing required argument `'+ f_node.function_arguments[last_function_arg][0] + '`', Token(t.end, t.end, Token.Category.DELIMITER))
last_function_arg += 1
elif f_node.function_pointer:
if last_function_arg != len(f_node.type_args):
raise Error('wrong number of arguments passed to function pointer', Token(self.children[0].token.end, self.children[0].token.end, Token.Category.DELIMITER))
return res + ')'
elif self.tuple:
res = 'make_tuple('
for i in range(len(self.children)):
res += self.children[i].to_str()
if i < len(self.children)-1:
res += ', '
return res + ')'
else:
assert(len(self.children) == 1)
                if self.children[0].symbol.id in ('..', '.<', '.+', '<.', '<.<'): # so that `range_el(0, seq.len())` is generated instead of `(range_el(0, seq.len()))`
return self.children[0].to_str()
return '(' + self.children[0].to_str() + ')'
elif self.symbol.id == '[': # ]
if self.is_list:
if len(self.children) == 0:
raise Error('empty array is not supported', self.left_to_right_token())
type_of_values_is_char = True
for child in self.children:
if not is_char(child):
type_of_values_is_char = False
break
res = 'create_array' + ('<' + trans_type(self.children[0].children[0].token_str(), self.scope, self.children[0].children[0].token)
+ '>' if len(self.children) > 1 and self.children[0].function_call and self.children[0].children[0].token_str()[0].isupper() and self.children[0].children[0].token_str() not in ('Array', 'Set') else '') + '({'
for i in range(len(self.children)):
res += char_or_str(self.children[i], type_of_values_is_char)
if i < len(self.children)-1:
res += ', '
return res + '})'
elif self.is_dict:
char_key = True
char_val = True
for child in self.children:
assert(child.symbol.id == '=')
if not is_char(child.children[0]):
char_key = False
if not is_char(child.children[1]):
char_val = False
res = 'create_dict(dict_of'
for child in self.children:
c0 = child.children[0]
if c0.symbol.id == '.' and len(c0.children) == 2 and c0.children[1].token_str().isupper():
c0str = c0.to_str().replace('.', '::') # replace `python_to_11l:tokenizer:Token.Category.NAME` with `python_to_11l::tokenizer::Token::Category::NAME`
else:
c0str = char_or_str(c0, char_key)
res += '(' + c0str + ', ' + char_or_str(child.children[1], char_val) + ')'
return res + ')'
elif self.children[1].token.category == Token.Category.NUMERIC_LITERAL:
                return '_get<' + self.children[1].to_str() + '>(' + self.children[0].to_str() + ')' # to support tuples (e.g. `(1, 2)[0]` -> `_get<0>(make_tuple(1, 2))`)
else:
c1 = self.children[1].to_str()
if c1.startswith('(len)'):
return self.children[0].to_str() + '.at_plus_len(' + c1[len('(len)'):] + ')'
return self.children[0].to_str() + '[' + c1 + ']'
elif self.symbol.id in ('S', 'В', 'switch', 'выбрать'):
char_val = True
for i in range(1, len(self.children), 2):
if not is_char(self.children[i+1]):
char_val = False
res = '[&](const auto &a){return ' # `[&]` is for `cc = {'а':'A','б':'B','с':'C','д':'D','е':'E','ф':'F'}.get(c.lower(), c)` -> `[&](const auto &a){return a == u'а'_C ? u"A"_S : ... : c;}(c.lower())`
was_break = False
for i in range(1, len(self.children), 2):
if self.children[i].token.value(source) in ('E', 'И', 'else', 'иначе'):
res += char_or_str(self.children[i+1], char_val)
was_break = True
break
res += ('a == ' + (char_or_str(self.children[i], is_char(self.children[i]))[:-2] if self.children[i].token.category == Token.Category.STRING_LITERAL else self.children[i].to_str()) if self.children[i].symbol.id not in ('..', '.<', '.+', '<.', '<.<')
else 'in(a, ' + self.children[i].to_str() + ')') + ' ? ' + char_or_str(self.children[i+1], char_val) + ' : '
# L.was_no_break
# res ‘’= ‘throw KeyError(a)’
return res + ('throw KeyError(a)' if not was_break else '') + ';}(' + self.children[0].to_str() + ')'
if len(self.children) == 1:
#return '(' + self.symbol.id + self.children[0].to_str() + ')'
if self.postfix:
return self.children[0].to_str() + self.symbol.id
elif self.symbol.id == ':':
c0 = self.children[0].to_str()
if c0 in ('stdin', 'stdout', 'stderr'):
return '_' + c0
if importing_module:
return os.path.basename(file_name)[:-4] + '::' + c0
return '::' + c0
elif self.symbol.id == '.':
c0 = self.children[0].to_str()
sn = self
while True:
if sn.symbol.id == '.' and len(sn.children) == 3:
return 'T.' + c0 + '()'*(c0 in ('len', 'last', 'empty')) # T means *‘t’emporary [variable], and it can be safely used because `T` is a keyletter
if sn.parent is None:
n = sn.ast_parent
while n is not None:
if type(n) == ASTWith:
return 'T.' + c0
n = n.parent
break
sn = sn.parent
if self.scope.find_in_current_function(c0):
return 'this->' + c0
else:
return c0
elif self.symbol.id == '..':
c0 = self.children[0].to_str()
if c0.startswith('(len)'):
return 'range_elen_i(' + c0[len('(len)'):] + ')'
else:
return 'range_ei(' + c0 + ')'
elif self.symbol.id == '&':
assert(self.parent.function_call)
return self.children[0].to_str()
else:
return {'(-)':'~'}.get(self.symbol.id, self.symbol.id) + self.children[0].to_str()
elif len(self.children) == 2:
#return '(' + self.children[0].to_str() + ' ' + self.symbol.id + ' ' + self.children[1].to_str() + ')'
def char_if_len_1(child):
return char_or_str(child, is_char(child))
if self.symbol.id == '.':
cts0 = self.children[0].token_str()
c1 = self.children[1].to_str()
if cts0 == '@':
if self.scope.find_in_current_type_function(c1):
return 'this->' + c1
else:
return c1
if cts0 == '.' and len(self.children[0].children) == 1: # `.left.tree_indent()` -> `left->tree_indent()`
c00 = self.children[0].children[0].token_str()
id_ = self.scope.find(c00)
if id_ is None and type(self.scope.node) == ASTFunctionDefinition and type(self.scope.node.parent) == ASTTypeDefinition:
id_ = self.scope.node.parent.find_id_including_base_types(c00)
if id_ is not None and len(id_.ast_nodes) and type(id_.ast_nodes[0]) in (ASTVariableInitialization, ASTVariableDeclaration):
if id_.ast_nodes[0].is_reference:
return c00 + '->' + c1
tid = self.scope.find(id_.ast_nodes[0].type.rstrip('?'))
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_pointers_to_the_same_type:
return c00 + '->' + c1
if cts0 == ':' and len(self.children[0].children) == 1: # `:token_node.symbol` -> `::token_node->symbol`
id_ = global_scope.find(self.children[0].children[0].token_str())
if id_ is not None and len(id_.ast_nodes) and type(id_.ast_nodes[0]) in (ASTVariableInitialization, ASTVariableDeclaration):
tid = self.scope.find(id_.ast_nodes[0].type)#.rstrip('?')
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_pointers_to_the_same_type:
return '::' + self.children[0].children[0].token_str() + '->' + c1
if cts0 == '.' and len(self.children[0].children) == 2: # // for `ASTNode token_node; token_node.symbol.id = sid` -> `... token_node->symbol->id = sid`
t_node = type_of(self.children[0]) # \\ and `ASTNode token_node; ... :token_node.symbol.id = sid` -> `... ::token_node->symbol->id = sid`
if t_node is not None and type(t_node) in (ASTVariableDeclaration, ASTVariableInitialization) and (t_node.is_reference or t_node.is_ptr): # ( # t_node.is_shared_ptr):
return self.children[0].to_str() + '->' + c1
if cts0 == '(': # ) # `parse(expr_str).eval()` -> `parse(expr_str)->eval()`
fid, sc = self.scope.find_and_return_scope(self.children[0].children[0].token_str())
if fid is not None and len(fid.ast_nodes) == 1:
f_node = fid.ast_nodes[0]
if type(f_node) == ASTFunctionDefinition and f_node.function_return_type != '':
frtid = sc.find(f_node.function_return_type)
if frtid is not None and len(frtid.ast_nodes) == 1 and type(frtid.ast_nodes[0]) == ASTTypeDefinition and frtid.ast_nodes[0].has_pointers_to_the_same_type:
return self.children[0].to_str() + '->' + c1
if cts0 in ('Float', 'Float32') and c1 == 'infinity':
return 'std::numeric_limits<' + cpp_type_from_11l[cts0] + '>::infinity()'
id_, s = self.scope.find_and_return_scope(cts0.lstrip('@='))
if id_ is not None:
if id_.type != '' and id_.type.endswith('?'):
return cts0.lstrip('@=') + '->' + c1
if len(id_.ast_nodes) and type(id_.ast_nodes[0]) == ASTLoop and id_.ast_nodes[0].is_loop_variable_a_ptr and cts0 == id_.ast_nodes[0].loop_variable:
return cts0 + '->' + c1
if len(id_.ast_nodes) and type(id_.ast_nodes[0]) == ASTVariableInitialization and (id_.ast_nodes[0].is_ptr): # ( # or id_.ast_nodes[0].is_shared_ptr):
return self.children[0].to_str() + '->' + c1 + '()'*(c1 in ('len', 'last', 'empty')) # `to_str()` is needed for such case: `animal.say(); animals [+]= animal; animal.say()` -> `animal->say(); animals.append(animal); std::move(animal)->say();`
if len(id_.ast_nodes) and type(id_.ast_nodes[0]) in (ASTVariableInitialization, ASTVariableDeclaration): # `Node tree = ...; tree.tree_indent()` -> `... tree->tree_indent()` # (
tid = self.scope.find(id_.ast_nodes[0].type)#.rstrip('?'))
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_pointers_to_the_same_type:
return cts0 + '->' + c1
if id_.type != '' and s.is_function:
tid = s.find(id_.type)
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_pointers_to_the_same_type:
return cts0 + '->' + c1
if c1.isupper():
c0 = self.children[0].to_str()
#assert(c0[0].isupper())
return c0.replace('.', '::') + '::' + c1 # replace `Token.Category.STATEMENT_SEPARATOR` with `Token::Category::STATEMENT_SEPARATOR`
                return char_if_len_1(self.children[0]) + '.' + c1 + '()'*(c1 in ('len', 'last', 'empty', 'real', 'imag') and not (self.parent is not None and self.parent.function_call and self is self.parent.children[0])) # char_if_len_1 is needed here because `u"0"_S.code` (obtained from #(11l)‘‘0’.code’) is illegal [correct: `u'0'_C.code`]
elif self.symbol.id == ':':
c0 = self.children[0].to_str()
c0 = {'time':'timens', # 'time': a symbol with this name already exists and therefore this name cannot be used as a namespace name
'random':'randomns'}.get(c0, c0) # GCC: .../11l-lang/_11l_to_cpp/11l_hpp/random.hpp:1:11: error: ‘namespace random { }’ redeclared as different kind of symbol
c1 = self.children[1].to_str()
return c0 + '::' + (c1 if c1 != '' else '_')
elif self.symbol.id == '->':
captured_variables = set()
def gather_captured_variables(sn):
if sn.token.category == Token.Category.NAME:
if sn.token_str().startswith('@'):
by_ref = True # sn.parent.children[0] is sn and ((sn.parent.symbol.id[-1] == '=' and sn.parent.symbol.id not in ('==', '!='))
# or (sn.parent.symbol.id == '.' and sn.parent.children[1].token_str() == 'append'))
t = sn.token_str()[1:]
if t.startswith('='):
t = t[1:]
by_ref = False
captured_variables.add('this' if t == '' else '&'*by_ref + t)
elif sn.token.value(source) == '(.)':
captured_variables.add('this')
else:
for child in sn.children:
if child is not None and child.symbol.id != '->':
gather_captured_variables(child)
gather_captured_variables(self.children[1])
return '[' + ', '.join(sorted(captured_variables)) + '](' + ', '.join(map(lambda c: 'const ' + ('auto &' if c.symbol.id != '=' else 'decltype(' + c.children[1].to_str() + ') &') + c.to_str(),
self.children[0].children if self.children[0].symbol.id == '(' else [self.children[0]])) + '){return ' + self.children[1].to_str() + ';}' # )
elif self.symbol.id in ('..', '.<', '.+', '<.', '<.<'):
s = {'..':'ee', '.<':'el', '.+':'ep', '<.':'le', '<.<':'ll'}[self.symbol.id]
c0 = char_if_len_1(self.children[0])
c1 = char_if_len_1(self.children[1])
b = s[0]
if c0.startswith('(len)'):
b += 'len'
c0 = c0[len('(len)'):]
e = s[1]
if c1.startswith('(len)'):
e += 'len'
c1 = c1[len('(len)'):]
return 'range_' + b + '_'*(len(b) > 1 or len(e) > 1) + e + '(' + c0 + ', ' + c1 + ')'
elif self.symbol.id in ('C', 'С', 'in'):
return 'in(' + char_if_len_1(self.children[0]) + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id in ('!C', '!С', '!in'):
return '!in(' + char_if_len_1(self.children[0]) + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id in ('I/', 'Ц/'):
return 'idiv(' + self.children[0].to_str() + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id in ('I/=', 'Ц/='):
return self.children[0].to_str() + ' = idiv(' + self.children[0].to_str() + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id in ('==', '!=') and self.children[1].token.category == Token.Category.STRING_LITERAL:
return self.children[0].to_str() + ' ' + self.symbol.id + ' ' + char_if_len_1(self.children[1])[:-2]
elif self.symbol.id in ('==', '!=', '=') and self.children[1].token.category == Token.Category.NAME and self.children[1].token_str().isupper(): # `token.category == NAME` -> `token.category == decltype(token.category)::NAME` and `category = NAME` -> `category = decltype(category)::NAME`
return self.children[0].to_str() + ' ' + self.symbol.id + ' decltype(' + self.children[0].to_str() + ')::' + self.children[1].token_str()
elif self.symbol.id in ('==', '!=') and self.children[0].symbol.id == '&' and len(self.children[0].children) == 1 and self.children[1].symbol.id == '&' and len(self.children[1].children) == 1: # `&a == &b` -> `&a == &b`
id_, s = self.scope.find_and_return_scope(self.children[0].children[0].token_str())
if id_ is not None and len(id_.ast_nodes) and type(id_.ast_nodes[0]) == ASTLoop and id_.ast_nodes[0].is_loop_variable_a_ptr and self.children[0].children[0].token_str() == id_.ast_nodes[0].loop_variable: # `L(obj)...&obj != &objChoque` -> `...&*obj != objChoque`
return '&*' + self.children[0].children[0].token_str() + ' ' + self.symbol.id + ' ' + self.children[1].children[0].token_str()
return '&' + self.children[0].children[0].token_str() + ' ' + self.symbol.id + ' &' + self.children[1].children[0].token_str()
elif self.symbol.id == '==' and self.children[0].symbol.id == '==': # replace `a == b == c` with `equal(a, b, c)`
def f(child):
if child.symbol.id == '==':
return f(child.children[0]) + ', ' + child.children[1].to_str()
return child.to_str()
return 'equal(' + f(self) + ')'
elif self.symbol.id == '=' and self.children[0].symbol.id == '[': # ] # replace `a[k] = v` with `a.set(k, v)`
if self.children[0].children[1].token.category == Token.Category.NUMERIC_LITERAL: # replace `a[0] = v` with `_set<0>(a, v)` to support tuples
return '_set<' + self.children[0].children[1].to_str() + '>(' + self.children[0].children[0].to_str() + ', ' + char_if_len_1(self.children[1]) + ')'
else:
c01 = self.children[0].children[1].to_str()
if c01.startswith('(len)'):
return self.children[0].children[0].to_str() + '.set_plus_len(' + c01[len('(len)'):] + ', ' + char_if_len_1(self.children[1]) + ')'
else:
return self.children[0].children[0].to_str() + '.set(' + c01 + ', ' + char_if_len_1(self.children[1]) + ')'
elif self.symbol.id == '[+]=': # replace `a [+]= v` with `a.append(v)`
return self.children[0].to_str() + '.append(' + self.children[1].to_str() + ')'
elif self.symbol.id == '=' and self.children[0].tuple:
#assert(False)
return 'assign_from_tuple(' + ', '.join(c.to_str() for c in self.children[0].children) + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id == '?':
return '[&]{auto R = ' + self.children[0].to_str() + '; return R != nullptr ? *R : ' + self.children[1].to_str() + ';}()'
elif self.symbol.id == '^':
c1 = self.children[1].to_str()
if c1 == '2':
return 'square(' + self.children[0].to_str() + ')'
if c1 == '3':
return 'cube(' + self.children[0].to_str() + ')'
return 'pow(' + self.children[0].to_str() + ', ' + c1 + ')'
elif self.symbol.id == '%':
return 'mod(' + self.children[0].to_str() + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id == '[&]' and self.parent is not None and self.parent.symbol.id in ('==', '!='): # there is a difference in precedence of operators `&` and `==`/`!=` in Python/11l and C++
return '(' + self.children[0].to_str() + ' & ' + self.children[1].to_str() + ')'
elif self.symbol.id == '(concat)' and self.parent is not None and self.parent.symbol.id in ('+', '-', '==', '!='): # `print(‘id = ’id+1)` -> `print((‘id = ’id)+1)`, `a & b != u"1x"` -> `(a & b) != u"1x"` [[[`'-'` is needed because `print(‘id = ’id-1)` also should generate a compile-time error]]]
return '(' + self.children[0].to_str() + ' & ' + self.children[1].to_str() + ')'
else:
def is_integer(t):
return t.category == Token.Category.NUMERIC_LITERAL and ('.' not in t.value(source)) and ('e' not in t.value(source))
if self.symbol.id == '/' and (is_integer(self.children[0].token) or is_integer(self.children[1].token)):
if is_integer(self.children[0].token):
return self.children[0].token_str() + '.0 / ' + self.children[1].to_str()
else:
return self.children[0].to_str() + ' / ' + self.children[1].token_str() + '.0'
if self.symbol.id == '=' and self.children[0].symbol.id == '.' and len(self.children[0].children) == 2: # `:token_node.symbol = :symbol_table[...]` -> `::token_node->symbol = &::symbol_table[...]`
t_node = type_of(self.children[0])
if t_node is not None and type(t_node) in (ASTVariableDeclaration, ASTVariableInitialization) and t_node.is_reference:
c1s = self.children[1].to_str()
return self.children[0].to_str() + ' = ' + '&'*(c1s != 'nullptr') + c1s
return self.children[0].to_str() + ' ' + {'&':'&&', '|':'||', '[&]':'&', '[&]=':'&=', '[|]':'|', '[|]=':'|=', '(concat)':'&', '[+]':'+', '‘’=':'&=', '(+)':'^', '(+)=':'^='}.get(self.symbol.id, self.symbol.id) + ' ' + self.children[1].to_str()
elif len(self.children) == 3:
if self.children[1].token.category == Token.Category.SCOPE_BEGIN:
assert(self.symbol.id == '.')
if self.children[2].symbol.id == '?': # not necessary, just to beautify generated C++
return '[&](auto &&T){auto X = ' + self.children[2].children[0].to_str() + '; return X != nullptr ? *X : ' + self.children[2].children[1].to_str() + ';}(' + self.children[0].to_str() + ')'
                return '[&](auto &&T){return ' + self.children[2].to_str() + ';}(' + self.children[0].to_str() + ')' # why I prefer `auto &&T` to `auto&& T`: the ampersand relates to the variable, not to the type; for example, in `int &i, j`, `j` is not a reference but just an integer
assert(self.symbol.id in ('I', 'Е', 'if', 'если'))
return self.children[0].to_str() + ' ? ' + self.children[1].to_str() + ' : ' + self.children[2].to_str()
return ''
symbol_table : Dict[str, SymbolBase] = {}
allowed_keywords_in_expressions : List[str] = []
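# Registers (or updates) a symbol in the parser's symbol table with the given left binding power; purely
# alphabetic ids are additionally collected into the list of keywords allowed inside expressions.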
def symbol(id, bp = 0):
try:
s = symbol_table[id]
except KeyError:
s = SymbolBase()
s.id = id
s.lbp = bp
symbol_table[id] = s
if id[0].isalpha() and not id in ('I/', 'Ц/', 'I/=', 'Ц/=', 'C', 'С', 'in'): # this is keyword-in-expression
assert(id.isalpha() or id in ('L.last_iteration', 'Ц.последняя_итерация', 'loop.last_iteration', 'цикл.последняя_итерация'))
allowed_keywords_in_expressions.append(id)
else:
s.lbp = max(bp, s.lbp)
return s
class ASTNode:
parent : 'ASTNode' = None
access_specifier_public = 1
def walk_expressions(self, f):
pass
def walk_children(self, f):
pass
class ASTNodeWithChildren(ASTNode):
    # children : List['ASTNode'] = [] # OMFG! This actually means a static variable (common to all objects of type ASTNode), not a default value of the member variable; that was unexpected to me as it contradicts C++11 behavior
children : List['ASTNode']
tokeni : int
#scope : Scope
def __init__(self):
self.children = []
self.tokeni = tokeni
def walk_children(self, f):
for child in self.children:
f(child)
def children_to_str(self, indent, t, place_opening_curly_bracket_on_its_own_line = True, add_at_beginning = ''):
r = ''
if self.tokeni > 0:
ti = self.tokeni - 1
while ti > 0 and tokens[ti].category in (Token.Category.SCOPE_END, Token.Category.STATEMENT_SEPARATOR):
ti -= 1
r = (min(source[tokens[ti].end:tokens[self.tokeni].start].count("\n"), 2) - 1) * "\n"
r += ' ' * (indent*4) + t + (("\n" + ' ' * (indent*4) + "{\n") if place_opening_curly_bracket_on_its_own_line else " {\n") # }
r += add_at_beginning
for c in self.children:
r += c.to_str(indent+1)
return r + ' ' * (indent*4) + "}\n"
def children_to_str_detect_single_stmt(self, indent, r, check_for_if = False):
def has_if(node):
while True:
if not isinstance(node, ASTNodeWithChildren) or len(node.children) != 1:
return False
if type(node) == ASTIf:
return True
node = node.children[0]
if (len(self.children) != 1
or (check_for_if and (type(self.children[0]) == ASTIf or has_if(self.children[0]))) # for correct handling of dangling-else
or type(self.children[0]) == ASTLoopRemoveCurrentElementAndContinue): # `L.remove_current_element_and_continue` is translated into 2 statements
return self.children_to_str(indent, r, False)
assert(len(self.children) == 1)
c0str = self.children[0].to_str(indent+1)
if c0str.startswith(' ' * ((indent+1)*4) + "was_break = true;\n"):
return self.children_to_str(indent, r, False)
return ' ' * (indent*4) + r + "\n" + c0str
class ASTNodeWithExpression(ASTNode):
expression : SymbolNode
def set_expression(self, expression):
self.expression = expression
self.expression.ast_parent = self
def walk_expressions(self, f):
f(self.expression)
class ASTProgram(ASTNodeWithChildren):
beginning_extra = ''
def to_str(self):
r = self.beginning_extra
prev_global_statement = True
code_block_id = 1
for c in self.children:
global_statement = type(c) in (ASTVariableDeclaration, ASTVariableInitialization, ASTTupleInitialization, ASTFunctionDefinition, ASTTypeDefinition, ASTTypeAlias, ASTTypeEnum, ASTMain)
if global_statement != prev_global_statement:
prev_global_statement = global_statement
if not global_statement:
sname = 'CodeBlock' + str(code_block_id)
r += "\n"*(c is not self.children[0]) + 'struct ' + sname + "\n{\n " + sname + "()\n {\n"
else:
r += " }\n} code_block_" + str(code_block_id) + ";\n"
code_block_id += 1
r += c.to_str(2*(not global_statement))
if prev_global_statement != True: # {{
r += " }\n} code_block_" + str(code_block_id) + ";\n"
return r
class ASTExpression(ASTNodeWithExpression):
def to_str(self, indent):
if self.expression.symbol.id == '=' and type(self.parent) == ASTTypeDefinition:
return ' ' * (indent*4) + 'decltype(' + self.expression.children[1].to_str() + ') ' + self.expression.to_str() + ";\n"
return ' ' * (indent*4) + self.expression.to_str() + ";\n"
cpp_type_from_11l = {'auto&':'auto&', 'V':'auto', 'П':'auto', 'var':'auto', 'перем':'auto',
'Int':'int', 'Int64':'Int64', 'UInt64':'UInt64', 'UInt32':'uint32_t', 'Float':'double', 'Float32':'float', 'Complex':'Complex', 'String':'String', 'Bool':'bool', 'Byte':'Byte',
'N':'void', 'Н':'void', 'null':'void', 'нуль':'void',
'Array':'Array', 'Tuple':'Tuple', 'Dict':'Dict', 'DefaultDict':'DefaultDict', 'Set':'Set', 'Deque':'Deque'}
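# Translate an 11l type string into its C++ counterpart: plain names go through the table above,
# `(T1, T2)` becomes Tuple<...> (or ivec2/vec3/dvec4-style names for small homogeneous tuples),
# `[K = V]` becomes Dict<...>, `T[U]` becomes T<U> (Array<U> when T is omitted), and user types with
# virtual functions or pointers to the same type are wrapped in std::unique_ptr<>.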
def trans_type(ty, scope, type_token, ast_type_node = None, is_reference = False):
if ty[-1] == '?':
ty = ty[:-1]
t = cpp_type_from_11l.get(ty)
if t is not None:
if t == 'int' and int_is_int64:
return 'Int64'
return t
else:
if '.' in ty: # for `Token.Category category`
return ty.replace('.', '::') # [-TODO: generalize-]
if ty.startswith('('):
assert(ty[-1] == ')')
i = 1
s = i
nesting_level = 0
types = ''
while True:
if ty[i] in ('(', '['):
nesting_level += 1
elif ty[i] in (')', ']'):
if nesting_level == 0:
assert(i == len(ty)-1)
types += trans_type(ty[s:i], scope, type_token, ast_type_node)
break
nesting_level -= 1
elif ty[i] == ',':
if nesting_level == 0: # ignore inner commas
types += trans_type(ty[s:i], scope, type_token, ast_type_node) + ', '
i += 1
while ty[i] == ' ':
i += 1
s = i
continue
i += 1
tuple_types = types.split(', ')
if tuple_types[0] in ('int', 'float', 'double') and tuple_types.count(tuple_types[0]) == len(tuple_types) and len(tuple_types) in range(2, 5):
return {'int':'i', 'float':'', 'double':'d'}[tuple_types[0]] + 'vec' + str(len(tuple_types))
return 'Tuple<' + types + '>'
p = ty.find('[') # ]
if p != -1:
if '=' in ty:
assert(p == 0 and ty[0] == '[' and ty[-1] == ']')
tylist = ty[1:-1].split('=')
assert(len(tylist) == 2)
return 'Dict<' + trans_type(tylist[0], scope, type_token, ast_type_node) + ', ' \
+ trans_type(tylist[1], scope, type_token, ast_type_node) + '>'
if ty.startswith('Callable['): # ]
tylist = ty[p+1:-1].split(', ')
def trans_ty(ty):
tt = trans_type(ty, scope, type_token, ast_type_node)
return tt if tt.startswith('std::unique_ptr<') else 'const ' + tt + ('&'*(ty not in ('Int', 'Float')))
return 'std::function<' + trans_type(tylist[-1], scope, type_token, ast_type_node) + '(' + ', '.join(trans_ty(t) for t in tylist[:-1]) + ')>'
return (trans_type(ty[:p], scope, type_token, ast_type_node) if p != 0 else 'Array') + '<' + trans_type(ty[p+1:-1], scope, type_token, ast_type_node) + '>'
p = ty.find(',')
if p != -1:
return trans_type(ty[:p], scope, type_token, ast_type_node) + ', ' + trans_type(ty[p+1:].lstrip(' '), scope, type_token, ast_type_node)
id = scope.find(ty)
if id is None or len(id.ast_nodes) == 0:
raise Error('type `' + ty + '` is not defined', type_token)
if type(id.ast_nodes[0]) in (ASTTypeAlias, ASTTypeEnum):
return ty
if type(id.ast_nodes[0]) != ASTTypeDefinition:
raise Error('`' + ty + '`: expected a type name', type_token)
if id.ast_nodes[0].has_virtual_functions or id.ast_nodes[0].has_pointers_to_the_same_type:
if ast_type_node is not None and tokens[id.ast_nodes[0].tokeni].start > type_token.start: # if type `ty` was declared after this variable, insert a forward declaration of type `ty`
ast_type_node.forward_declared_types.add(ty)
return ty if is_reference else 'std::unique_ptr<' + ty + '>'# if id.ast_nodes[0].has_virtual_functions else 'SharedPtr<' + ty + '>'
return ty
class ASTVariableDeclaration(ASTNode):
vars : List[str]
type : str
type_args : List[str]
is_const = False
function_pointer = False
is_reference = False
scope : Scope
type_token : Token
is_ptr = False
nullable = False
#is_shared_ptr = False
def __init__(self):
self.scope = scope
def trans_type(self, ty, is_reference = False):
if ty.endswith('&'):
assert(trans_type(ty[:-1], self.scope, self.type_token, self.parent if type(self.parent) == ASTTypeDefinition else None, is_reference) == 'auto')
return 'auto&'
return trans_type(ty, self.scope, self.type_token, self.parent if type(self.parent) == ASTTypeDefinition else None, is_reference)
def to_str(self, indent):
if self.function_pointer:
def trans_type(ty):
tt = self.trans_type(ty)
return tt if tt.startswith('std::unique_ptr<') else 'const ' + tt + ('&'*(ty not in ('Int', 'Float')))
return ' ' * (indent*4) + 'std::function<' + self.trans_type(self.type) + '(' + ', '.join(trans_type(ty) for ty in self.type_args) + ')> ' + ', '.join(self.vars) + ";\n"
return ' ' * (indent*4) + 'const '*self.is_const + self.trans_type(self.type, self.is_reference) + ('<' + ', '.join(self.trans_type(ty) for ty in self.type_args) + '>' if len(self.type_args) else '') + ' ' + '*'*self.is_reference + ', '.join(self.vars) + ";\n"
class ASTVariableInitialization(ASTVariableDeclaration, ASTNodeWithExpression):
def to_str(self, indent):
return super().to_str(indent)[:-2] + ' = ' + self.expression.to_str() + ";\n"
class ASTTupleInitialization(ASTNodeWithExpression):
dest_vars : List[str]
is_const = False
bind_array = False
def __init__(self):
self.dest_vars = []
def to_str(self, indent):
e = self.expression.to_str()
if self.bind_array:
e = 'bind_array<' + str(len(self.dest_vars)) + '>(' + e + ')'
return ' ' * (indent*4) + 'const '*self.is_const + 'auto [' + ', '.join(self.dest_vars) + '] = ' + e + ";\n"
class ASTTupleAssignment(ASTNodeWithExpression):
dest_vars : List[Tuple[str, bool]]
def __init__(self):
self.dest_vars = []
def to_str(self, indent):
r = ''
for i, dv in enumerate(self.dest_vars):
if dv[1]:
r += ' ' * (indent*4) + 'TUPLE_ELEMENT_T(' + str(i) + ', ' + self.expression.to_str() + ') ' + dv[0] + ";\n"
return r + ' ' * (indent*4) + 'assign_from_tuple(' + ', '.join(dv[0] for dv in self.dest_vars) + ', ' + self.expression.to_str() + ')' + ";\n"
class ASTWith(ASTNodeWithChildren, ASTNodeWithExpression):
def to_str(self, indent):
return self.children_to_str(indent, '[&](auto &&T)', False)[:-1] + '(' + self.expression.to_str() + ");\n"
class ASTFunctionDefinition(ASTNodeWithChildren):
function_name : str = ''
function_return_type : str = ''
is_const = False
function_arguments : List[Tuple[str, str, str, str]]# = [] # (arg_name, default_value, type_, qualifier)
first_named_only_argument = None
last_non_default_argument : int
class VirtualCategory(IntEnum):
NO = 0
NEW = 1
OVERRIDE = 2
ABSTRACT = 3
ASSIGN = 4
FINAL = 5
virtual_category = VirtualCategory.NO
scope : Scope
member_initializer_list = ''
def __init__(self, function_arguments = None, function_return_type = ''):
super().__init__()
self.function_arguments = function_arguments or []
self.function_return_type = function_return_type
self.scope = scope
def serialize_to_dict(self, node_type = True):
r = {}
if node_type: # 'node_type' is inserted in dict before 'function_arguments' as this looks more logical in .11l_global_scope
r['node_type'] = 'function'
r['function_arguments'] = ['; '.join(arg) for arg in self.function_arguments]
return r
def deserialize_from_dict(self, d):
self.function_arguments = [arg.split('; ') for arg in d['function_arguments']]
def to_str(self, indent):
is_const = False
if type(self.parent) == ASTTypeDefinition:
if self.function_name == '': # this is constructor
s = self.parent.type_name
elif self.function_name == '(destructor)':
s = '~' + self.parent.type_name
elif self.function_name == 'String':
s = 'operator String'
is_const = True
else:
s = ('auto' if self.function_return_type == '' else trans_type(self.function_return_type, self.scope, tokens[self.tokeni])) + ' ' + \
{'()':'operator()', '[&]':'operator&', '<':'operator<', '==':'operator==', '+':'operator+', '-':'operator-', '*':'operator*'}.get(self.function_name, self.function_name)
if self.virtual_category != self.VirtualCategory.NO:
arguments = []
for index, arg in enumerate(self.function_arguments):
if arg[2] == '': # if there is no type specified
raise Error('type should be specified for argument `' + arg[0] + '` [for virtual functions all arguments should have types]', tokens[self.tokeni])
else:
arguments.append(
('' if '=' in arg[3] or '&' in arg[3] else 'const ')
+ trans_type(arg[2].rstrip('?'), self.scope, tokens[self.tokeni]) + '* '*0 + ' '
+ ('&' if '&' in arg[3] or '=' not in arg[3] else '')
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
s = 'virtual ' + s + '(' + ', '.join(arguments) + ')' + ('', ' override', ' = 0', ' override', ' final')[self.virtual_category - 1]
return ' ' * (indent*4) + s + ";\n" if self.virtual_category == self.VirtualCategory.ABSTRACT else self.children_to_str(indent, s)
elif type(self.parent) != ASTProgram: # local functions [i.e. functions inside functions] are represented as C++ lambdas
captured_variables = set()
def gather_captured_variables(node):
def f(sn : SymbolNode):
if sn.token.category == Token.Category.NAME:
if sn.token.value(source)[0] == '@':
by_ref = True # sn.parent and sn.parent.children[0] is sn and sn.parent.symbol.id[-1] == '=' and sn.parent.symbol.id not in ('==', '!=')
t = sn.token.value(source)[1:]
if t.startswith('='):
t = t[1:]
by_ref = False
captured_variables.add('this' if t == '' else '&'*by_ref + t)
elif sn.token.value(source) == '(.)':
captured_variables.add('this')
else:
for child in sn.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(gather_captured_variables)
gather_captured_variables(self)
arguments = []
for arg in self.function_arguments:
if arg[2] == '': # if there is no type specified
arguments.append(('auto ' if '=' in arg[3] else 'const auto &') + arg[0] if arg[1] == '' else
('' if '=' in arg[3] else 'const ') + 'decltype(' + arg[1] + ') ' + arg[0] + ' = ' + arg[1])
else:
tid = self.scope.parent.find(arg[2].rstrip('?'))
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and (tid.ast_nodes[0].has_virtual_functions or tid.ast_nodes[0].has_pointers_to_the_same_type):
arguments.append('std::unique_ptr<' + arg[2].rstrip('?') + '> ' + arg[0] + ('' if arg[1] == '' else ' = ' + arg[1]))
else:
arguments.append(('' if '=' in arg[3] else 'const ') + trans_type(arg[2], self.scope, tokens[self.tokeni]) + ' ' + ('&'*((arg[2] not in ('Int', 'Float')) and ('=' not in arg[3]))) + arg[0] + ('' if arg[1] == '' else ' = ' + arg[1]))
return self.children_to_str(indent, ('auto' if self.function_return_type == '' else 'std::function<' + trans_type(self.function_return_type, self.scope, tokens[self.tokeni]) + '(' + ', '.join(trans_type(arg[2], self.scope, tokens[self.tokeni]) for arg in self.function_arguments) + ')>') + ' ' + self.function_name
+ ' = [' + ', '.join(sorted(filter(lambda v: ('&' + v) not in captured_variables, captured_variables))) + ']('
+ ', '.join(arguments) + ')')[:-1] + ";\n"
else:
s = ('auto' if self.function_return_type == '' else trans_type(self.function_return_type, self.scope, tokens[self.tokeni])) + ' ' + self.function_name
if len(self.function_arguments) == 0:
return self.children_to_str(indent, s + '()' + ' const'*(self.is_const or is_const))
templates = []
arguments = []
for index, arg in enumerate(self.function_arguments):
if arg[2] == '': # if there is no type specified
templates.append('typename T' + str(index + 1) + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = decltype(' + arg[1] + ')'))
arguments.append(('T' + str(index + 1) + ' ' if '=' in arg[3] else 'const '*(arg[3] != '&') + 'T' + str(index + 1) + ' &')
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
else:
tid = self.scope.parent.find(arg[2].rstrip('?'))
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and (tid.ast_nodes[0].has_virtual_functions or tid.ast_nodes[0].has_pointers_to_the_same_type):
arguments.append('std::unique_ptr<' + arg[2].rstrip('?') + '> '
#+ ('' if '=' in arg[3] else 'const ')
+ arg[3] # add `&` if needed
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
elif arg[2].endswith('?'):
arguments.append(trans_type(arg[2].rstrip('?'), self.scope, tokens[self.tokeni]) + '* '
+ ('' if '=' in arg[3] else 'const ')
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
else:
ty = trans_type(arg[2], self.scope, tokens[self.tokeni])
arguments.append(
(('' if arg[3] == '=' else 'const ') + ty + ' ' + '&'*(arg[2] not in ('Int', 'Float') and arg[3] != '=') if arg[3] != '&' else ty + ' &')
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
if self.member_initializer_list == '' and self.function_name == '' and type(self.parent) == ASTTypeDefinition:
i = 0
while i < len(self.children):
c = self.children[i]
if isinstance(c, ASTExpression) and c.expression.symbol.id == '=' \
and c.expression.children[0].symbol.id == '.' \
and len(c.expression.children[0].children) == 1 \
and c.expression.children[0].children[0].token.category == Token.Category.NAME \
and c.expression.children[1].token.category == Token.Category.NAME \
and c.expression.children[1].token_str() in (arg[0] for arg in self.function_arguments):
if self.scope.parent.ids.get(c.expression.children[0].children[0].token_str()) is None: # this member variable is defined in the base type/class
i += 1
continue
if self.member_initializer_list == '':
self.member_initializer_list = " :\n"
else:
self.member_initializer_list += ",\n"
ec1 = c.expression.children[1].token_str()
for index, arg in enumerate(self.function_arguments):
if arg[0] == ec1:
if arguments[index].startswith('std::unique_ptr<'):
ec1 = 'std::move(' + ec1 + ')'
break
self.member_initializer_list += ' ' * ((indent+1)*4) + c.expression.children[0].children[0].token_str() + '(' + ec1 + ')'
self.children.pop(i)
continue
i += 1
r = self.children_to_str(indent, ('template <' + ', '.join(templates) + '> ')*(len(templates) != 0) + s + '(' + ', '.join(arguments) + ')' + ' const'*(self.is_const or self.function_name in tokenizer.sorted_operators) + self.member_initializer_list)
if isinstance(self.parent, ASTTypeDefinition) and self.function_name in ('+', '-', '*', '/') and self.function_name + '=' not in self.parent.scope.ids:
r += ' ' * (indent*4) + 'template <typename Ty> auto &operator' + self.function_name + "=(const Ty &t)\n"
r += ' ' * (indent*4) + "{\n"
r += ' ' * ((indent+1)*4) + '*this = *this ' + self.function_name + " t;\n"
r += ' ' * ((indent+1)*4) + "return *this;\n"
r += ' ' * (indent*4) + "}\n"
return r
class ASTIf(ASTNodeWithChildren, ASTNodeWithExpression):
else_or_elif : ASTNode = None
likely = 0
def walk_children(self, f):
super().walk_children(f)
if self.else_or_elif is not None:
self.else_or_elif.walk_children(f)
def to_str(self, indent):
if self.likely == 0:
s = 'if (' + self.expression.to_str() + ')'
elif self.likely == 1:
s = 'if (likely(' + self.expression.to_str() + '))'
else:
assert(self.likely == -1)
s = 'if (unlikely(' + self.expression.to_str() + '))'
return self.children_to_str_detect_single_stmt(indent, s, check_for_if = True) + (self.else_or_elif.to_str(indent) if self.else_or_elif is not None else '')
class ASTElseIf(ASTNodeWithChildren, ASTNodeWithExpression):
else_or_elif : ASTNode = None
def walk_children(self, f):
super().walk_children(f)
if self.else_or_elif is not None:
self.else_or_elif.walk_children(f)
def to_str(self, indent):
return self.children_to_str_detect_single_stmt(indent, 'else if (' + self.expression.to_str() + ')', check_for_if = True) + (self.else_or_elif.to_str(indent) if self.else_or_elif is not None else '')
class ASTElse(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str_detect_single_stmt(indent, 'else')
class ASTSwitch(ASTNodeWithExpression):
class Case(ASTNodeWithChildren, ASTNodeWithExpression):
pass
cases : List[Case]
has_string_case = False
def __init__(self):
self.cases = []
def walk_children(self, f):
for case in self.cases:
for child in case.children:
f(child)
def to_str(self, indent):
def is_char(child):
ts = child.token_str()
return child.token.category == Token.Category.STRING_LITERAL and (len(ts) == 3 or (ts[:2] == '"\\' and len(ts) == 4))
def char_if_len_1(child):
if is_char(child):
if child.token_str()[1:-1] == "\\":
return R"u'\\'"
return "u'" + child.token_str()[1:-1].replace("'", R"\'") + "'"
return child.to_str()
if self.has_string_case: # C++ does not support strings in case labels so insert if-elif-else chain in this case
r = ''
for case in self.cases:
if case.expression.token_str() in ('E', 'И', 'else', 'иначе'):
assert(id(case) == id(self.cases[-1]))
r += case.children_to_str_detect_single_stmt(indent, 'else')
else:
r += case.children_to_str_detect_single_stmt(indent, ('if' if id(case) == id(self.cases[0]) else 'else if') + ' (' + self.expression.to_str() + ' == ' + char_if_len_1(case.expression) + ')', check_for_if = True)
return r
r = ' ' * (indent*4) + 'switch (' + self.expression.to_str() + ")\n" + ' ' * (indent*4) + "{\n"
for case in self.cases:
r += ' ' * (indent*4) + ('default' if case.expression.token_str() in ('E', 'И', 'else', 'иначе') else 'case ' + char_if_len_1(case.expression)) + ":\n"
for c in case.children:
r += c.to_str(indent+1)
r += ' ' * ((indent+1)*4) + "break;\n"
return r + ' ' * (indent*4) + "}\n"
class ASTLoopWasNoBreak(ASTNodeWithChildren):
def to_str(self, indent):
return ''
class ASTLoop(ASTNodeWithChildren, ASTNodeWithExpression):
loop_variable : str = None
is_loop_variable_a_reference = False
copy_loop_variable = False
break_label_needed = -1
has_continue = False
has_L_index = False
has_L_last_iteration = False
has_L_remove_current_element_and_continue = False
is_loop_variable_a_ptr = False
was_no_break_node : ASTLoopWasNoBreak = None
def has_L_was_no_break(self):
return self.was_no_break_node is not None
def to_str(self, indent):
r = ''
if self.has_L_was_no_break():
r = ' ' * (indent*4) + "{bool was_break = false;\n"
loop_auto = False
if self.expression is not None and self.expression.token.category == Token.Category.NUMERIC_LITERAL:
lv = self.loop_variable if self.loop_variable is not None else 'Lindex'
tr = 'for (int ' + lv + ' = 0; ' + lv + ' < ' + self.expression.to_str() + '; ' + lv + '++)'
else:
if self.loop_variable is not None or (self.expression is not None and self.expression.symbol.id in ('..', '.<')):
if self.loop_variable is not None and ',' in self.loop_variable:
tr = 'for (auto ' + '&&'*(not self.copy_loop_variable) + '[' + self.loop_variable + '] : ' + self.expression.to_str() + ')'
else:
loop_auto = True
tr = 'for (auto ' + ('&' if self.is_loop_variable_a_reference else '&&'*(self.is_loop_variable_a_ptr or (not self.copy_loop_variable and not (
self.expression.symbol.id in ('..', '.<') or (self.expression.symbol.id == '(' and self.expression.children[0].symbol.id == '.' and self.expression.children[0].children[0].symbol.id == '(' and self.expression.children[0].children[0].children[0].symbol.id in ('..', '.<'))))) # ))
) + (self.loop_variable if self.loop_variable is not None else '__unused') + ' : ' + self.expression.to_str() + ')'
else:
if self.expression is not None and self.expression.token.category == Token.Category.NAME:
l = tokens[self.tokeni].value(source)
raise Error('please write `' + l + ' ' + self.expression.token_str() + ' != 0` or `'
+ l + ' 1..' + self.expression.token_str() + '` instead of `'
+ l + ' ' + self.expression.token_str() + '`', Token(tokens[self.tokeni].start, self.expression.token.end, Token.Category.NAME))
tr = 'while (' + (self.expression.to_str() if self.expression is not None else 'true') + ')'
rr = self.children_to_str_detect_single_stmt(indent, tr)
if self.has_L_remove_current_element_and_continue:
if not loop_auto:
raise Error('this kind of loop does not support `L.remove_current_element_and_continue`', tokens[self.tokeni])
if self.has_L_last_iteration:
raise Error('`L.last_iteration` can not be used with `L.remove_current_element_and_continue`', tokens[self.tokeni])
if self.has_L_index:
raise Error('`L.index` can not be used with `L.remove_current_element_and_continue`', tokens[self.tokeni]) # {
rr = ' ' * (indent*4) + '{auto &&__range = ' + self.expression.to_str() + ";\n" \
+ ' ' * (indent*4) + "auto __end = __range.end();\n" \
+ ' ' * (indent*4) + "auto __dst = __range.begin();\n" \
+ self.children_to_str(indent, 'for (auto __src = __range.begin(); __src != __end;)', False,
add_at_beginning = ' ' * ((indent+1)*4) + 'auto &&'+ self.loop_variable + " = *__src;\n")[:-indent*4-2] \
+ ' ' * ((indent+1)*4) + "if (__dst != __src)\n" \
+ ' ' * ((indent+1)*4) + " *__dst = std::move(*__src);\n" \
+ ' ' * ((indent+1)*4) + "++__dst;\n" \
+ ' ' * ((indent+1)*4) + "++__src;\n" \
+ ' ' * (indent*4) + "}\n" \
+ ' ' * (indent*4) + "__range.erase(__dst, __end);}\n"
if self.has_L_last_iteration:
if not loop_auto:
raise Error('this kind of loop does not support `L.last_iteration`', tokens[self.tokeni])
rr = ' ' * (indent*4) + '{auto &&__range = ' + self.expression.to_str() \
+ ";\n" + self.children_to_str(indent, 'for (auto __begin = __range.begin(), __end = __range.end(); __begin != __end;)', False,
add_at_beginning = ' ' * ((indent+1)*4) + 'auto &&'+ self.loop_variable + " = *__begin; ++__begin;\n")
elif self.has_L_index and not (self.loop_variable is None and self.expression is not None and self.expression.token.category == Token.Category.NUMERIC_LITERAL):
rr = self.children_to_str(indent, tr, False)
if self.has_L_index and not (self.loop_variable is None and self.expression is not None and self.expression.token.category == Token.Category.NUMERIC_LITERAL):
if self.has_continue:
brace_pos = int(rr[0] == "\n") + indent*4 + len(tr) + 1
rr = rr[:brace_pos+1] + rr[brace_pos:] # {
r += ' ' * (indent*4) + "{int Lindex = 0;\n" + rr[:-indent*4-2] + "} on_continue:\n"*self.has_continue + ' ' * ((indent+1)*4) + "Lindex++;\n" + ' ' * (indent*4) + "}}\n"
else:
r += rr
if self.has_L_last_iteration:
r = r[:-1] + "}\n"
if self.has_L_was_no_break(): # {
r += self.was_no_break_node.children_to_str_detect_single_stmt(indent, 'if (!was_break)') + ' ' * (indent*4) + "}\n"
if self.break_label_needed != -1:
r += ' ' * (indent*4) + 'break_' + ('' if self.break_label_needed == 0 else str(self.break_label_needed)) + ":;\n"
return r
def walk_expressions(self, f):
if self.expression is not None: f(self.expression)
class ASTContinue(ASTNode):
token : Token
def to_str(self, indent):
n = self.parent
while True:
if type(n) == ASTLoop:
n.has_continue = True
break
n = n.parent
if n is None:
raise Error('loop corresponding to this statement is not found', self.token)
return ' ' * (indent*4) + 'goto on_'*n.has_L_index + "continue;\n"
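# Counter used to generate unique `break_N` goto labels for breaking out of outer loops
# (and out of a loop from inside a `switch`).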
break_label_index = -1
class ASTLoopBreak(ASTNode):
loop_variable : str = ''
loop_level = 0
token : Token
def to_str(self, indent):
r = ''
n = self.parent
loop_level = 0
while True:
if type(n) == ASTLoop:
if (loop_level == self.loop_level) if self.loop_variable == '' else (self.loop_variable == n.loop_variable):
if n.has_L_was_no_break():
r = ' ' * (indent*4) + "was_break = true;\n"
if loop_level > 0:
if n.break_label_needed == -1:
global break_label_index
break_label_index += 1
n.break_label_needed = break_label_index
return r + ' ' * (indent*4) + 'goto break_' + ('' if n.break_label_needed == 0 else str(n.break_label_needed)) + ";\n"
break
loop_level += 1
n = n.parent
if n is None:
raise Error('loop corresponding to this `' + '^'*self.loop_level + 'L' + ('(' + self.loop_variable + ')')*(self.loop_variable != '') + '.break` statement is not found', self.token)
n = self.parent
while True:
if type(n) == ASTSwitch:
n = n.parent
while True:
if type(n) == ASTLoop:
if n.break_label_needed == -1:
break_label_index += 1
n.break_label_needed = break_label_index
return r + ' ' * (indent*4) + 'goto break_' + ('' if n.break_label_needed == 0 else str(n.break_label_needed)) + ";\n"
n = n.parent
if type(n) == ASTLoop:
break
n = n.parent
return r + ' ' * (indent*4) + "break;\n"
class ASTLoopRemoveCurrentElementAndContinue(ASTNode):
def to_str(self, indent):
n = self.parent
while True:
if type(n) == ASTLoop:
n.has_L_remove_current_element_and_continue = True
break
n = n.parent
return ' ' * (indent*4) + "++__src;\n" \
+ ' ' * (indent*4) + "continue;\n"
class ASTReturn(ASTNodeWithExpression):
def to_str(self, indent):
expr_str = ''
if self.expression is not None:
if self.expression.is_list and len(self.expression.children) == 0: # `R []`
n = self.parent
while type(n) != ASTFunctionDefinition:
n = n.parent
if n.function_return_type == '':
raise Error('Function returning an empty array should have return type specified', self.expression.left_to_right_token())
if not n.function_return_type.startswith('Array['): # ]
raise Error('Function returning an empty array should have an Array based return type', self.expression.left_to_right_token())
expr_str = trans_type(n.function_return_type, self.expression.scope, self.expression.token) + '()'
elif self.expression.function_call and self.expression.children[0].token_str() == 'Dict' and len(self.expression.children) == 1: # `R Dict()`
n = self.parent
while type(n) != ASTFunctionDefinition:
n = n.parent
if n.function_return_type == '':
raise Error('Function returning an empty dict should have return type specified', self.expression.left_to_right_token())
if not n.function_return_type.startswith('Dict['): # ]
raise Error('Function returning an empty dict should have a Dict based return type', self.expression.left_to_right_token())
expr_str = trans_type(n.function_return_type, self.expression.scope, self.expression.token) + '()'
else:
expr_str = self.expression.to_str()
return ' ' * (indent*4) + 'return' + (' ' + expr_str if expr_str != '' else '') + ";\n"
def walk_expressions(self, f):
if self.expression is not None: f(self.expression)
class ASTException(ASTNodeWithExpression):
def to_str(self, indent):
return ' ' * (indent*4) + 'throw ' + self.expression.to_str() + ";\n"
class ASTExceptionTry(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str(indent, 'try')
class ASTExceptionCatch(ASTNodeWithChildren):
exception_object_type : str
exception_object_name : str = ''
def to_str(self, indent):
if self.exception_object_type == '':
return self.children_to_str(indent, 'catch (...)')
return self.children_to_str(indent, 'catch (const ' + self.exception_object_type + '&' + (' ' + self.exception_object_name if self.exception_object_name != '' else '') + ')')
class ASTTypeDefinition(ASTNodeWithChildren):
base_types : List[str]
type_name : str
constructors : List[ASTFunctionDefinition]
has_virtual_functions = False
has_pointers_to_the_same_type = False
forward_declared_types : Set[str]
serializable = False
def __init__(self, constructors = None):
super().__init__()
self.base_types = []
self.constructors = constructors or []
self.scope = scope # needed for built-in types, e.g. `File(full_fname, ‘w’, encoding' ‘utf-8-sig’).write(...)`
self.forward_declared_types = set()
def serialize_to_dict(self):
return {'node_type': 'type', 'constructors': [c.serialize_to_dict(False) for c in self.constructors]}
def deserialize_from_dict(self, d):
for c_dict in d['constructors']:
c = ASTFunctionDefinition()
c.deserialize_from_dict(c_dict)
self.constructors.append(c)
def find_id_including_base_types(self, id):
tid = self.scope.ids.get(id)
if tid is None:
for base_type_name in self.base_types:
tid = self.scope.parent.find(base_type_name)
assert(tid is not None and len(tid.ast_nodes) == 1)
assert(isinstance(tid.ast_nodes[0], ASTTypeDefinition))
tid = tid.ast_nodes[0].find_id_including_base_types(id)
if tid is not None:
break
return tid
def set_serializable_to_children(self):
self.serializable = True
for c in self.children:
if type(c) == ASTTypeDefinition:
c.set_serializable_to_children()
def to_str(self, indent):
r = ''
if self.tokeni > 0:
ti = self.tokeni - 1
while ti > 0 and tokens[ti].category in (Token.Category.SCOPE_END, Token.Category.STATEMENT_SEPARATOR):
ti -= 1
r = (source[tokens[ti].end:tokens[self.tokeni].start].count("\n")-1) * "\n"
base_types = []
# if self.has_pointers_to_the_same_type:
# base_types += ['SharedObject']
base_types += self.base_types
r += ' ' * (indent*4) \
+ 'class ' + self.type_name + (' : ' + ', '.join(map(lambda c: 'public ' + c, base_types)) if len(base_types) else '') \
+ "\n" + ' ' * (indent*4) + "{\n"
access_specifier_public = -1
for c in self.children:
if c.access_specifier_public != access_specifier_public:
r += ' ' * (indent*4) + ['private', 'public'][c.access_specifier_public] + ":\n"
access_specifier_public = c.access_specifier_public
r += c.to_str(indent+1)
if len(self.forward_declared_types):
r = "\n".join(' ' * (indent*4) + 'class ' + t + ';' for t in self.forward_declared_types) + "\n\n" + r
if self.serializable:
r += "\n" + ' ' * ((indent+1)*4) + "void serialize(ldf::Serializer &s)\n" + ' ' * ((indent+1)*4) + "{\n"
for c in self.children:
if type(c) in (ASTVariableDeclaration, ASTVariableInitialization):
for var in c.vars:
r += ' ' * ((indent+2)*4) + 's(u"' + var + '", ' + (var if var != 's' else 'this->s') + ");\n"
r += ' ' * ((indent+1)*4) + "}\n"
return r + ' ' * (indent*4) + "};\n"
class ASTTypeAlias(ASTNode):
name : str
defining_type : str # this term is taken from C++ Standard (‘using identifier attribute-specifier-seqopt = defining-type-id ;’)
template_params : List[str]
def __init__(self):
self.template_params = []
def to_str(self, indent):
r = ' ' * (indent*4)
if len(self.template_params):
r += 'template <' + ', '.join(self.template_params) + '> '
return r + 'using ' + self.name + ' = ' + self.defining_type + ";\n"
class ASTTypeEnum(ASTNode):
enum_name : str
enumerators : List[str]
def __init__(self):
super().__init__()
self.enumerators = []
def to_str(self, indent):
r = ' ' * (indent*4) + 'enum class ' + self.enum_name + " {\n"
for i in range(len(self.enumerators)):
r += ' ' * ((indent+1)*4) + self.enumerators[i]
if i < len(self.enumerators) - 1:
r += ','
r += "\n"
return r + ' ' * (indent*4) + "};\n"
class ASTMain(ASTNodeWithChildren):
found_reference_to_argv = False
def to_str(self, indent):
if importing_module:
return ''
if not self.found_reference_to_argv:
return self.children_to_str(indent, 'int main()')
return self.children_to_str(indent, 'int MAIN_WITH_ARGV()', add_at_beginning = ' ' * ((indent+1)*4) + "INIT_ARGV();\n\n")
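# Try to resolve the AST node that declares the member accessed by a `.` expression (the type of its
# left side must already be known); returns None when the type cannot be determined statically.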
def type_of(sn):
assert(sn.symbol.id == '.' and len(sn.children) == 2)
if sn.children[0].symbol.id == '.':
if len(sn.children[0].children) == 1:
return None
left = type_of(sn.children[0])
if left is None: # `Array[Array[Array[String]]] table... table.last.append([...])`
return None
elif sn.children[0].symbol.id == '[': # ]
return None
elif sn.children[0].symbol.id == '(': # )
if not sn.children[0].function_call:
return None
if sn.children[0].children[0].symbol.id == '.':
return None
tid = sn.scope.find(sn.children[0].children[0].token_str())
if tid is None:
return None
if type(tid.ast_nodes[0]) == ASTFunctionDefinition: # `input().split(...)`
if tid.ast_nodes[0].function_return_type == '':
return None
type_name = tid.ast_nodes[0].function_return_type
tid = tid.ast_nodes[0].scope.find(type_name)
else: # `Converter(habr_html, ohd).to_html(instr, outfilef)`
type_name = sn.children[0].children[0].token_str()
assert(tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition)
tid = tid.ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition)):
if type_name == 'auto&':
return None
raise Error('method `' + sn.children[1].token_str() + '` is not found in type `' + type_name + '`', sn.left_to_right_token())
return tid.ast_nodes[0]
elif sn.children[0].symbol.id == ':':
if len(sn.children[0].children) == 2:
return None # [-TODO-]
assert(len(sn.children[0].children) == 1)
tid = global_scope.find(sn.children[0].children[0].token_str())
if tid is None or len(tid.ast_nodes) != 1:
raise Error('`' + sn.children[0].children[0].token_str() + '` is not found in global scope', sn.left_to_right_token()) # this error occurs without this code: ` or (self.token_str()[0].isupper() and self.token_str() != self.token_str().upper())`
left = tid.ast_nodes[0]
elif sn.children[0].token_str() == '@':
s = sn.scope
while True:
if s.is_function:
if s.is_lambda:
assert(s.node is None)
snp = s.parent.node
else:
snp = s.node.parent
if type(snp) == ASTFunctionDefinition:
if type(snp.parent) == ASTTypeDefinition:
fid = snp.parent.find_id_including_base_types(sn.children[1].token_str())
if fid is None:
raise Error('call of undefined method `' + sn.children[1].token_str() + '`', sn.left_to_right_token())
if len(fid.ast_nodes) > 1:
raise Error('methods\' overloading is not supported for now', sn.left_to_right_token())
f_node = fid.ast_nodes[0]
if type(f_node) == ASTFunctionDefinition:
return f_node
break
s = s.parent
assert(s)
return None
elif sn.children[0].token_str().startswith('@'):
return None # [-TODO-]
else:
if sn.children[0].token.category == Token.Category.STRING_LITERAL:
tid = builtins_scope.ids.get('String')
tid = tid.ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTFunctionDefinition):
raise Error('method `' + sn.children[1].token_str() + '` is not found in type `String`', sn.left_to_right_token())
return tid.ast_nodes[0]
tid, s = sn.scope.find_and_return_scope(sn.children[0].token_str())
if tid is None:
raise Error('identifier is not found', sn.children[0].token)
if len(tid.ast_nodes) != 1: # for `F f(active_window, s)... R s.find(‘.’) ? s.len`
if tid.type != '' and s.is_function: # for `F nud(ASTNode self)... self.symbol.nud_bp`
if '[' in tid.type: # ] # for `F decompress(Array[Int] &compressed)`
return None
tid = s.find(tid.type)
assert(tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition)
tid = tid.ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition, ASTExpression)): # `ASTExpression` is needed to fix an error ‘identifier `disInter` is not found in `r`’ in '9.yopyra.py' (when there is no `disInter : float`)
raise Error('identifier `' + sn.children[1].token_str() + '` is not found in `' + sn.children[0].token_str() + '`', sn.children[1].token)
if isinstance(tid.ast_nodes[0], ASTExpression):
return None
return tid.ast_nodes[0]
return None
left = tid.ast_nodes[0]
if type(left) == ASTLoop:
return None
if type(left) in (ASTTypeDefinition, ASTTupleInitialization, ASTTupleAssignment):
return None # [-TODO-]
if type(left) not in (ASTVariableDeclaration, ASTVariableInitialization):
raise Error('left type is `' + str(type(left)) + '`', sn.left_to_right_token())
if left.type in ('V', 'П', 'var', 'перем', 'V?', 'П?', 'var?', 'перем?', 'V&', 'П&', 'var&', 'перем&'): # for `V selection_strings = ... selection_strings.map(...)`
assert(type(left) == ASTVariableInitialization)
if left.expression.function_call and left.expression.children[0].token.category == Token.Category.NAME and left.expression.children[0].token_str()[0].isupper(): # for `V n = Node()`
tid = sn.scope.find(left.expression.children[0].token_str())
assert(tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition)
tid = tid.ast_nodes[0].find_id_including_base_types(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition, ASTExpression)): # `ASTExpression` is needed to fix an error ‘identifier `Vhor` is not found in type `Scene`’ in '9.yopyra.py' (when `Vhor = .look.pVectorial(.upCamara)`, i.e. when there is no `Vhor : Vector`)
raise Error('identifier `' + sn.children[1].token_str() + '` is not found in type `' + left.expression.children[0].token_str() + '`', sn.left_to_right_token()) # error message example: method `remove` is not found in type `Array`
if isinstance(tid.ast_nodes[0], ASTExpression):
return None
return tid.ast_nodes[0]
if ((left.expression.function_call and left.expression.children[0].symbol.id == '.' and len(left.expression.children[0].children) == 2 and left.expression.children[0].children[1].token_str() in ('map', 'filter')) # for `V a = ....map(Int); a.sort(reverse' 1B)`
or left.expression.is_list): # for `V employees = [...]; employees.sort(key' e -> e.name)`
tid = builtins_scope.find('Array').ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition)):
raise Error('member `' + sn.children[1].token_str() + '` is not found in type `Array`', sn.left_to_right_token())
return tid.ast_nodes[0]
return None
# if len(left.type_args): # `Array[String] ending_tags... ending_tags.append(‘</blockquote>’)`
# return None # [-TODO-]
if left.type == 'T':
return None
tid = left.scope.find(left.type.rstrip('?'))
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition):
if left.type.startswith('('): # )
return None
raise Error('type `' + left.type + '` is not found', sn.left_to_right_token())
tid = tid.ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition)):
raise Error('member `' + sn.children[1].token_str() + '` is not found in type `' + left.type.rstrip('?') + '`', sn.left_to_right_token())
return tid.ast_nodes[0]
# List of C++ keywords is taken from here[https://en.cppreference.com/w/cpp/keyword]
cpp_keywords = {'alignas', 'alignof', 'and', 'and_eq', 'asm', 'auto', 'bitand', 'bitor', 'bool', 'break', 'case', 'catch', 'char', 'char8_t', 'char16_t', 'char32_t', 'class', 'compl', 'concept', 'const',
'consteval', 'constexpr', 'constinit', 'const_cast', 'continue', 'co_await', 'co_return', 'co_yield', 'decltype', 'default', 'delete', 'do', 'double', 'dynamic_cast', 'else', 'enum', 'explicit',
'export', 'extern', 'false', 'float', 'for', 'friend', 'goto', 'if', 'inline', 'int', 'long', 'mutable', 'namespace', 'new', 'noexcept', 'not', 'not_eq', 'nullptr', 'operator', 'or', 'or_eq',
'private', 'protected', 'public', 'reflexpr', 'register', 'reinterpret_cast', 'requires', 'return', 'short', 'signed', 'sizeof', 'static', 'static_assert', 'static_cast', 'struct', 'switch',
'template', 'this', 'thread_local', 'throw', 'true', 'try', 'typedef', 'typeid', 'typename', 'union', 'unsigned', 'using', 'virtual', 'void', 'volatile', 'wchar_t', 'while', 'xor', 'xor_eq',
'j0', 'j1', 'jn', 'y0', 'y1', 'yn', 'pascal', 'main'}
def next_token(): # why ‘next_token’: >[https://youtu.be/Nlqv6NtBXcA?t=1203]:‘we'll have an advance method which will fetch the next token’
global token, tokeni, tokensn
if token is None and tokeni != -1:
raise Error('no more tokens', Token(len(source), len(source), Token.Category.STATEMENT_SEPARATOR))
tokeni += 1
if tokeni == len(tokens):
token = None
tokensn = None
else:
token = tokens[tokeni]
tokensn = SymbolNode(token)
if token.category != Token.Category.KEYWORD or token.value(source) in allowed_keywords_in_expressions:
key : str
if token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL):
key = '(literal)'
elif token.category == Token.Category.NAME:
key = '(name)'
if token.value(source)[0] == '@':
if token.value(source)[1:2] == '=':
if token.value(source)[2:] in cpp_keywords:
tokensn.token_str_override = '@=_' + token.value(source)[2:] + '_'
elif token.value(source)[1:] in cpp_keywords:
tokensn.token_str_override = '@_' + token.value(source)[1:] + '_'
elif token.value(source) in cpp_keywords:
tokensn.token_str_override = '_' + token.value(source) + '_'
elif token.category == Token.Category.CONSTANT:
key = '(constant)'
elif token.category == Token.Category.STRING_CONCATENATOR:
key = '(concat)'
elif token.category == Token.Category.SCOPE_BEGIN:
key = '{' # }
elif token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.SCOPE_END):
key = ';'
else:
key = token.value(source)
tokensn.symbol = symbol_table[key]
def advance(value):
if token.value(source) != value:
raise Error('expected `' + value + '`', token)
next_token()
def peek_token(how_much = 1):
return tokens[tokeni+how_much] if tokeni+how_much < len(tokens) else Token()
# This implementation is based on [http://svn.effbot.org/public/stuff/sandbox/topdown/tdop-4.py]
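# Standard top-down operator precedence loop: take the current token's null denotation (nud),
# then keep folding left denotations (led) while the following token binds tighter than `rbp`.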
def expression(rbp = 0):
def check_tokensn():
if tokensn is None:
raise Error('unexpected end of source', Token(len(source), len(source), Token.Category.STATEMENT_SEPARATOR))
if tokensn.symbol is None:
raise Error('no symbol corresponding to token `' + token.value(source) + '` (belonging to ' + str(token.category) +') found while parsing expression', token)
check_tokensn()
t = tokensn
next_token()
check_tokensn()
left = t.symbol.nud(t)
while rbp < tokensn.symbol.lbp:
t = tokensn
next_token()
left = t.symbol.led(t, left)
check_tokensn()
return left
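# Helpers that register operators in the symbol table with their binding powers:
# `infix` is left-associative, `infix_r` right-associative (right operand parsed with bp - 1),
# `prefix`/`postfix` define the unary forms.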
def infix(id, bp):
def led(self, left):
self.append_child(left)
self.append_child(expression(self.symbol.led_bp))
return self
symbol(id, bp).set_led_bp(bp, led)
def infix_r(id, bp):
def led(self, left):
self.append_child(left)
self.append_child(expression(self.symbol.led_bp - 1))
return self
symbol(id, bp).set_led_bp(bp, led)
def postfix(id, bp):
def led(self, left):
self.postfix = True
self.append_child(left)
return self
symbol(id, bp).led = led
def prefix(id, bp):
def nud(self):
self.append_child(expression(self.symbol.nud_bp))
return self
symbol(id).set_nud_bp(bp, nud)
infix('[+]', 20); #infix('->', 15) # for `(0 .< h).map(_ -> [0] * @w [+] [1])`
infix('?', 25) # based on C# operator precedence ([http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-334.pdf])
infix('|', 30); infix('&', 40)
infix('==', 50); infix('!=', 50); infix('C', 50); infix('С', 50); infix('in', 50); infix('!C', 50); infix('!С', 50); infix('!in', 50)
#infix('(concat)', 52) # `instr[prevci - 1 .< prevci]‘’prevc C ("/\\", "\\/")` = `(instr[prevci - 1 .< prevci]‘’prevc) C ("/\\", "\\/")`
infix('..', 55); infix('.<', 55); infix('.+', 55); infix('<.', 55); infix('<.<', 55) # ch C ‘0’..‘9’ = ch C (‘0’..‘9’)
#postfix('..', 55)
infix('<', 60); infix('<=', 60)
infix('>', 60); infix('>=', 60)
infix('[|]', 70); infix('(+)', 80); infix('[&]', 90)
infix('<<', 100); infix('>>', 100)
infix('+', 110); infix('-', 110)
infix('(concat)', 115) # `print(‘id = ’id+1)` = `print((‘id = ’id)+1)`, `str(c) + str(1-c)*charstack[0]` -> `String(c)‘’String(1 - c) * charstack[0]` = `String(c)‘’(String(1 - c) * charstack[0])`
infix('*', 120); infix('/', 120); infix('I/', 120); infix('Ц/', 120)
infix('%', 120)
prefix('-', 130); prefix('+', 130); prefix('!', 130); prefix('(-)', 130); prefix('--', 130); prefix('++', 130); prefix('&', 130)
infix_r('^', 140)
symbol('.', 150); symbol(':', 150); symbol('[', 150); symbol('(', 150); symbol(')'); symbol(']'); postfix('--', 150); postfix('++', 150)
prefix('.', 150); prefix(':', 150)
infix_r('=', 10); infix_r('+=', 10); infix_r('-=', 10); infix_r('*=', 10); infix_r('/=', 10); infix_r('I/=', 10); infix_r('Ц/=', 10); infix_r('%=', 10); infix_r('>>=', 10); infix_r('<<=', 10); infix_r('^=', 10)
infix_r('[+]=', 10); infix_r('[&]=', 10); infix_r('[|]=', 10); infix_r('(+)=', 10); infix_r('‘’=', 10)
symbol('(name)').nud = lambda self: self
symbol('(literal)').nud = lambda self: self
symbol('(constant)').nud = lambda self: self
symbol('(.)').nud = lambda self: self
symbol('L.last_iteration').nud = lambda self: self
symbol('Ц.последняя_итерация').nud = lambda self: self
symbol('loop.last_iteration').nud = lambda self: self
symbol('цикл.последняя_итерация').nud = lambda self: self
symbol(';')
symbol(',')
symbol("',")
def led(self, left):
self.append_child(left)
global scope
prev_scope = scope
scope = Scope([])
scope.parent = prev_scope
scope.is_lambda = True
tokensn.scope = scope
for c in left.children if left.symbol.id == '(' else [left]: # )
if not c.token_str()[0].isupper(): # for `((ASTNode, ASTNode) -> ASTNode) led` and `[String = ((Float, Float) -> Float)] b` (fix error 'redefinition of already defined identifier is not allowed')
scope.add_name(c.token_str(), None)
self.append_child(expression(self.symbol.led_bp))
scope = prev_scope
return self
symbol('->', 15).set_led_bp(15, led)
def led(self, left):
self.append_child(left) # [(
if token.value(source) not in (']', ')') and token.category != Token.Category.SCOPE_BEGIN:
self.append_child(expression(self.symbol.led_bp))
return self
symbol('..', 55).set_led_bp(55, led)
def led(self, left):
if token.category == Token.Category.SCOPE_BEGIN:
self.append_child(left)
self.append_child(tokensn)
if token.value(source) == '{': # } # if the current token is a `{`, this is a "with"-expression, not a "with"-statement
next_token()
self.append_child(expression())
advance('}')
return self
if token.category != Token.Category.NAME:
raise Error('expected an attribute name', token)
self.append_child(left)
self.append_child(tokensn)
next_token()
return self
symbol('.').led = led
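# Module import support: an imported 11l module is transpiled to a .hpp file once, with its global
# scope cached in a .11l_global_scope file, and reused on subsequent imports (see the `:` led below).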
class Module:
scope : Scope
def __init__(self, scope):
self.scope = scope
modules : Dict[str, Module] = {}
builtin_modules : Dict[str, Module] = {}
def find_module(name):
if name in modules:
return modules[name]
return builtin_modules[name]
def led(self, left):
if token.category != Token.Category.NAME and token.value(source) != '(' and token.category != Token.Category.STRING_LITERAL: # )
raise Error('expected an identifier name or string literal', token)
# Process module [transpile it if necessary and load it]
global scope
module_name = left.to_str()
if module_name not in modules and module_name not in builtin_modules:
module_file_name = os.path.join(os.path.dirname(file_name), module_name.replace('::', '/')).replace('\\', '/') # `os.path.join()` is needed for case when `os.path.dirname(file_name)` is empty string, `replace('\\', '/')` is needed for passing 'tests/parser/errors.txt'
try:
modulefstat = os.stat(module_file_name + '.11l')
except FileNotFoundError:
raise Error('can not import module `' + module_name + "`: file '" + module_file_name + ".11l' is not found", left.token)
hpp_file_mtime = 0
if os.path.isfile(module_file_name + '.hpp'):
hpp_file_mtime = os.stat(module_file_name + '.hpp').st_mtime
if hpp_file_mtime == 0 \
or modulefstat.st_mtime > hpp_file_mtime \
or os.stat(__file__).st_mtime > hpp_file_mtime \
or os.stat(os.path.dirname(__file__) + '/tokenizer.py').st_mtime > hpp_file_mtime \
or not os.path.isfile(module_file_name + '.11l_global_scope'):
module_source = open(module_file_name + '.11l', encoding = 'utf-8-sig').read()
prev_scope = scope
s = parse_and_to_str(tokenizer.tokenize(module_source), module_source, module_file_name + '.11l', True)
open(module_file_name + '.hpp', 'w', encoding = 'utf-8-sig', newline = "\n").write(s) # utf-8-sig is for MSVC (fix of error C2015: too many characters in constant [`u'‘'`]) # ’
modules[module_name] = Module(scope)
assert(scope.is_function == False) # serializing `is_function` member variable is not necessary because it is always equal to `False`
open(module_file_name + '.11l_global_scope', 'w', encoding = 'utf-8', newline = "\n").write(eldf.to_eldf(scope.serialize_to_dict()))
scope = prev_scope
else:
module_scope = Scope(None)
module_scope.deserialize_from_dict(eldf.parse(open(module_file_name + '.11l_global_scope', encoding = 'utf-8-sig').read()))
modules[module_name] = Module(module_scope)
self.append_child(left)
if token.category == Token.Category.STRING_LITERAL: # for `re:‘pattern’`
self.append_child(SymbolNode(Token(token.start, token.start, Token.Category.NAME), symbol = symbol_table['(name)']))
sn = SymbolNode(Token(token.start, token.start, Token.Category.DELIMITER))
sn.symbol = symbol_table['('] # )
sn.function_call = True
sn.append_child(self)
sn.children.append(None)
sn.append_child(tokensn)
next_token()
return sn
elif token.value(source) != '(': # )
self.append_child(tokensn)
next_token()
else: # for `os:(...)` and `time:(...)`
self.append_child(SymbolNode(Token(token.start, token.start, Token.Category.NAME), symbol = symbol_table['(name)']))
return self
symbol(':').led = led
def led(self, left):
self.function_call = True
self.append_child(left) # (
if token.value(source) != ')':
while True:
if token.category != Token.Category.STRING_LITERAL and token.value(source)[-1] == "'":
self.append_child(tokensn)
next_token()
self.append_child(expression())
else:
self.children.append(None)
self.append_child(expression())
if token.value(source) != ',':
break
advance(',') # (
advance(')')
return self
symbol('(').led = led
def nud(self):
comma = False # ((
if token.value(source) != ')':
while True:
if token.value(source) == ')':
break
self.append_child(expression())
if token.value(source) != ',':
break
comma = True
advance(',')
advance(')')
if len(self.children) == 0 or comma:
self.tuple = True
return self
symbol('(').nud = nud # )
def led(self, left):
self.append_child(left)
if token.value(source)[0].isupper() or (token.value(source) == '(' and source[token.start+1].isupper()): # ) # a type name must start with an upper case letter
self.is_type = True
while True:
self.append_child(expression())
if token.value(source) != ',':
break
advance(',')
else:
self.append_child(expression()) # [
advance(']')
return self
symbol('[').led = led
def nud(self):
i = 1 # [[
if token.value(source) != ']': # for `R []`
if token.value(source) == '(': # for `V celltable = [(1, 2) = 1, (1, 3) = 1, (0, 3) = 1]`
while peek_token(i).value(source) != ')':
i += 1
while peek_token(i).value(source) not in ('=', ',', ']'): # for `V cat_to_class_python = [python_to_11l:tokenizer:Token.Category.NAME = ‘identifier’, ...]`
i += 1
if peek_token(i).value(source) == '=':
self.is_dict = True
while True: # [
self.append_child(expression())
if token.value(source) != ',':
break
advance(',')
advance(']')
else:
self.is_list = True
if token.value(source) != ']':
while True: # [[
# if token.value(source) == ']':
# break
self.append_child(expression())
if token.value(source) != ',':
break
advance(',')
advance(']')
return self
symbol('[').nud = nud # ]
def advance_scope_begin():
if token.category != Token.Category.SCOPE_BEGIN:
raise Error('expected a new scope (indented block or opening curly bracket)', token)
next_token()
def nud(self):
self.append_child(expression())
advance_scope_begin()
while token.category != Token.Category.SCOPE_END:
if token.value(source) in ('E', 'И', 'else', 'иначе'):
self.append_child(tokensn)
next_token()
if token.category == Token.Category.SCOPE_BEGIN:
next_token()
self.append_child(expression())
if token.category != Token.Category.SCOPE_END:
raise Error('expected end of scope (dedented block or closing curly bracket)', token)
next_token()
else:
self.append_child(expression())
else:
self.append_child(expression())
advance_scope_begin()
self.append_child(expression())
if token.category != Token.Category.SCOPE_END:
raise Error('expected end of scope (dedented block or closing curly bracket)', token)
next_token()
if token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
next_token()
return self
symbol('S').nud = nud
symbol('В').nud = nud
symbol('switch').nud = nud
symbol('выбрать').nud = nud
def nud(self):
self.append_child(expression())
advance_scope_begin()
self.append_child(expression())
if token.category != Token.Category.SCOPE_END:
raise Error('expected end of scope (dedented block or closing curly bracket)', token)
next_token()
if not token.value(source) in ('E', 'И', 'else', 'иначе'):
raise Error('expected else block', token)
next_token()
self.append_child(expression())
return self
symbol('I').nud = nud
symbol('Е').nud = nud
symbol('if').nud = nud
symbol('если').nud = nud
symbol('{') # }
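# Statement-level parser: consumes tokens and appends AST nodes to this_node.children,
# opening a nested Scope (via new_scope) for every indented block.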
def parse_internal(this_node):
global token, scope
def new_scope(node, func_args = None, call_advance_scope_begin = True):
if call_advance_scope_begin:
advance_scope_begin()
global scope
prev_scope = scope
scope = Scope(func_args)
scope.parent = prev_scope
scope.init_ids_type_node()
scope.node = node
tokensn.scope = scope # можно избавиться от этой строки, если не делать вызов next_token() в advance_scope_begin()
node.scope = scope
parse_internal(node)
scope = prev_scope
if token is not None:
tokensn.scope = scope
def expected_name(what_name):
next_token()
if token.category != Token.Category.NAME:
raise Error('expected ' + what_name, token)
token_value = tokensn.token_str()
next_token()
return token_value
def is_tuple_assignment():
if token.value(source) == '(':
ti = 1
while peek_token(ti).value(source) != ')':
if peek_token(ti).value(source) in ('[', '.'): # ] # `(u[i], u[j]) = (u[j], u[i])`, `(.x, .y, .z) = (vx, vy, vz)`
return False
ti += 1
return peek_token(ti + 1).value(source) == '='
return False
access_specifier_private = False
while token is not None:
if token.value(source) == ':' and peek_token().value(source) in ('start', 'старт') and peek_token(2).value(source) == ':':
node = ASTMain()
next_token()
next_token()
advance(':')
assert(token.category == Token.Category.STATEMENT_SEPARATOR)
next_token()
new_scope(node, [], False)
elif token.value(source) == '.' and type(this_node) == ASTTypeDefinition:
access_specifier_private = True
next_token()
continue
elif token.category == Token.Category.KEYWORD:
if token.value(source).startswith(('F', 'Ф', 'fn', 'фн')):
node = ASTFunctionDefinition()
if '.virtual.' in token.value(source) or \
'.виртуал.' in token.value(source):
subkw = token.value(source)[token.value(source).rfind('.')+1:]
if subkw in ('new', 'новая' ): node.virtual_category = node.VirtualCategory.NEW
elif subkw in ('override', 'переопр' ): node.virtual_category = node.VirtualCategory.OVERRIDE
elif subkw in ('abstract', 'абстракт'): node.virtual_category = node.VirtualCategory.ABSTRACT
elif subkw in ('assign', 'опред' ): node.virtual_category = node.VirtualCategory.ASSIGN
elif subkw in ('final', 'финал' ): node.virtual_category = node.VirtualCategory.FINAL
elif token.value(source) in ('F.destructor', 'Ф.деструктор', 'fn.destructor', 'фн.деструктор'):
if type(this_node) != ASTTypeDefinition:
raise Error('destructor declaration allowed only inside types', token)
node.function_name = '(destructor)' # can not use `~` here because `~` can be an operator overload
if '.const' in token.value(source) or \
'.конст' in token.value(source):
node.is_const = True
next_token()
if node.function_name != '(destructor)':
if token.category == Token.Category.NAME:
node.function_name = tokensn.token_str()
next_token()
elif token.value(source) == '(': # this is constructor [`F () {...}` or `F (...) {...}`] or operator() [`F ()(...) {...}`]
if peek_token().value(source) == ')' and peek_token(2).value(source) == '(': # ) # this is operator()
next_token()
next_token()
node.function_name = '()'
else:
node.function_name = ''
if type(this_node) == ASTTypeDefinition:
this_node.constructors.append(node)
elif token.category == Token.Category.OPERATOR:
node.function_name = token.value(source)
next_token()
else:
raise Error('incorrect function name', token)
if token.value(source) != '(': # )
raise Error('expected `(` after function name', token) # )(
next_token()
was_default_argument = False
prev_type_name = ''
while token.value(source) != ')':
if token.value(source) == "',":
assert(node.first_named_only_argument is None)
node.first_named_only_argument = len(node.function_arguments)
next_token()
continue
type_ = '' # (
if token.value(source)[0].isupper() and peek_token().value(source) not in (',', ')'): # this is a type name
type_ = token.value(source)
next_token()
if token.value(source) == '[': # ]
nesting_level = 0
while True:
type_ += token.value(source)
if token.value(source) == '[':
next_token()
nesting_level += 1
elif token.value(source) == ']':
next_token()
nesting_level -= 1
if nesting_level == 0:
break
elif token.value(source) == ',':
type_ += ' '
next_token()
elif token.value(source) == '=':
next_token()
else:
if token.category != Token.Category.NAME:
raise Error('expected subtype name', token)
next_token()
if token.value(source) == '(':
type_ += '('
next_token()
while token.value(source) != ')':
type_ += token.value(source)
if token.value(source) == ',':
type_ += ' '
next_token()
next_token()
type_ += ')'
if token.value(source) == '?':
type_ += '?'
next_token()
if token.value(source) == '(': # )
type_ = expression().to_type_str()
if type_ == '':
type_ = prev_type_name
qualifier = ''
if token.value(source) == '=':
qualifier = '='
next_token()
elif token.value(source) == '&':
qualifier = '&'
next_token()
if token.category != Token.Category.NAME:
raise Error('expected function\'s argument name', token)
func_arg_name = tokensn.token_str()
next_token()
if token.value(source) == '=':
next_token()
default = expression().to_str()
was_default_argument = True
else:
if was_default_argument and node.first_named_only_argument is None:
raise Error('non-default argument follows default argument', tokens[tokeni-1])
default = ''
node.function_arguments.append((func_arg_name, default, type_, qualifier)) # ((
if token.value(source) not in ',;)':
raise Error('expected `)`, `;` or `,` in function\'s arguments list', token)
if token.value(source) == ',':
next_token()
prev_type_name = type_
elif token.value(source) == ';':
next_token()
prev_type_name = ''
node.last_non_default_argument = len(node.function_arguments) - 1
while node.last_non_default_argument >= 0 and node.function_arguments[node.last_non_default_argument][1] != '':
node.last_non_default_argument -= 1
if node.function_name not in cpp_type_from_11l: # there was an error in line `String sitem` because of `F String()`
scope.add_function(node.function_name, node)
next_token()
if token.value(source) == '->':
next_token()
if token.value(source) in ('N', 'Н', 'null', 'нуль'):
node.function_return_type = token.value(source)
next_token()
elif token.value(source) == '&':
node.function_return_type = 'auto&'
next_token()
else:
node.function_return_type = expression().to_type_str()
if node.virtual_category != node.VirtualCategory.ABSTRACT:
new_scope(node, map(lambda arg: (arg[0], arg[2]), node.function_arguments))
else:
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('T', 'Т', 'type', 'тип', 'T.serializable', 'Т.сериализуемый', 'type.serializable', 'тип.сериализуемый'):
serializable = token.value(source) in ('T.serializable', 'Т.сериализуемый', 'type.serializable', 'тип.сериализуемый')
node = ASTTypeDefinition()
node.type_name = expected_name('type name')
if token.value(source) in ('[', '='): # ] # this is a type alias
n = ASTTypeAlias()
n.name = node.type_name
node = n
scope.add_name(node.name, node)
prev_scope = scope
scope = Scope(None)
scope.parent = prev_scope
if token.value(source) == '[':
next_token()
while True:
if token.category == Token.Category.KEYWORD and token.value(source) in ('T', 'Т', 'type', 'тип'):
next_token()
assert(token.category == Token.Category.NAME)
scope.add_name(token.value(source), ASTTypeDefinition())
node.template_params.append('typename ' + token.value(source))
else:
expr = expression()
type_name = trans_type(expr.to_type_str(), scope, expr.left_to_right_token())
assert(token.category == Token.Category.NAME)
scope.add_name(token.value(source), ASTTypeDefinition()) # :(hack):
node.template_params.append(type_name + ' ' + token.value(source))
next_token()
if token.value(source) == ']':
next_token()
break
advance(',')
advance('=')
expr = expression()
node.defining_type = trans_type(expr.to_type_str(), scope, expr.left_to_right_token())
scope = prev_scope
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
else:
scope.add_name(node.type_name, node)
if token.value(source) == '(':
while True:
node.base_types.append(expected_name('base type name'))
if token.value(source) != ',':
break
if token.value(source) != ')': # (
raise Error('expected `)`', token)
next_token()
new_scope(node)
if serializable:
node.set_serializable_to_children()
for child in node.children:
if type(child) == ASTFunctionDefinition and child.virtual_category != child.VirtualCategory.NO:
node.has_virtual_functions = True
break
elif token.value(source) in ('T.enum', 'Т.перечисл', 'type.enum', 'тип.перечисл'):
node = ASTTypeEnum()
node.enum_name = expected_name('enum name')
scope.add_name(node.enum_name, node)
advance_scope_begin()
while True:
if token.category != Token.Category.NAME:
raise Error('expected an enumerator name', token)
enumerator = token.value(source)
if not enumerator.isupper():
raise Error('enumerators must be uppercase', token)
next_token()
if token.value(source) == '=':
next_token()
enumerator += ' = ' + expression().to_str()
node.enumerators.append(enumerator)
if token.category == Token.Category.SCOPE_END:
next_token()
break
assert(token.category == Token.Category.STATEMENT_SEPARATOR)
next_token()
elif token.value(source).startswith(('I', 'Е', 'if', 'если')):
node = ASTIf()
if '.' in token.value(source):
subkw = token.value(source)[token.value(source).find('.')+1:]
if subkw in ('likely', 'часто'):
node.likely = 1
else:
assert(subkw in ('unlikely', 'редко'))
node.likely = -1
next_token()
node.set_expression(expression())
new_scope(node)
n = node
while token is not None and token.value(source) in ('E', 'И', 'else', 'иначе'):
if peek_token().value(source) in ('I', 'Е', 'if', 'если'):
n.else_or_elif = ASTElseIf()
n.else_or_elif.parent = n
n = n.else_or_elif
next_token()
next_token()
n.set_expression(expression())
new_scope(n)
if token is not None and token.value(source) in ('E', 'И', 'else', 'иначе') and not peek_token().value(source) in ('I', 'Е', 'if', 'если'):
n.else_or_elif = ASTElse()
n.else_or_elif.parent = n
next_token()
if token.category == Token.Category.SCOPE_BEGIN:
new_scope(n.else_or_elif)
else: # for support `I fs:is_dir(_fname) {...} E ...` (without this `else` only `I fs:is_dir(_fname) {...} E {...}` is allowed)
expr_node = ASTExpression()
expr_node.set_expression(expression())
expr_node.parent = n.else_or_elif
n.else_or_elif.children.append(expr_node)
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.SCOPE_END)):
raise Error('expected end of statement', token)
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
break
elif token.value(source) in ('S', 'В', 'switch', 'выбрать'):
node = ASTSwitch()
next_token()
node.set_expression(expression())
advance_scope_begin()
while token.category != Token.Category.SCOPE_END:
case = ASTSwitch.Case()
case.parent = node
if token.value(source) in ('E', 'И', 'else', 'иначе'):
case.set_expression(tokensn)
next_token()
else:
case.set_expression(expression())
ts = case.expression.token_str()
if case.expression.token.category == Token.Category.STRING_LITERAL and not (len(ts) == 3 or (ts[:2] == '"\\' and len(ts) == 4)):
node.has_string_case = True
new_scope(case)
node.cases.append(case)
next_token()
elif token.value(source) in ('L', 'Ц', 'loop', 'цикл'):
if peek_token().value(source) == '(' and peek_token(4).value(source) == '.' and peek_token(4).start == peek_token(3).end:
assert(peek_token(5).value(source) in ('break', 'прервать'))
node = ASTLoopBreak()
node.token = token
next_token()
node.loop_variable = expected_name('loop variable')
advance(')')
advance('.')
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
else:
node = ASTLoop()
next_token()
prev_scope = scope
scope = Scope(None)
scope.parent = prev_scope
if token.category == Token.Category.SCOPE_BEGIN:
node.expression = None
else:
if token.value(source) == '(' and token.start == tokens[tokeni-1].end:
if peek_token().value(source) == '&':
node.is_loop_variable_a_reference = True
next_token()
elif peek_token().value(source) == '=':
node.copy_loop_variable = True
next_token()
node.loop_variable = expected_name('loop variable')
while token.value(source) == ',':
if peek_token().value(source) == '=':
node.copy_loop_variable = True
next_token()
node.loop_variable += ', ' + expected_name('loop variable')
advance(')')
node.set_expression(expression())
if node.loop_variable is not None: # check if loop variable is a [smart] pointer
lv_node = None
if node.expression.token.category == Token.Category.NAME:
id = scope.find(node.expression.token_str())
if id is not None and len(id.ast_nodes) == 1:
lv_node = id.ast_nodes[0]
elif node.expression.symbol.id == '.' and len(node.expression.children) == 2:
lv_node = type_of(node.expression)
if lv_node is not None and isinstance(lv_node, ASTVariableDeclaration) and lv_node.type == 'Array':
tid = scope.find(lv_node.type_args[0])
if tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_virtual_functions:
node.is_loop_variable_a_ptr = True
scope.add_name(node.loop_variable, node)
new_scope(node)
scope = prev_scope
if token is not None and token.value(source) in ('L.was_no_break', 'Ц.не_был_прерван', 'loop.was_no_break', 'цикл.не_был_прерван'):
node.was_no_break_node = ASTLoopWasNoBreak()
node.was_no_break_node.parent = node
next_token()
new_scope(node.was_no_break_node)
elif token.value(source) in ('L.continue', 'Ц.продолжить', 'loop.continue', 'цикл.продолжить'):
node = ASTContinue()
node.token = token
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('L.break', 'Ц.прервать', 'loop.break', 'цикл.прервать'):
node = ASTLoopBreak()
node.token = token
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('L.remove_current_element_and_continue', 'Ц.удалить_текущий_элемент_и_продолжить', 'loop.remove_current_element_and_continue', 'цикл.удалить_текущий_элемент_и_продолжить'):
node = ASTLoopRemoveCurrentElementAndContinue()
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('R', 'Р', 'return', 'вернуть'):
node = ASTReturn()
next_token()
if token.category in (Token.Category.SCOPE_END, Token.Category.STATEMENT_SEPARATOR):
node.expression = None
else:
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('X', 'Х', 'exception', 'исключение'):
node = ASTException()
next_token()
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('X.try', 'Х.контроль', 'exception.try', 'исключение.контроль'):
node = ASTExceptionTry()
next_token()
new_scope(node)
elif token.value(source) in ('X.catch', 'Х.перехват', 'exception.catch', 'исключение.перехват'):
node = ASTExceptionCatch()
if peek_token().category != Token.Category.SCOPE_BEGIN:
if peek_token().value(source) == '.':
next_token()
node.exception_object_type = expected_name('exception object type name').replace(':', '::')
if token.value(source) == ':':
next_token()
node.exception_object_type += '::' + token.value(source)
next_token()
if token.category == Token.Category.NAME:
node.exception_object_name = token.value(source)
next_token()
else:
next_token()
node.exception_object_type = ''
new_scope(node)
else:
raise Error('unrecognized statement started with keyword', token)
elif token.value(source) == '^':
node = ASTLoopBreak()
node.token = token
node.loop_level = 1
next_token()
while token.value(source) == '^':
node.loop_level += 1
next_token()
if token.value(source) not in ('L.break', 'Ц.прервать', 'loop.break', 'цикл.прервать'):
raise Error('expected `L.break`', token)
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.category == Token.Category.SCOPE_END:
next_token()
if token.category == Token.Category.STATEMENT_SEPARATOR and token.end == len(source): # Token.Category.EOF
next_token()
assert(token is None)
return
elif token.category == Token.Category.STATEMENT_SEPARATOR: # this `if` was added in revision 105[‘Almost complete work on tests/python_to_cpp/pqmarkup.txt’] in order to support `hor_col_align = S instr[j .< j + 2] {‘<<’ {‘left’}; ‘>>’ {‘right’}; ‘><’ {‘center’}; ‘<>’ {‘justify’}}` [there was no STATEMENT_SEPARATOR after this line of code]
next_token()
if token is not None:
assert(token.category != Token.Category.STATEMENT_SEPARATOR)
continue
elif ((token.value(source) in ('V', 'П', 'var', 'перем') and peek_token().value(source) == '(') # ) # this is `V (a, b) = ...`
or (token.value(source) == '-' and
peek_token().value(source) in ('V', 'П', 'var', 'перем') and peek_token(2).value(source) == '(')): # this is `-V (a, b) = ...`
node = ASTTupleInitialization()
if token.value(source) == '-':
node.is_const = True
next_token()
next_token()
next_token()
while True:
assert(token.category == Token.Category.NAME)
name = tokensn.token_str()
node.dest_vars.append(name)
scope.add_name(name, node)
next_token()
if token.value(source) == ')':
break
advance(',')
next_token()
advance('=')
node.set_expression(expression())
if node.expression.function_call and node.expression.children[0].symbol.id == '.' \
and len(node.expression.children[0].children) == 2 \
and (node.expression.children[0].children[1].token_str() in ('split', 'split_py') # `V (name, ...) = ....split(...)` ~> `(V name, V ...) = ....split(...)` -> `...assign_from_tuple(name, ...);` (because `auto [name, ...] = ....split(...);` does not working)
or (node.expression.children[0].children[1].token_str() == 'map' # for `V (w, h) = lines[1].split_py().map(i -> Int(i))`
and node.expression.children[0].children[0].function_call)
and node.expression.children[0].children[0].children[0].symbol.id == '.'
and len(node.expression.children[0].children[0].children[0].children) == 2
and node.expression.children[0].children[0].children[0].children[1].token_str() in ('split', 'split_py')):
# n = node
# node = ASTTupleAssignment()
# for dv in n.dest_vars:
# node.dest_vars.append((dv, True))
# node.set_expression(n.expression)
node.bind_array = True
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif is_tuple_assignment(): # this is `(a, b) = ...` or `(a, V b) = ...` or `(V a, b) = ...`
node = ASTTupleAssignment()
next_token()
while True:
if token.category != Token.Category.NAME:
raise Error('expected variable name', token)
add_var = False
if token.value(source) in ('V', 'П', 'var', 'перем'):
add_var = True
next_token()
assert(token.category == Token.Category.NAME)
name = tokensn.token_str()
node.dest_vars.append((name, add_var))
if add_var:
scope.add_name(name, node)
next_token() # (
if token.value(source) == ')':
break
advance(',')
next_token()
advance('=')
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
else:
node_expression = expression()
if node_expression.symbol.id == '.' and node_expression.children[1].token.category == Token.Category.SCOPE_BEGIN: # this is a "with"-statement
node = ASTWith()
node.set_expression(node_expression.children[0])
new_scope(node)
else:
if node_expression.symbol.id == '&' and node_expression.children[0].token.category == Token.Category.NAME and node_expression.children[1].token.category == Token.Category.NAME: # this is a reference declaration (e.g. `Symbol& symbol`)
node = ASTVariableDeclaration()
node.is_reference = True
node.vars = [node_expression.children[1].token_str()]
node.type = node_expression.children[0].token_str()
node.type_token = node_expression.token
node.type_args = []
scope.add_name(node.vars[0], node)
elif token.category == Token.Category.NAME and tokens[tokeni-1].category != Token.Category.SCOPE_END:
var_name = tokensn.token_str()
next_token()
if token.value(source) == '=':
next_token()
node = ASTVariableInitialization()
node.set_expression(expression())
if node_expression.token.value(source) not in ('V', 'П', 'var', 'перем'):
if node_expression.token.value(source) in ('V?', 'П?', 'var?', 'перем?'):
node.is_ptr = True
node.nullable = True
else:
id = scope.find(node_expression.token_str())
if id is not None and len(id.ast_nodes) != 0:
if type(id.ast_nodes[0]) == ASTTypeDefinition and (id.ast_nodes[0].has_virtual_functions or id.ast_nodes[0].has_pointers_to_the_same_type):
node.is_ptr = True
elif node.expression.function_call and node.expression.children[0].token.category == Token.Category.NAME and node.expression.children[0].token_str()[0].isupper(): # for `V animal = Sheep(); animal.say()` -> `...; animal->say();`
id = scope.find(node.expression.children[0].token_str())
if not (id is not None and len(id.ast_nodes) != 0):
raise Error('identifier `' + node.expression.children[0].token_str() + '` is not found', node.expression.children[0].token)
if type(id.ast_nodes[0]) == ASTTypeDefinition: # support for functions beginning with an uppercase letter (e.g. Extract_Min)
if id.ast_nodes[0].has_virtual_functions or id.ast_nodes[0].has_pointers_to_the_same_type:
node.is_ptr = True
# elif id.ast_nodes[0].has_pointers_to_the_same_type:
# node.is_shared_ptr = True
node.vars = [var_name]
else:
node = ASTVariableDeclaration()
id = scope.find(node_expression.token_str().rstrip('?'))
if id is not None:
assert(len(id.ast_nodes) == 1)
if type(id.ast_nodes[0]) not in (ASTTypeDefinition, ASTTypeEnum):
raise Error('identifier is of type `' + type(id.ast_nodes[0]).__name__ + '` (should be ASTTypeDefinition or ASTTypeEnum)', node_expression.token) # this error was in line `String sitem` because of `F String()`
if type(id.ast_nodes[0]) == ASTTypeDefinition:
if id.ast_nodes[0].has_virtual_functions or id.ast_nodes[0].has_pointers_to_the_same_type:
node.is_ptr = True
# elif id.ast_nodes[0].has_pointers_to_the_same_type:
# node.is_shared_ptr = True
node.vars = [var_name]
while token.value(source) == ',':
node.vars.append(expected_name('variable name'))
node.type = node_expression.token.value(source)
if node.type == '-' and len(node_expression.children) == 1:
node.is_const = True
node_expression = node_expression.children[0]
node.type = node_expression.token.value(source)
node.type_token = node_expression.token
node.type_args = []
if node.type == '[': # ]
if node_expression.is_dict:
assert(len(node_expression.children) == 1)
node.type = 'Dict'
node.type_args = [node_expression.children[0].children[0].to_type_str(), node_expression.children[0].children[1].to_type_str()]
elif node_expression.is_list:
assert(len(node_expression.children) == 1)
node.type = 'Array'
node.type_args = [node_expression.children[0].to_type_str()]
else:
assert(node_expression.is_type)
node.type = node_expression.children[0].token.value(source)
for i in range(1, len(node_expression.children)):
node.type_args.append(node_expression.children[i].to_type_str())
elif node.type == '(': # )
if len(node_expression.children) == 1 and node_expression.children[0].symbol.id == '->':
node.function_pointer = True
c0 = node_expression.children[0]
assert(c0.children[1].token.category == Token.Category.NAME or c0.children[1].token_str() in ('N', 'Н', 'null', 'нуль'))
node.type = c0.children[1].token_str() # return value type
if c0.children[0].token.category == Token.Category.NAME:
node.type_args.append(c0.children[0].token_str())
else:
assert(c0.children[0].symbol.id == '(') # )
for child in c0.children[0].children:
assert(child.token.category == Token.Category.NAME)
node.type_args.append(child.token_str())
else: # this is a tuple
for child in node_expression.children:
node.type_args.append(child.to_type_str())
node.type = '(' + ', '.join(node.type_args) + ')'
node.type_args.clear()
elif node.type == '.':
node.type = node_expression.to_str()
if not (node.type[0].isupper() or node.type[0] == '(' or node.type in ('var', 'перем')): # )
                        raise Error('type name must start with an upper case letter', node.type_token)
for var in node.vars:
scope.add_name(var, node)
if type(this_node) == ASTTypeDefinition and this_node.type_name == node.type.rstrip('?'):
this_node.has_pointers_to_the_same_type = True
node.is_ptr = True # node.is_shared_ptr = True
else:
node = ASTExpression()
node.set_expression(node_expression)
if isinstance(this_node, ASTTypeDefinition) and node_expression.symbol.id == '=': # fix error ‘identifier `disInter` is not found in `r`’ in '9.yopyra.py'
scope.add_name(node_expression.children[0].token_str(), node)
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.SCOPE_END) or tokens[tokeni-1].category == Token.Category.SCOPE_END):
raise Error('expected end of statement', token)
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
if access_specifier_private:
node.access_specifier_public = 0
access_specifier_private = False
node.parent = this_node
this_node.children.append(node)
return
tokens = []
source = ''
tokeni = -1
token = Token(0, 0, Token.Category.STATEMENT_SEPARATOR)
#scope = Scope(None)
#tokensn = SymbolNode(token)
file_name = ''
importing_module = False
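# The module-level variables above hold the state of the parser for the file currently being
# processed; parse_and_to_str() below saves and restores them (see its prev_* bookkeeping),
# which is presumably what allows other 11l modules to be imported and parsed recursively.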
def token_to_str(token_str_override, token_category = Token.Category.STRING_LITERAL):
return SymbolNode(Token(0, 0, token_category), token_str_override).to_str()
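# token_to_str() above is a small helper used when registering the built-ins below: it wraps a
# literal (a string literal by default) into a SymbolNode and renders it with to_str(), so that
# default argument values of the built-in functions are stored already converted.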
builtins_scope = Scope(None)
scope = builtins_scope
global_scope : Scope
tokensn = SymbolNode(token)
f = ASTFunctionDefinition([('object', token_to_str('‘’'), ''), ('end', token_to_str(R'"\n"'), 'String'), ('flush', token_to_str('0B', Token.Category.CONSTANT), 'Bool')])
f.first_named_only_argument = 1
builtins_scope.add_function('print', f)
f = ASTFunctionDefinition([('object', token_to_str('‘’'), ''), ('sep', token_to_str('‘ ’'), 'String'),
('end', token_to_str(R'"\n"'), 'String'), ('flush', token_to_str('0B', Token.Category.CONSTANT), 'Bool')])
f.first_named_only_argument = 1
builtins_scope.add_function('print_elements', f)
builtins_scope.add_function('input', ASTFunctionDefinition([('prompt', token_to_str('‘’'), 'String')], 'String'))
builtins_scope.add_function('assert', ASTFunctionDefinition([('expression', '', 'Bool'), ('message', token_to_str('‘’'), 'String')]))
builtins_scope.add_function('exit', ASTFunctionDefinition([('arg', '0', '')]))
builtins_scope.add_function('swap', ASTFunctionDefinition([('a', '', '', '&'), ('b', '', '', '&')]))
builtins_scope.add_function('zip', ASTFunctionDefinition([('iterable1', '', ''), ('iterable2', '', ''), ('iterable3', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('all', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('any', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('cart_product', ASTFunctionDefinition([('iterable1', '', ''), ('iterable2', '', ''), ('iterable3', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('multiloop', ASTFunctionDefinition([('iterable1', '', ''), ('iterable2', '', ''), ('function', '', ''), ('optional', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('multiloop_filtered', ASTFunctionDefinition([('iterable1', '', ''), ('iterable2', '', ''), ('filter_function', '', ''), ('function', '', ''), ('optional', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('sum', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('product', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('enumerate', ASTFunctionDefinition([('iterable', '', ''), ('start', '0', 'Int')]))
builtins_scope.add_function('sorted', ASTFunctionDefinition([('iterable', '', ''), ('key', token_to_str('N', Token.Category.CONSTANT), ''), ('reverse', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
builtins_scope.add_function('tuple_sorted', ASTFunctionDefinition([('tuple', '', ''), ('key', token_to_str('N', Token.Category.CONSTANT), ''), ('reverse', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
builtins_scope.add_function('reversed', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('min', ASTFunctionDefinition([('arg1', '', ''), ('arg2', token_to_str('N', Token.Category.CONSTANT), ''), ('arg3', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('max', ASTFunctionDefinition([('arg1', '', ''), ('arg2', token_to_str('N', Token.Category.CONSTANT), ''), ('arg3', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('divmod', ASTFunctionDefinition([('x', '', ''), ('y', '', '')]))
builtins_scope.add_function('factorial', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('gcd', ASTFunctionDefinition([('a', '', ''), ('b', '', '')]))
builtins_scope.add_function('hex', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('bin', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('copy', ASTFunctionDefinition([('object', '', '')]))
builtins_scope.add_function('move', ASTFunctionDefinition([('object', '', '')]))
builtins_scope.add_function('hash', ASTFunctionDefinition([('object', '', '')]))
builtins_scope.add_function('rotl', ASTFunctionDefinition([('value', '', 'Int'), ('shift', '', 'Int')]))
builtins_scope.add_function('rotr', ASTFunctionDefinition([('value', '', 'Int'), ('shift', '', 'Int')]))
builtins_scope.add_function('bsr', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('bsf', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('bit_length', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('round', ASTFunctionDefinition([('number', '', 'Float'), ('ndigits', '0', '')]))
builtins_scope.add_function('sleep', ASTFunctionDefinition([('secs', '', 'Float')]))
builtins_scope.add_function('ceil', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('floor', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('trunc', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('fract', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('wrap', ASTFunctionDefinition([('x', '', 'Float'), ('min_value', '', 'Float'), ('max_value', '', 'Float')]))
builtins_scope.add_function('abs', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('exp', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('log', ASTFunctionDefinition([('x', '', 'Float'), ('base', '0', 'Float')]))
builtins_scope.add_function('log2', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('log10', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('pow', ASTFunctionDefinition([('x', '', 'Float'), ('y', '', 'Float')]))
builtins_scope.add_function('sqrt', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('acos', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('asin', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('atan', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('atan2', ASTFunctionDefinition([('x', '', 'Float'), ('y', '', 'Float')]))
builtins_scope.add_function('cos', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('sin', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('tan', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('degrees', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('radians', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('dot', ASTFunctionDefinition([('v1', '', ''), ('v2', '', '')]))
builtins_scope.add_function('cross', ASTFunctionDefinition([('v1', '', ''), ('v2', '', '')]))
builtins_scope.add_function('perp', ASTFunctionDefinition([('v', '', '')]))
builtins_scope.add_function('sqlen', ASTFunctionDefinition([('v', '', '')]))
builtins_scope.add_function('length', ASTFunctionDefinition([('v', '', '')]))
builtins_scope.add_function('normalize', ASTFunctionDefinition([('v', '', '')]))
builtins_scope.add_function('conjugate', ASTFunctionDefinition([('c', '', '')]))
builtins_scope.add_function('ValueError', ASTFunctionDefinition([('s', '', 'String')]))
builtins_scope.add_function('IndexError', ASTFunctionDefinition([('index', '', 'Int')]))
def add_builtin_global_var(var_name, var_type, var_type_args = []):
var = ASTVariableDeclaration()
var.vars = [var_name]
var.type = var_type
var.type_args = var_type_args
builtins_scope.add_name(var_name, var)
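# Predefined global variables available to every program (accessed as `:argv`, `:stdout`, etc.):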
add_builtin_global_var('argv', 'Array', ['String'])
add_builtin_global_var('stdin', 'File')
add_builtin_global_var('stdout', 'File')
add_builtin_global_var('stderr', 'File')
builtins_scope.add_name('Char', ASTTypeDefinition([ASTFunctionDefinition([('code', '', 'Int')])]))
char_scope = Scope(None)
char_scope.add_name('is_digit', ASTFunctionDefinition([]))
builtins_scope.ids['Char'].ast_nodes[0].scope = char_scope
builtins_scope.add_name('File', ASTTypeDefinition([ASTFunctionDefinition([('name', '', 'String'), ('mode', token_to_str('‘r’'), 'String'), ('encoding', token_to_str('‘utf-8’'), 'String')])]))
file_scope = Scope(None)
file_scope.add_name('read_bytes', ASTFunctionDefinition([]))
file_scope.add_name('write_bytes', ASTFunctionDefinition([('bytes', '', '[Byte]')]))
file_scope.add_name('read', ASTFunctionDefinition([('size', token_to_str('N', Token.Category.CONSTANT), 'Int?')]))
file_scope.add_name('write', ASTFunctionDefinition([('s', '', 'String')]))
file_scope.add_name('read_lines', ASTFunctionDefinition([('keep_newline', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
file_scope.add_name('read_line', ASTFunctionDefinition([('keep_newline', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
file_scope.add_name('flush', ASTFunctionDefinition([]))
file_scope.add_name('close', ASTFunctionDefinition([]))
builtins_scope.ids['File'].ast_nodes[0].scope = file_scope
for type_ in cpp_type_from_11l:
builtins_scope.add_name(type_, ASTTypeDefinition([ASTFunctionDefinition([('object', token_to_str('‘’'), '')])]))
f = ASTFunctionDefinition([('x', '', ''), ('radix', '10', 'Int')])
f.first_named_only_argument = 1
builtins_scope.ids['Int'].ast_nodes[0] = ASTTypeDefinition([f])
string_scope = Scope(None)
str_last_member_var_decl = ASTVariableDeclaration()
str_last_member_var_decl.type = 'Char'
string_scope.add_name('last', str_last_member_var_decl)
string_scope.add_name('starts_with', ASTFunctionDefinition([('prefix', '', 'String')]))
string_scope.add_name('ends_with', ASTFunctionDefinition([('suffix', '', 'String')]))
string_scope.add_name('split', ASTFunctionDefinition([('delim', '', 'String'), ('limit', token_to_str('N', Token.Category.CONSTANT), 'Int?'), ('group_delimiters', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
string_scope.add_name('split_py', ASTFunctionDefinition([]))
string_scope.add_name('rtrim', ASTFunctionDefinition([('s', '', 'String'), ('limit', token_to_str('N', Token.Category.CONSTANT), 'Int?')]))
string_scope.add_name('ltrim', ASTFunctionDefinition([('s', '', 'String'), ('limit', token_to_str('N', Token.Category.CONSTANT), 'Int?')]))
string_scope.add_name('trim', ASTFunctionDefinition([('s', '', 'String')]))
string_scope.add_name('find', ASTFunctionDefinition([('s', '', 'String')]))
string_scope.add_name('findi', ASTFunctionDefinition([('s', '', 'String'), ('start', '0', 'Int')]))
string_scope.add_name('rfindi', ASTFunctionDefinition([('s', '', 'String'), ('start', '0', 'Int'), ('end', token_to_str('N', Token.Category.CONSTANT), 'Int?')]))
string_scope.add_name('count', ASTFunctionDefinition([('s', '', 'String')]))
string_scope.add_name('replace', ASTFunctionDefinition([('old', '', 'String'), ('new', '', 'String')]))
string_scope.add_name('lowercase', ASTFunctionDefinition([]))
string_scope.add_name('uppercase', ASTFunctionDefinition([]))
string_scope.add_name('zfill', ASTFunctionDefinition([('width', '', 'Int')]))
string_scope.add_name('center', ASTFunctionDefinition([('width', '', 'Int'), ('fillchar', token_to_str('‘ ’'), 'Char')]))
string_scope.add_name('ljust', ASTFunctionDefinition([('width', '', 'Int'), ('fillchar', token_to_str('‘ ’'), 'Char')]))
string_scope.add_name('rjust', ASTFunctionDefinition([('width', '', 'Int'), ('fillchar', token_to_str('‘ ’'), 'Char')]))
string_scope.add_name('format', ASTFunctionDefinition([('arg', token_to_str('N', Token.Category.CONSTANT), '')] * 32))
string_scope.add_name('map', ASTFunctionDefinition([('function', '', '(Char -> T)')]))
builtins_scope.ids['String'].ast_nodes[0].scope = string_scope
array_scope = Scope(None)
arr_last_member_var_decl = ASTVariableDeclaration()
arr_last_member_var_decl.type = 'T'
array_scope.add_name('last', arr_last_member_var_decl)
array_scope.add_name('append', ASTFunctionDefinition([('x', '', '')]))
array_scope.add_name('extend', ASTFunctionDefinition([('t', '', '')]))
array_scope.add_name('remove', ASTFunctionDefinition([('x', '', '')]))
array_scope.add_name('count', ASTFunctionDefinition([('x', '', '')]))
array_scope.add_name('index', ASTFunctionDefinition([('x', '', ''), ('i', '0', 'Int')]))
array_scope.add_name('pop', ASTFunctionDefinition([('i', '-1', 'Int')]))
array_scope.add_name('insert', ASTFunctionDefinition([('i', '', 'Int'), ('x', '', '')]))
array_scope.add_name('reverse', ASTFunctionDefinition([]))
array_scope.add_name('reverse_range', ASTFunctionDefinition([('range', '', 'Range')]))
array_scope.add_name('next_permutation', ASTFunctionDefinition([]))
array_scope.add_name('clear', ASTFunctionDefinition([]))
array_scope.add_name('drop', ASTFunctionDefinition([]))
array_scope.add_name('map', ASTFunctionDefinition([('f', '', '')]))
array_scope.add_name('filter', ASTFunctionDefinition([('f', '', '')]))
array_scope.add_name('join', ASTFunctionDefinition([('sep', '', 'String')]))
array_scope.add_name('sort', ASTFunctionDefinition([('key', token_to_str('N', Token.Category.CONSTANT), ''), ('reverse', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
builtins_scope.ids['Array'].ast_nodes[0].scope = array_scope
dict_scope = Scope(None)
dict_scope.add_name('find', ASTFunctionDefinition([('k', '', '')]))
dict_scope.add_name('keys', ASTFunctionDefinition([]))
dict_scope.add_name('values', ASTFunctionDefinition([]))
builtins_scope.ids['Dict'].ast_nodes[0].scope = dict_scope
builtins_scope.ids['DefaultDict'].ast_nodes[0].scope = dict_scope
set_scope = Scope(None)
set_scope.add_name('intersection', ASTFunctionDefinition([('other', '', 'Set')]))
set_scope.add_name('difference', ASTFunctionDefinition([('other', '', 'Set')]))
set_scope.add_name('symmetric_difference', ASTFunctionDefinition([('other', '', 'Set')]))
set_scope.add_name('is_subset', ASTFunctionDefinition([('other', '', 'Set')]))
set_scope.add_name('add', ASTFunctionDefinition([('elem', '', '')]))
set_scope.add_name('discard', ASTFunctionDefinition([('elem', '', '')]))
set_scope.add_name('map', ASTFunctionDefinition([('f', '', '')]))
builtins_scope.ids['Set'].ast_nodes[0].scope = set_scope
deque_scope = Scope(None)
deque_scope.add_name('append', ASTFunctionDefinition([('x', '', '')]))
deque_scope.add_name('pop_left', ASTFunctionDefinition([]))
builtins_scope.ids['Deque'].ast_nodes[0].scope = deque_scope
module_scope = Scope(None)
builtin_modules['math'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('get_temp_dir', ASTFunctionDefinition([]))
module_scope.add_function('list_dir', ASTFunctionDefinition([('path', token_to_str('‘.’'), 'String')]))
module_scope.add_function('walk_dir', ASTFunctionDefinition([('path', token_to_str('‘.’'), 'String'), ('dir_filter', token_to_str('N', Token.Category.CONSTANT), '(String -> Bool)?'), ('files_only', token_to_str('1B', Token.Category.CONSTANT), 'Bool')]))
module_scope.add_function('is_dir', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('is_file', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('is_symlink', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('file_size', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('create_dir', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('create_dirs', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('remove_file', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('remove_dir', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('remove_all', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('rename', ASTFunctionDefinition([('old_path', '', 'String'), ('new_path', '', 'String')]))
builtin_modules['fs'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('join', ASTFunctionDefinition([('path1', '', 'String'), ('path2', '', 'String')]))
module_scope.add_function('base_name', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('dir_name', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('absolute', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('relative', ASTFunctionDefinition([('path', '', 'String'), ('base', '', 'String')]))
module_scope.add_function('split_ext', ASTFunctionDefinition([('path', '', 'String')]))
builtin_modules['fs::path'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('', ASTFunctionDefinition([('command', '', 'String')]))
module_scope.add_function('getenv', ASTFunctionDefinition([('name', '', 'String'), ('default', token_to_str('‘’'), 'String')]))
module_scope.add_function('setenv', ASTFunctionDefinition([('name', '', 'String'), ('value', '', 'String')]))
builtin_modules['os'] = Module(module_scope)
builtins_scope.add_name('Time', ASTTypeDefinition([ASTFunctionDefinition([('year', '0', 'Int'), ('month', '1', 'Int'), ('day', '1', 'Int'), ('hour', '0', 'Int'), ('minute', '0', 'Int'), ('second', '0', 'Float')])]))
time_scope = Scope(None)
time_scope.add_name('unix_time', ASTFunctionDefinition([]))
time_scope.add_name('strftime', ASTFunctionDefinition([('format', '', 'String')]))
time_scope.add_name('format', ASTFunctionDefinition([('format', '', 'String')]))
builtins_scope.ids['Time'].ast_nodes[0].scope = time_scope
f = ASTFunctionDefinition([('days', '0', 'Float'), ('hours', '0', 'Float'), ('minutes', '0', 'Float'), ('seconds', '0', 'Float'), ('milliseconds', '0', 'Float'), ('microseconds', '0', 'Float'), ('weeks', '0', 'Float')])
f.first_named_only_argument = 0
builtins_scope.add_name('TimeDelta', ASTTypeDefinition([f]))
time_delta_scope = Scope(None)
time_delta_scope.add_name('days', ASTFunctionDefinition([]))
builtins_scope.ids['TimeDelta'].ast_nodes[0].scope = time_delta_scope
module_scope = Scope(None)
module_scope.add_function('perf_counter', ASTFunctionDefinition([]))
module_scope.add_function('today', ASTFunctionDefinition([]))
module_scope.add_function('from_unix_time', ASTFunctionDefinition([('unix_time', '', 'Float')]))
module_scope.add_function('strptime', ASTFunctionDefinition([('datetime_string', '', 'String'), ('format', '', 'String')]))
builtin_modules['time'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('', ASTFunctionDefinition([('pattern', '', 'String')]))
builtin_modules['re'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('', ASTFunctionDefinition([('stop', '1', 'Float')]))
module_scope.add_function('seed', ASTFunctionDefinition([('s', '', 'Int')]))
module_scope.add_function('shuffle', ASTFunctionDefinition([('container', '', '', '&')]))
module_scope.add_function('choice', ASTFunctionDefinition([('container', '', '')]))
builtin_modules['random'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('push', ASTFunctionDefinition([('array', '', '', '&'), ('item', '', '')]))
module_scope.add_function('pop', ASTFunctionDefinition([('array', '', '', '&')]))
module_scope.add_function('heapify', ASTFunctionDefinition([('array', '', '', '&')]))
builtin_modules['minheap'] = Module(module_scope)
builtin_modules['maxheap'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('to_object', ASTFunctionDefinition([('json_str', '', 'String'), ('obj', '', '', '&')]))
module_scope.add_function('from_object', ASTFunctionDefinition([('obj', '', ''), ('indent', '4', '')]))
builtin_modules['json'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('to_object', ASTFunctionDefinition([('eldf_str', '', 'String'), ('obj', '', '', '&')]))
module_scope.add_function('from_object', ASTFunctionDefinition([('obj', '', ''), ('indent', '4', 'Int')]))
module_scope.add_function('from_json', ASTFunctionDefinition([('json_str', '', 'String')]))
module_scope.add_function('to_json', ASTFunctionDefinition([('eldf_str', '', 'String')]))
module_scope.add_function('reparse', ASTFunctionDefinition([('eldf_str', '', 'String')]))
module_scope.add_function('test_parse', ASTFunctionDefinition([('eldf_str', '', 'String')]))
builtin_modules['eldf'] = Module(module_scope)
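# Everything registered above (built-in functions, types and modules) lives in `builtins_scope`;
# parse_and_to_str() below makes it visible to user code by chaining the freshly created global
# scope to `builtins_scope` via `scope.parent`.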
def parse_and_to_str(tokens_, source_, file_name_, importing_module_ = False, append_main = False, suppress_error_please_wrap_in_copy = False): # option suppress_error_please_wrap_in_copy is needed to simplify conversion of large Python source into C++
if len(tokens_) == 0: return ASTProgram().to_str()
global tokens, source, tokeni, token, break_label_index, scope, global_scope, tokensn, file_name, importing_module, modules
prev_tokens = tokens
prev_source = source
prev_tokeni = tokeni
prev_token = token
# prev_scope = scope
prev_tokensn = tokensn
prev_file_name = file_name
prev_importing_module = importing_module
prev_break_label_index = break_label_index
tokens = tokens_ + [Token(len(source_), len(source_), Token.Category.STATEMENT_SEPARATOR)]
source = source_
tokeni = -1
token = None
break_label_index = -1
scope = Scope(None)
if not importing_module_:
global_scope = scope
scope.parent = builtins_scope
file_name = file_name_
importing_module = importing_module_
prev_modules = modules
modules = {}
next_token()
p = ASTProgram()
parse_internal(p)
if len(modules):
p.beginning_extra = "\n".join(map(lambda m: 'namespace ' + m.replace('::', ' { namespace ') + " {\n#include \"" + m.replace('::', '/') + ".hpp\"\n}" + '}'*m.count('::'), modules)) + "\n\n"
found_reference_to_argv = False
def find_reference_to_argv(node):
def f(e : SymbolNode):
if len(e.children) == 1 and e.symbol.id == ':' and e.children[0].token_str() == 'argv':
nonlocal found_reference_to_argv
found_reference_to_argv = True
return
for child in e.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(find_reference_to_argv)
find_reference_to_argv(p)
if found_reference_to_argv:
if type(p.children[-1]) != ASTMain:
raise Error("`sys.argv`->`:argv` can be used only after `if __name__ == '__main__':`->`:start:`", tokens[-1])
p.children[-1].found_reference_to_argv = True
p.beginning_extra += "Array<String> argv;\n\n"
s = p.to_str() # call `to_str()` moved here [from outside] because it accesses global variables `source` (via `token.value(source)`) and `tokens` (via `tokens[ti]`)
if append_main and type(p.children[-1]) != ASTMain:
s += "\nint main()\n{\n}\n"
tokens = prev_tokens
source = prev_source
tokeni = prev_tokeni
token = prev_token
# scope = prev_scope
tokensn = prev_tokensn
file_name = prev_file_name
importing_module = prev_importing_module
break_label_index = prev_break_label_index
modules = prev_modules
return s
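# A minimal usage sketch (an illustration only; the companion tokenizer module and the actual
# entry-point wiring are assumptions, as they are not part of this file):
#
#   import tokenizer, parse
#   src = "print(‘Hello, world!’)"
#   cpp = parse.parse_and_to_str(tokenizer.tokenize(src), src, 'hello.11l')
#   print(cpp)  # the generated C++ translation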
| 11l | /11l-2021.3-py3-none-any.whl/_11l_to_cpp/parse.py | parse.py |
R"""
После данной обработки отступы перестают играть роль — границу `scope` всегда определяют фигурные скобки.
Также здесь выполняется склеивание строк, и таким образом границу statement\утверждения задаёт либо символ `;`,
либо символ новой строки (при условии, что перед ним не стоит символ `…`!).
===============================================================================================================
Ошибки:
---------------------------------------------------------------------------------------------------------------
Error: `if/else/fn/loop/switch/type` scope is empty.
---------------------------------------------------------------------------------------------------------------
Существуют операторы, которые всегда требуют нового scope\блока, который можно обозначить двумя способами:
1. Начать следующую строку с отступом относительно предыдущей, например:
if condition\условие
scope\блок
2. Заключить блок\scope в фигурные скобки:
if condition\условие {scope\блок}
Примечание. При использовании второго способа блок\scope может иметь произвольный уровень отступа:
if condition\условие
{
scope\блок
}
---------------------------------------------------------------------------------------------------------------
Error: `if/else/fn/loop/switch/type` scope is empty, after applied implied line joining: ```...```
---------------------------------------------------------------------------------------------------------------
Сообщение об ошибке аналогично предыдущему, но выделено в отдельное сообщение об ошибке, так как может
возникать по вине ошибочного срабатывания автоматического склеивания строк (и показывается оно тогда, когда
было произведено склеивание строк в месте данной ошибки).
---------------------------------------------------------------------------------------------------------------
Error: mixing tabs and spaces in indentation: `...`
---------------------------------------------------------------------------------------------------------------
В одной строке для отступа используется смесь пробелов и символов табуляции.
Выберите что-либо одно (желательно сразу для всего файла): либо пробелы для отступа, либо табуляцию.
Примечание: внутри строковых литералов, в комментариях, а также внутри строк кода можно смешивать пробелы и
табуляцию. Эта ошибка генерируется только при проверке отступов (отступ — последовательность символов пробелов
или табуляции от самого начала строки до первого символа отличного от пробела и табуляции).
---------------------------------------------------------------------------------------------------------------
Error: inconsistent indentations: ```...```
---------------------------------------------------------------------------------------------------------------
В текущей строке кода для отступа используются пробелы, а в предыдущей строке — табуляция (либо наоборот).
[[[
Сообщение было предназначено для несколько другой ошибки: для любых двух соседних строк, если взять отступ
одной из них, то другой отступ должен начинаться с него же {если отступ текущей строки отличается от отступа
предыдущей, то:
1. Когда отступ текущей строки начинается на отступ предыдущей строки, это INDENT.
2. Когда отступ предыдущей строки начинается на отступ текущей строки, это DEDENT.
}. Например:
if a:
SSTABif b:
SSTABTABi = 0
SSTABSi = 0
Последняя пара строк не удовлетворяет этому требованию, так как ни строка ‘SSTABTAB’ не начинается на строку
‘SSTABS’, ни ‘SSTABS’ не начинается на ‘SSTABTAB’.
Эта проверка имела бы смысл в случае разрешения смешения пробелов и табуляции для отступа в пределах одной
строки (а это разрешено в Python). Но я решил отказаться от этой идеи, а лучшего текста сообщения для этой
ошибки не придумал.
]]]
---------------------------------------------------------------------------------------------------------------
Error: unindent does not match any outer indentation level
---------------------------------------------------------------------------------------------------------------
[-Добавить описание ошибки.-]
===============================================================================================================
"""
from enum import IntEnum
from typing import List, Tuple
Char = str
keywords = ['V', 'C', 'I', 'E', 'F', 'L', 'N', 'R', 'S', 'T', 'X',
'П', 'С', 'Е', 'И', 'Ф', 'Ц', 'Н', 'Р', 'В', 'Т', 'Х',
'var', 'in', 'if', 'else', 'fn', 'loop', 'null', 'return', 'switch', 'type', 'exception',
'перем', 'С', 'если', 'иначе', 'фн', 'цикл', 'нуль', 'вернуть', 'выбрать', 'тип', 'исключение']
#keywords.remove('C'); keywords.remove('С'); keywords.remove('in') # it is more convenient to consider C/in as an operator, not a keyword (however, this line is not necessary)
# new_scope_keywords = ['else', 'fn', 'if', 'loop', 'switch', 'type']
# Decided not to handle new_scope_keywords at the lexer level because of loop.break and of case inside switch
empty_list_of_str : List[str] = []
binary_operators : List[List[str]] = [empty_list_of_str, [str('+'), '-', '*', '/', '%', '^', '&', '|', '<', '>', '=', '?'], ['<<', '>>', '<=', '>=', '==', '!=', '+=', '-=', '*=', '/=', '%=', '&=', '|=', '^=', '->', '..', '.<', '.+', '<.', 'I/', 'Ц/', 'C ', 'С '], ['<<=', '>>=', '‘’=', '[+]', '[&]', '[|]', '(+)', '<.<', 'I/=', 'Ц/=', 'in ', '!C ', '!С '], ['[+]=', '[&]=', '[|]=', '(+)=', '!in ']]
unary_operators : List[List[str]] = [empty_list_of_str, [str('!')], ['++', '--'], ['(-)']]
sorted_operators = sorted(binary_operators[1] + binary_operators[2] + binary_operators[3] + binary_operators[4] + unary_operators[1] + unary_operators[2] + unary_operators[3], key = lambda x: len(x), reverse = True)
binary_operators[1].remove('^') # for `^L.break` support
binary_operators[2].remove('..') # for `L(n) 1..`
class Error(Exception):
message : str
pos : int
end : int
def __init__(self, message, pos):
self.message = message
self.pos = pos
self.end = pos
class Token:
class Category(IntEnum): # why ‘Category’: >[https://docs.python.org/3/reference/lexical_analysis.html#other-tokens]:‘the following categories of tokens exist’
NAME = 0 # or IDENTIFIER
KEYWORD = 1
CONSTANT = 2
DELIMITER = 3 # SEPARATOR = 3
OPERATOR = 4
NUMERIC_LITERAL = 5
STRING_LITERAL = 6
STRING_CONCATENATOR = 7 # special token inserted between adjacent string literal and some identifier
SCOPE_BEGIN = 8 # similar to ‘INDENT token in Python’[https://docs.python.org/3/reference/lexical_analysis.html][-1]
SCOPE_END = 9 # similar to ‘DEDENT token in Python’[-1]
STATEMENT_SEPARATOR = 10
start : int
end : int
category : Category
def __init__(self, start, end, category):
self.start = start
self.end = end
self.category = category
def __repr__(self):
return str(self.start)
def value(self, source):
return source[self.start:self.end]
def to_str(self, source):
return 'Token('+str(self.category)+', "'+self.value(source)+'")'
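# A rough, assumed illustration of the token stream: for a line like `V x = 1` the tokenizer
# below should emit a KEYWORD token for `V`, a NAME token for `x`, an OPERATOR token for `=`
# and a NUMERIC_LITERAL token for `1` (plus scope/statement-separator tokens as dictated by the
# surrounding lines).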
def tokenize(source : str, implied_scopes : List[Tuple[Char, int]] = None, line_continuations : List[int] = None, comments : List[Tuple[int, int]] = None):
tokens : List[Token] = []
indentation_levels : List[Tuple[int, bool]] = []
    nesting_elements : List[Tuple[Char, int]] = [] # logically this stack could be merged with indentation_levels, but keeping it separate is slightly more convenient (specifically: for the checks `nesting_elements[-1][0] != ...`)
i = 0
begin_of_line = True
indentation_tabs : bool
prev_linestart : int
def skip_multiline_comment():
nonlocal i, source, comments
comment_start = i
lbr = source[i+1]
rbr = {"‘": "’", "(": ")", "{": "}", "[": "]"}[lbr]
i += 2
nesting_level = 1
while True:
ch = source[i]
i += 1
if ch == lbr:
nesting_level += 1
elif ch == rbr:
nesting_level -= 1
if nesting_level == 0:
break
if i == len(source):
                raise Error('there is no corresponding closing parenthesis/bracket/brace/quote for `' + lbr + '`', comment_start+1)
if comments is not None:
comments.append((comment_start, i))
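    # skip_multiline_comment() above skips nestable multi-line comments such as
    # `\{ outer { nested } still outer }` or `\‘ ... ’`: the matching closing bracket is found
    # by tracking the nesting level, and the comment's span is appended to `comments` when that
    # list is provided by the caller.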
while i < len(source):
if begin_of_line: # at the beginning of each line, the line's indentation level is compared to the last indentation_levels [:1]
begin_of_line = False
linestart = i
tabs = False
spaces = False
while i < len(source):
if source[i] == ' ':
spaces = True
elif source[i] == "\t":
tabs = True
else:
break
i += 1
if i == len(source): # end of source
break
ii = i
if source[i:i+2] in (R'\‘', R'\(', R'\{', R'\['): # ]})’
skip_multiline_comment()
while i < len(source) and source[i] in " \t": # skip whitespace characters
i += 1
if i == len(source): # end of source
break
if source[i] in "\r\n" or source[i:i+2] in ('//', R'\\'): # lines with only whitespace and/or comments do not affect the indentation
continue
if source[i] in "{}": # Indentation level of lines starting with { or } is ignored
continue
if len(tokens) \
and tokens[-1].category == Token.Category.STRING_CONCATENATOR \
and source[i] in '"\'‘': # ’ and not source[i+1:i+2] in ({'"':'"', '‘':'’'}[source[i]],):
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
if source[i:i+2] in ('""', '‘’'):
i += 2
continue
if len(tokens) \
and tokens[-1].category == Token.Category.STRING_LITERAL \
and source[i:i+2] in ('""', '‘’'):
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
tokens.append(Token(i, i, Token.Category.STRING_CONCATENATOR))
i += 2
continue
if (len(tokens)
and tokens[-1].category == Token.Category.OPERATOR
and tokens[-1].value(source) in binary_operators[tokens[-1].end - tokens[-1].start] # ‘Every line of code which ends with any binary operator should be joined with the following line of code.’:[https://github.com/JuliaLang/julia/issues/2097#issuecomment-339924750][-339924750]<
and source[tokens[-1].end-4:tokens[-1].end] != '-> &'): # for `F symbol(id, bp = 0) -> &`
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
continue
            # if not (len(indentation_levels) and indentation_levels[-1][0] == -1): # right after a `{` character this [:rule] should not apply ...though I cannot come up with an example that would show the necessity of such a check, so I leave this `if` commented out # }
if ((source[i ] in binary_operators[1]
or source[i:i+2] in binary_operators[2]
or source[i:i+3] in binary_operators[3]
or source[i:i+4] in binary_operators[4]) # [правило:] ‘Every line of code which begins with any binary operator should be joined with the previous line of code.’:[-339924750]<
and not (source[i ] in unary_operators[1] # Rude fix for:
or source[i:i+2] in unary_operators[2] # a=b
or source[i:i+3] in unary_operators[3]) # ++i // Plus symbol at the beginning here should not be treated as binary + operator, so there is no implied line joining
                and (source[i] not in ('&', '-') or source[i+1:i+2] == ' ')): # The characters `&` and `-` are handled specially: line joining happens only if such a character is followed by a space
if len(tokens) == 0:
                    raise Error('source can not start with a binary operator', i)
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
continue
if source[i:i+2] == R'\.': # // Support for constructions like: ||| You need just to add `\` at the each line starting from dot:
if len(tokens): # \\ result = abc.method1() ||| result = abc.method1()
i += 1 # \\ .method2() ||| \.method2()
#else: # with `if len(tokens): i += 1` there is no need for this else branch
# raise Error('unexpected character `\`')
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
continue
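            # Summarising the implied line joining handled above (an informal restatement): a line
            # whose previous line ended with a binary operator is glued to it, and a line that itself
            # begins with a binary operator is glued to the previous one (unless the leading characters
            # form a unary operator such as `!`/`++`/`--`, or are `&`/`-` not followed by a space);
            # a leading `\.` continues a method-call chain. For example
            #     r = a *
            #         b
            # is read as the single statement `r = a * b`.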
if tabs and spaces:
next_line_pos = source.find("\n", i)
raise Error('mixing tabs and spaces in indentation: `' + source[linestart:i].replace(' ', 'S').replace("\t", 'TAB') + source[i:next_line_pos if next_line_pos != -1 else len(source)] + '`', i)
indentation_level = ii - linestart
            if len(indentation_levels) and indentation_levels[-1][0] == -1: # right after a `{` character a new arbitrary indent begins (lowering the indentation level can be useful if the indent happened to become too large), and it stays in effect up to the matching `}` character
indentation_levels[-1] = (indentation_level, indentation_levels[-1][1]) #indentation_levels[-1][0] = indentation_level # || maybe this is unnecessary (actually it is necessary, see test "fn f()\n{\na = 1") // }
indentation_tabs = tabs
else:
prev_indentation_level = indentation_levels[-1][0] if len(indentation_levels) else 0
if indentation_level > 0 and prev_indentation_level > 0 and indentation_tabs != tabs:
e = i + 1
while e < len(source) and source[e] not in "\r\n":
e += 1
raise Error("inconsistent indentations:\n```\n" + prev_indentation_level*('TAB' if indentation_tabs else 'S') + source[prev_linestart:linestart]
+ (ii-linestart)*('TAB' if tabs else 'S') + source[ii:e] + "\n```", ii)
prev_linestart = ii
if indentation_level == prev_indentation_level: # [1:] [-1]:‘If it is equal, nothing happens.’ :)(: [:2]
if len(tokens) and tokens[-1].category != Token.Category.SCOPE_END:
tokens.append(Token(linestart-1, linestart, Token.Category.STATEMENT_SEPARATOR))
elif indentation_level > prev_indentation_level: # [2:] [-1]:‘If it is larger, it is pushed on the stack, and one INDENT token is generated.’ [:3]
if prev_indentation_level == 0: # len(indentation_levels) == 0 or indentation_levels[-1][0] == 0:
                        indentation_tabs = tabs # the initial/new choice of the indentation character (either tab or spaces) is made only from the zero indentation level
indentation_levels.append((indentation_level, False))
tokens.append(Token(linestart, ii, Token.Category.SCOPE_BEGIN))
if implied_scopes is not None:
implied_scopes.append((Char('{'), tokens[-2].end + (1 if source[tokens[-2].end] in " \n" else 0)))
else: # [3:] [-1]:‘If it is smaller, it ~‘must’ be one of the numbers occurring on the stack; all numbers on the stack that are larger are popped off, and for each number popped off a DEDENT token is generated.’ [:4]
while True:
if indentation_levels[-1][1]:
raise Error('too much unindent, what is this unindent intended for?', ii)
indentation_levels.pop()
tokens.append(Token(ii, ii, Token.Category.SCOPE_END))
if implied_scopes is not None:
implied_scopes.append((Char('}'), ii))
level = indentation_levels[-1][0] if len(indentation_levels) else 0 #level, explicit_scope_via_curly_braces = indentation_levels[-1] if len(indentation_levels) else [0, False]
if level == indentation_level:
break
if level < indentation_level:
raise Error('unindent does not match any outer indentation level', ii)
ch = source[i]
if ch in " \t":
i += 1 # just skip whitespace characters
elif ch in "\r\n":
#if newline_chars is not None: # rejected this code as it does not count newline characters inside comments and string literals
# newline_chars.append(i)
i += 1
if ch == "\r" and source[i:i+1] == "\n":
i += 1
            if len(nesting_elements) == 0 or nesting_elements[-1][0] not in '([': # if we are inside parentheses/brackets, there is no need to start a new line # ])
begin_of_line = True
elif (ch == '/' and source[i+1:i+2] == '/' ) \
or (ch == '\\' and source[i+1:i+2] == '\\'): # single-line comment
comment_start = i
i += 2
while i < len(source) and source[i] not in "\r\n":
i += 1
if comments is not None:
comments.append((comment_start, i))
elif ch == '\\' and source[i+1:i+2] in "‘({[": # multi-line comment # ]})’
skip_multiline_comment()
else:
def is_hexadecimal_digit(ch):
return '0' <= ch <= '9' or 'A' <= ch <= 'F' or 'a' <= ch <= 'f' or ch in 'абсдефАБСДЕФ'
operator_s = ''
# if ch in 'CС' and not (source[i+1:i+2].isalpha() or source[i+1:i+2].isdigit()): # without this check [and if 'C' is in binary_operators] when identifier starts with `C` (for example `Circle`), then this first letter of identifier is mistakenly considered as an operator
# operator_s = ch
# else:
for op in sorted_operators:
if source[i:i+len(op)] == op:
operator_s = op
break
lexem_start = i
i += 1
category : Token.Category
if operator_s != '':
i = lexem_start + len(operator_s)
if source[i-1] == ' ': # for correct handling of operator 'C '/'in ' in external tools (e.g. keyletters_to_keywords.py)
i -= 1
category = Token.Category.OPERATOR
elif ch.isalpha() or ch in ('_', '@'): # this is NAME/IDENTIFIER or KEYWORD
if ch == '@':
while i < len(source) and source[i] == '@':
i += 1
if i < len(source) and source[i] == '=':
i += 1
while i < len(source):
ch = source[i]
if not (ch.isalpha() or ch in '_?:' or '0' <= ch <= '9'):
break
i += 1
# Tokenize `fs:path:dirname` to ['fs:path', ':', 'dirname']
j = i - 1
while j > lexem_start:
if source[j] == ':':
i = j
break
j -= 1
if source[i:i+1] == '/' and source[i-1:i] in 'IЦ':
if source[i-2:i-1] == ' ':
category = Token.Category.OPERATOR
else:
raise Error('please clarify your intention by putting space character before or after `I`', i-1)
elif source[i:i+1] == "'": # this is a named argument, a raw string or a hexadecimal number
i += 1
if source[i:i+1] == ' ': # this is a named argument
category = Token.Category.NAME
elif source[i:i+1] in ('‘', "'"): # ’ # this is a raw string
i -= 1
category = Token.Category.NAME
else: # this is a hexadecimal number
while i < len(source) and (is_hexadecimal_digit(source[i]) or source[i] == "'"):
i += 1
if not (source[lexem_start+4:lexem_start+5] == "'" or source[i-3:i-2] == "'" or source[i-2:i-1] == "'"):
raise Error('digit separator in this hexadecimal number is located in the wrong place', lexem_start)
category = Token.Category.NUMERIC_LITERAL
elif source[lexem_start:i] in keywords:
if source[lexem_start:i] in ('V', 'П', 'var', 'перем'): # it is more convenient to consider V/var as [type] name, not a keyword
category = Token.Category.NAME
if source[i:i+1] == '&':
i += 1
elif source[lexem_start:i] in ('N', 'Н', 'null', 'нуль'):
category = Token.Category.CONSTANT
else:
category = Token.Category.KEYWORD
if source[i:i+1] == '.': # this is composite keyword like `L.break`
i += 1
while i < len(source) and (source[i].isalpha() or source[i] in '_.'):
i += 1
if source[lexem_start:i] in ('L.index', 'Ц.индекс', 'loop.index', 'цикл.индекс'): # for correct STRING_CONCATENATOR insertion
category = Token.Category.NAME
else:
category = Token.Category.NAME
elif '0' <= ch <= '9': # this is NUMERIC_LITERAL or CONSTANT 0B or 1B
if ch in '01' and source[i:i+1] in ('B', 'В') and not (is_hexadecimal_digit(source[i+1:i+2]) or source[i+1:i+2] == "'"):
i += 1
category = Token.Category.CONSTANT
else:
is_hex = False
while i < len(source) and is_hexadecimal_digit(source[i]):
if not ('0' <= source[i] <= '9'):
if source[i] in 'eE' and source[i+1:i+2] in ('-', '+'): # fix `1e-10`
break
is_hex = True
i += 1
next_digit_separator = 0
is_oct_or_bin = False
if i < len(source) and source[i] == "'":
                        if i - lexem_start in (2, 1): # special handling for 12'345/1'234 (so that it is not treated as a short/ultrashort hexadecimal number)
j = i + 1
while j < len(source) and is_hexadecimal_digit(source[j]):
if not ('0' <= source[j] <= '9'):
is_hex = True
j += 1
next_digit_separator = j - 1 - i
                        elif i - lexem_start == 4: # special handling for 1010'1111b (so that it is not treated as a hexadecimal number)
j = i + 1
while j < len(source) and ((is_hexadecimal_digit(source[j]) and not source[j] in 'bд') or source[j] == "'"): # I know, checking for `in 'bд'` is hacky
j += 1
if j < len(source) and source[j] in 'oоbд':
is_oct_or_bin = True
if i < len(source) and source[i] == "'" and ((i - lexem_start == 4 and not is_oct_or_bin) or (i - lexem_start in (2, 1) and (next_digit_separator != 3 or is_hex))): # this is a hexadecimal number
if i - lexem_start == 2: # this is a short hexadecimal number
while True:
i += 1
if i + 2 > len(source) or not is_hexadecimal_digit(source[i]) or not is_hexadecimal_digit(source[i+1]):
raise Error('wrong short hexadecimal number', lexem_start)
i += 2
if i < len(source) and is_hexadecimal_digit(source[i]):
raise Error('expected end of short hexadecimal number', i)
if source[i:i+1] != "'":
break
elif i - lexem_start == 1: # this is an ultrashort hexadecimal number
i += 1
if i + 1 > len(source) or not is_hexadecimal_digit(source[i]):
raise Error('wrong ultrashort hexadecimal number', lexem_start)
i += 1
if i < len(source) and is_hexadecimal_digit(source[i]):
raise Error('expected end of ultrashort hexadecimal number', i)
else:
i += 1
while i < len(source) and is_hexadecimal_digit(source[i]):
i += 1
if (i - lexem_start) % 5 == 4 and i < len(source):
if source[i] != "'":
if not is_hexadecimal_digit(source[i]):
break
raise Error('here should be a digit separator in hexadecimal number', i)
i += 1
if i < len(source) and source[i] == "'":
raise Error('digit separator in hexadecimal number is located in the wrong place', i)
if (i - lexem_start) % 5 != 4:
raise Error('after this digit separator there should be 4 digits in hexadecimal number', source.rfind("'", 0, i))
else:
while i < len(source) and ('0' <= source[i] <= '9' or source[i] in "'.eE"):
if source[i:i+2] in ('..', '.<', '.+'):
break
if source[i] in 'eE':
if source[i+1:i+2] in '-+':
i += 1
i += 1
if source[i:i+1] in ('o', 'о', 'b', 'д', 's', 'i'):
i += 1
elif "'" in source[lexem_start:i] and not '.' in source[lexem_start:i]: # float numbers do not checked for a while
number = source[lexem_start:i].replace("'", '')
number_with_separators = ''
j = len(number)
while j > 3:
number_with_separators = "'" + number[j-3:j] + number_with_separators
j -= 3
number_with_separators = number[0:j] + number_with_separators
if source[lexem_start:i] != number_with_separators:
raise Error('digit separator in this number is located in the wrong place (should be: '+ number_with_separators +')', lexem_start)
category = Token.Category.NUMERIC_LITERAL
elif ch == "'" and source[i:i+1] == ',': # this is a named-only arguments mark
i += 1
category = Token.Category.DELIMITER
elif ch == '"':
if source[i] == '"' \
and tokens[-1].category == Token.Category.STRING_CONCATENATOR \
and tokens[-2].category == Token.Category.STRING_LITERAL \
and tokens[-2].value(source)[0] == '‘': # ’ // for cases like r = abc‘some big ...’""
i += 1 # \\ ‘... string’
continue # [(
startqpos = i - 1
if len(tokens) and tokens[-1].end == startqpos and ((tokens[-1].category == Token.Category.NAME and tokens[-1].value(source)[-1] != "'") or tokens[-1].value(source) in (')', ']')):
tokens.append(Token(lexem_start, lexem_start, Token.Category.STRING_CONCATENATOR))
while True:
if i == len(source):
raise Error('unclosed string literal', startqpos)
ch = source[i]
i += 1
if ch == '\\':
if i == len(source):
continue
i += 1
elif ch == '"':
break
if source[i:i+1].isalpha() or source[i:i+1] in ('_', '@', ':', '‘', '('): # )’
tokens.append(Token(lexem_start, i, Token.Category.STRING_LITERAL))
tokens.append(Token(i, i, Token.Category.STRING_CONCATENATOR))
continue
category = Token.Category.STRING_LITERAL
elif ch in "‘'":
if source[i] == '’' \
and tokens[-1].category == Token.Category.STRING_CONCATENATOR \
and tokens[-2].category == Token.Category.STRING_LITERAL \
and tokens[-2].value(source)[0] == '"': # // for cases like r = abc"some big ..."‘’
i += 1 # \\ ‘... string’
continue # ‘[(
if len(tokens) and tokens[-1].end == i - 1 and ((tokens[-1].category == Token.Category.NAME and tokens[-1].value(source)[-1] != "'") or tokens[-1].value(source) in (')', ']')):
tokens.append(Token(lexem_start, lexem_start, Token.Category.STRING_CONCATENATOR))
if source[i] == '’': # for cases like `a‘’b`
i += 1
continue
i -= 1
while i < len(source) and source[i] == "'":
i += 1
if source[i:i+1] != '‘': # ’
raise Error('expected left single quotation mark', i)
startqpos = i
i += 1
nesting_level = 1
while True:
if i == len(source):
raise Error('unpaired left single quotation mark', startqpos)
ch = source[i]
i += 1
if ch == "‘":
nesting_level += 1
elif ch == "’":
nesting_level -= 1
if nesting_level == 0:
break
while i < len(source) and source[i] == "'":
i += 1
if source[i:i+1].isalpha() or source[i:i+1] in ('_', '@', ':', '"', '('): # )
tokens.append(Token(lexem_start, i, Token.Category.STRING_LITERAL))
tokens.append(Token(i, i, Token.Category.STRING_CONCATENATOR))
continue
category = Token.Category.STRING_LITERAL
elif ch == '{':
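                # an explicit curly-brace scope: push a placeholder indentation level (-1, True); it stays on the stack until the matching closing brace is reached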
indentation_levels.append((-1, True))
nesting_elements.append((Char('{'), lexem_start)) # }
category = Token.Category.SCOPE_BEGIN
elif ch == '}':
if len(nesting_elements) == 0 or nesting_elements[-1][0] != '{':
raise Error('there is no corresponding opening brace for `}`', lexem_start)
nesting_elements.pop()
while indentation_levels[-1][1] != True:
tokens.append(Token(lexem_start, lexem_start, Token.Category.SCOPE_END))
if implied_scopes is not None: # {
implied_scopes.append((Char('}'), lexem_start))
indentation_levels.pop()
assert(indentation_levels.pop()[1] == True)
category = Token.Category.SCOPE_END
elif ch == ';':
category = Token.Category.STATEMENT_SEPARATOR
elif ch in (',', '.', ':'):
category = Token.Category.DELIMITER
elif ch in '([':
if source[lexem_start:lexem_start+3] == '(.)':
i += 2
category = Token.Category.NAME
else:
nesting_elements.append((ch, lexem_start))
category = Token.Category.DELIMITER
elif ch in '])': # ([
if len(nesting_elements) == 0 or nesting_elements[-1][0] != {']':'[', ')':'('}[ch]: # ])
raise Error('there is no corresponding opening parenthesis/bracket for `' + ch + '`', lexem_start)
nesting_elements.pop()
category = Token.Category.DELIMITER
else:
raise Error('unexpected character `' + ch + '`', lexem_start)
tokens.append(Token(lexem_start, i, category))
if len(nesting_elements):
raise Error('there is no corresponding closing parenthesis/bracket/brace for `' + nesting_elements[-1][0] + '`', nesting_elements[-1][1])
# [4:] [-1]:‘At the end of the file, a DEDENT token is generated for each number remaining on the stack that is larger than zero.’
while len(indentation_levels):
assert(indentation_levels[-1][1] != True)
tokens.append(Token(i, i, Token.Category.SCOPE_END))
if implied_scopes is not None: # {
implied_scopes.append((Char('}'), i-1 if source[-1] == "\n" else i))
indentation_levels.pop()
return tokens
| 11l | /11l-2021.3-py3-none-any.whl/_11l_to_cpp/tokenizer.py | tokenizer.py |
from django.apps import AppConfig
class X11XWagtailBlogConfig(AppConfig):
default_auto_field = "django.db.models.BigAutoField"
name = "x11x_wagtail_blog"
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/apps.py | apps.py |
from django.shortcuts import render
# Create your views here.
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/views.py | views.py |
from io import BytesIO
from django.core.files.images import ImageFile
from faker.providers import BaseProvider
from x11x_wagtail_blog.models import AboutTheAuthor
class X11XWagtailBlogProvider(BaseProvider):
"""
Provider for the wonderful faker library. Add `X11XWagtailBlogProvider` to a standard faker to generate data for your test
code.
>>> from faker import Faker
>>> fake = Faker()
>>> fake.add_provider(X11XWagtailBlogProvider)
>>> fake.avatar_image_content() # doctest: +NORMALIZE_QUOTES
b'\\x89PNG...
"""
def avatar_image_content(self, *, size=(32, 32)) -> bytes:
"""
Generate an avatar image of the given size. By default, the image
will be a PNG 32 pixels by 32 pixels.
The use of the image generation functions require the PIL library to be installed.
:param tuple[int, int] size: The width and height of the image to generate.
:return bytes: Returns the binary content of the PNG.
>>> fake.avatar_image_content(size=(4, 4)) # doctest: +NORMALIZE_QUOTES
b'\\x89PNG...
"""
return self.generator.image(
size=size,
image_format="png",
)
def avatar_image_file(self) -> ImageFile:
"""
Generates a `django.core.files.images.ImageFile` that can be assigned to a user's profile.
The use of the image generation functions require the PIL library to be installed.
>>> fake.avatar_image_file()
<ImageFile: ....png>
"""
return ImageFile(
BytesIO(self.avatar_image_content()),
self.generator.file_name(extension="png"),
)
def about_the_author(self, author) -> AboutTheAuthor:
"""
Generates an AboutTheAuthor snippet.
"""
return AboutTheAuthor(
author=author,
body=self.generator.paragraph(),
)
def title_image_content(self, *, size=(2, 2)) -> bytes:
"""
Generates image content suitable for the 'title_image'. Unless ``size`` is given, a 2x2 pixel image will be generated.
>>> fake.title_image_content() # doctest: +NORMALIZE_QUOTES
b'\\x89PNG...
:param tuple[int, int] size: The width and height of the image to generate.
:return bytes: Returns the content of the title image.
"""
return self.generator.image(
size=size,
image_format="png",
)
def title_image_file(self, *, name=None) -> ImageFile:
"""
Generates a `django.core.files.images.ImageFile` that can be assigned to a user's profile.
>>> fake.title_image_file(name="this-name.png")
<ImageFile: this-name.png>
:param str name: The name of the image file to generate.
:return ImageFile: Returns an `ImageFile`
"""
name = name or self.generator.file_name(extension="png")
return ImageFile(
BytesIO(self.title_image_content()),
name,
)
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/fakers.py | fakers.py |
"""
11x Wagtail Blog
================
``11x-wagtail-blog`` is a wagtail app implementing basic blog features for a wagtail site. This project started as an
implementation of the blogging features of ``11x.engineering``, but since it is intended to be used as the first series
of articles, it has been open sourced and published here. It is intended to demonstrate how to develop a fully featured
package published to PyPI.
Quick Start
===========
To install::
pip install 11x-wagtail-blog
Add ``x11x_wagtail_blog`` to your ``INSTALLED_APPS``::
INSTALLED_APPS = [
...,
'x11x_wagtail_blog',
...,
]
Since this package only gives you the common features of every blogging application, you will need to define your own page
models and derive them from `ExtensibleArticlePage`::
>>> from x11x_wagtail_blog.models import ExtensibleArticlePage
>>> from wagtail.admin.panels import FieldPanel
>>> from wagtail.blocks import TextBlock
>>> from wagtail.fields import StreamField
>>> class MyArticlePage(ExtensibleArticlePage):
... body = StreamField([
... ("text", TextBlock()),
... ], use_json_field=True)
...
... content_panels = ExtensibleArticlePage.with_body_panels([
... FieldPanel("body"),
... ])
This can be done in any valid Wagtail app.
Next, generate your migrations as usual::
python manage.py makemigrations
python manage.py migrate
You will have to define a template. The default template used is ``x11x_wagtail_blog/article_page.html``, but you should
override the ``get_template()`` method to return your own template.
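(The default implementation simply reads the template name from the ``X11X_WAGTAIL_BLOG_ARTICLE_TEMPLATE`` setting,
falling back to ``x11x_wagtail_blog/article_page.html``.) A minimal sketch of such an override, assuming a
hypothetical ``myblog/article_page.html`` template of your own, could look like this::
    class MyArticlePage(ExtensibleArticlePage):
        ...
        def get_template(self, request, *args, **kwargs):
            return "myblog/article_page.html"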
The fields available on every blog page can be found in :class:`x11x_wagtail_blog.models.ExtensibleArticlePage`.
.. code-block:: html
    <!DOCTYPE html>
    {% load wagtailcore_tags %}
    <html>
<head>...</head>
<body>
<h1>{{ self.title }}</h1>
{% include_block self.body %}
<h2>About the authors</h2>
{% for author in self.authors %}
{% include "myblog/about_the_author_section.html" with author=author.value %}
{% endfor %}
<h2>Related Articles</h2>
<ul>
{% for article in self.related_articles %}
<li><a href="{% pageurl article %}">{{ article.title }}</a></li>
{% endfor %}
</ul>
</body>
</html>
"""
__version__ = "0.2.0"
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/__init__.py | __init__.py |
from django.conf import settings
from django.db import models
from django.utils import timezone
from modelcluster.fields import ParentalKey
from wagtail.admin.panels import FieldPanel, InlinePanel
from wagtail.fields import StreamField, RichTextField
from wagtail.models import Page
from wagtail.snippets.blocks import SnippetChooserBlock
from wagtail.snippets.models import register_snippet
_RICH_TEXT_SUMMARY_FEATURES = getattr(settings, "X11X_WAGTAIL_BLOG_SUMMARY_FEATURES", ["bold", "italic", "code", "superscript", "subscript", "strikethrough"])
@register_snippet
class AboutTheAuthor(models.Model):
"""
A snippet holding the content of an 'About the Author' section for particular authors.
These snippets are intended to be organized by the various authors of a website. Individual users
may have several 'about' blurbs that they can choose depending on what a particular article calls
for.
"""
author = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.RESTRICT,
editable=True,
blank=False,
related_name="about_the_author_snippets",
)
"A reference to the author this snippet is about."
body = RichTextField()
"A paragraph or two describing the associated author."
panels = [
FieldPanel("author"),
FieldPanel("body"),
]
def __str__(self):
return str(self.author)
class RelatedArticles(models.Model):
"""
You should never have to instantiate ``RelatedArticles`` directly. This is a
model to implement the m2m relationship between articles.
"""
related_to = ParentalKey("ExtensibleArticlePage", verbose_name="Article", related_name="related_article_to")
related_from = ParentalKey("ExtensibleArticlePage", verbose_name="Article", related_name="related_article_from")
class ExtensibleArticlePage(Page):
"""
    `ExtensibleArticlePage` is the base class for blog articles. Inherit from `ExtensibleArticlePage` and add
    your own ``body`` element. `ExtensibleArticlePage` is NOT creatable through the wagtail admin.
"""
date = models.DateTimeField(default=timezone.now, null=False, blank=False, editable=True)
"Date to appear in the article subheading."
summary = RichTextField(features=_RICH_TEXT_SUMMARY_FEATURES, default="", blank=True, null=False)
"The article's summary. `summary` will show up in index pages."
title_image = models.ForeignKey(
"wagtailimages.Image",
on_delete=models.RESTRICT,
related_name="+",
null=True,
blank=True,
)
"The image to use in the title header or section of the article."
authors = StreamField(
[
("about_the_authors", SnippetChooserBlock(AboutTheAuthor)),
],
default=list,
use_json_field=True,
blank=True,
)
"About the author sections to include with the article.."
is_creatable = False
settings_panels = Page.settings_panels + [
FieldPanel("date"),
FieldPanel("owner"),
]
pre_body_content_panels = Page.content_panels + [
FieldPanel("title_image"),
FieldPanel("summary"),
]
"Admin `FieldPanels` intended to be displayed BEFORE a ``body`` field."
post_body_content_panels = [
FieldPanel("authors"),
InlinePanel(
"related_article_from",
label="Related Articles",
panels=[FieldPanel("related_to")]
)
]
"Admin `FieldPanel` s intended to be displayed AFTER a ``body`` field."
def has_authors(self):
"""
Returns ``True`` if this article has one or more 'about the authors' snippet. ``False`` otherwise.
"""
return len(self.authors) > 0
@classmethod
def with_body_panels(cls, panels):
"""
A helper method that concatenates all the admin panels of this class with the admin panels intended to enter content
of the main body.
:param panels: Panels intended to show up under the "Title" and "Summary" sections, but before
the 'trailing' sections.
"""
return cls.pre_body_content_panels + panels + cls.post_body_content_panels
def get_template(self, request, *args, **kwargs):
"""
Returns the default template. This method will likely be removed in the (very) near future.
This method may be overridden (like all wagtail pages) to return the intended template.
:deprecated:
"""
return getattr(settings, "X11X_WAGTAIL_BLOG_ARTICLE_TEMPLATE", "x11x_wagtail_blog/article_page.html")
def has_related_articles(self):
"""
Returns `True` if this page has related articles associated with it. Returns ``False`` otherwise.
"""
return self.related_article_from.all().count() > 0
@property
def related_articles(self):
"""
An iterable of related articles related to this one.
"""
return [to.related_to for to in self.related_article_from.all()]
@related_articles.setter
def related_articles(self, value):
"""
Sets the articles related to this one.
:param list[ExtensibleArticlePage] value: A list of related articles.
"""
self.related_article_from = [
RelatedArticles(
related_from=self,
related_to=v
) for v in value
]
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/models.py | models.py |
# Generated by Django 4.2.3 on 2023-07-11 00:07
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import modelcluster.fields
import wagtail.fields
import wagtail.snippets.blocks
import x11x_wagtail_blog.models
class Migration(migrations.Migration):
initial = True
dependencies = [
("wagtailcore", "0083_workflowcontenttype"),
("wagtailimages", "0025_alter_image_file_alter_rendition_file"),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name="ExtensibleArticlePage",
fields=[
(
"page_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="wagtailcore.page",
),
),
("date", models.DateTimeField(default=django.utils.timezone.now)),
("summary", wagtail.fields.RichTextField(blank=True, default="")),
(
"authors",
wagtail.fields.StreamField(
[
(
"about_the_authors",
wagtail.snippets.blocks.SnippetChooserBlock(x11x_wagtail_blog.models.AboutTheAuthor),
)
],
blank=True,
default=list,
use_json_field=True,
),
),
(
"title_image",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.RESTRICT,
related_name="+",
to="wagtailimages.image",
),
),
],
options={
"abstract": False,
},
bases=("wagtailcore.page",),
),
migrations.CreateModel(
name="RelatedArticles",
fields=[
("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
(
"related_from",
modelcluster.fields.ParentalKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="related_article_from",
to="x11x_wagtail_blog.extensiblearticlepage",
verbose_name="Article",
),
),
(
"related_to",
modelcluster.fields.ParentalKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="related_article_to",
to="x11x_wagtail_blog.extensiblearticlepage",
verbose_name="Article",
),
),
],
),
migrations.CreateModel(
name="AboutTheAuthor",
fields=[
("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
("body", wagtail.fields.RichTextField()),
(
"author",
models.ForeignKey(
on_delete=django.db.models.deletion.RESTRICT,
related_name="about_the_author_snippets",
to=settings.AUTH_USER_MODEL,
),
),
],
),
]
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/migrations/0001_initial.py | 0001_initial.py |
from django import template
from django.core.exceptions import ObjectDoesNotExist
from x11x_wagtail_blog.models import AboutTheAuthor
register = template.Library()
@register.inclusion_tag("x11x_wagtail_blog/about_the_author.html")
def about_the_author(snippet: AboutTheAuthor, *, heading="h4"):
# Deprecated, do not use.
try:
avatar = snippet.author.wagtail_userprofile.avatar
except ObjectDoesNotExist:
avatar = None
return {
"author_full_name": snippet.author.first_name + " " + snippet.author.last_name,
"body": snippet.body,
"avatar": avatar,
"heading": heading,
}
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/templatetags/x11x_wagtail_blog.py | x11x_wagtail_blog.py |
from django.utils import timezone
from django.test import override_settings
from faker import Faker
from wagtail.test.utils import WagtailPageTestCase
from wagtail.models import Page
from wagtail.images import get_image_model
from wagtail.users.models import UserProfile
from x11x_wagtail_blog.fakers import X11XWagtailBlogProvider
from x11x_wagtail_blog.tests.testing_models.fakers import TestingModelProvider
from x11x_wagtail_blog.tests.testing_models.models import TestingArticlePage
Image = get_image_model()
fake = Faker()
fake.add_provider(X11XWagtailBlogProvider)
fake.add_provider(TestingModelProvider)
@override_settings(
X11X_WAGTAIL_BLOG_ARTICLE_TEMPLATE="x11x_wagtail_blog/tests/testing_models/testing_article_page.html"
)
class TestArticlePages(WagtailPageTestCase):
def setUp(self):
super().setUp()
self.home = Page.objects.get(slug="home")
def test_blog_articles_have_the_basic_fields(self):
content = fake.paragraph()
title = fake.sentence().title()
username = fake.user_name()
publishing_date = timezone.make_aware(fake.date_time())
summary_text = fake.sentence()
author = self.create_user(
username,
first_name=fake.first_name(),
last_name=fake.last_name(),
)
page = TestingArticlePage(
title=title,
body=[("text", content)],
owner=author,
summary=summary_text,
date=publishing_date
)
self.publish(page)
response = self.client.get(page.full_url)
self.assertContains(response, content)
self.assertContains(response, author.first_name)
self.assertContains(response, author.last_name)
self.assertContains(response, str(publishing_date.year))
self.assertContains(response, str(publishing_date.month))
self.assertContains(response, str(publishing_date.day))
self.assertTemplateUsed(
response,
"x11x_wagtail_blog/tests/testing_models/testing_article_page.html",
)
    def test_model_has_authors_returns_false_when_not_configured_with_authors(self):
page = fake.testing_article_page()
self.publish(page)
self.assertFalse(page.has_authors())
def test_model_has_authors_returns_true_when_configured_with_authors(self):
author = self.create_user("username")
snippet = fake.about_the_author(author)
snippet.save()
page = fake.testing_article_page()
page.authors = [("about_the_authors", snippet)]
self.publish(page)
self.assertTrue(page.has_authors())
def test_blog_has_title_image(self):
author = self.create_user("username")
image_base_name = "test-image"
image_extension = "png"
header_image = Image.objects.create(
title=fake.word(),
file=fake.title_image_file(
name=f"{image_base_name}.{image_extension}",
)
)
page = fake.testing_article_page(owner=author)
page.title_image = header_image
self.publish(page)
response = self.client.get(page.full_url)
self.assertContains(response, page.title_image.title)
self.assertContains(response, image_base_name)
self.assertContains(response, image_extension)
def test_related_articles_are_rendered_properly(self):
owner = self.create_user("username")
related_page_a = fake.testing_article_page(owner=owner)
related_page_b = fake.testing_article_page(owner=owner)
self.home.add_child(instance=related_page_a)
self.home.add_child(instance=related_page_b)
page = TestingArticlePage(
title="Page",
body=[("text", "Content")],
owner=owner,
)
page.related_articles = [related_page_a, related_page_b]
self.publish(page)
response = self.client.get(page.full_url)
for related_page in [related_page_a, related_page_b]:
self.assertContains(response, f"<a href=\"{related_page.url}\">{related_page.title}</a>")
def test_about_the_author_content(self):
owner = self.create_user("username")
owner.wagtail_userprofile = UserProfile()
owner.wagtail_userprofile.avatar = fake.avatar_image_file()
owner.wagtail_userprofile.save()
snippet = fake.about_the_author(owner)
snippet.save()
page = fake.testing_article_page(owner=owner)
page.authors = [("about_the_authors", snippet)]
self.publish(page)
response = self.client.get(page.url)
self.assertContains(response, snippet.body)
self.assertContains(response, owner.wagtail_userprofile.avatar.url)
def test_model_is_extensible(self):
owner = self.create_user("username")
content = fake.sentence()
page = TestingArticlePage(
title="Page",
body=[("text", content)],
owner=owner,
)
self.publish(page)
response = self.client.get(page.full_url)
self.assertContains(response, content)
def publish(self, page):
self.home.add_child(instance=page)
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/tests/test_extensible_article_page.py | test_extensible_article_page.py |
import doctest
from django.test import TestCase
from faker import Faker
import x11x_wagtail_blog
import x11x_wagtail_blog.models
import x11x_wagtail_blog.fakers
NORMALIZE_QUOTES = doctest.register_optionflag("NORMALIZE_QUOTES")
class QuoteNormalizingOutputChecker(doctest.OutputChecker):
def check_output(self, want: str, got: str, optionflags: int) -> bool:
if optionflags & NORMALIZE_QUOTES:
want = want.replace('"', "'")
got = got.replace('"', "'")
return super().check_output(want, got, optionflags)
def run_doctests(m, optionflags=0, extraglobs=None):
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(checker=QuoteNormalizingOutputChecker(), optionflags=optionflags)
for test in finder.find(m, m.__name__, extraglobs=extraglobs):
runner.run(test)
runner.summarize()
return doctest.TestResults(runner.failures, runner.tries)
class DocTests(TestCase):
def test_docstrings(self):
results = doctest.testmod(x11x_wagtail_blog)
self.assertEqual(results.failed, 0)
results = doctest.testmod(x11x_wagtail_blog.models)
self.assertEqual(results.failed, 0)
fake = Faker()
fake.add_provider(x11x_wagtail_blog.fakers.X11XWagtailBlogProvider)
results = run_doctests(
x11x_wagtail_blog.fakers,
extraglobs={
"fake": fake,
},
optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE | doctest.IGNORE_EXCEPTION_DETAIL,
)
self.assertEqual(results.failed, 0)
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/tests/test_docs.py | test_docs.py |
from django.apps import AppConfig
class X11XWagtailBlogTestingModelsConfig(AppConfig):
default_auto_field = "django.db.models.BigAutoField"
name = "x11x_wagtail_blog.tests.testing_models"
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/tests/testing_models/apps.py | apps.py |
from django.utils.timezone import make_aware
from faker.providers import BaseProvider
from x11x_wagtail_blog.tests.testing_models.models import TestingArticlePage
class TestingModelProvider(BaseProvider):
def testing_article_page(self, *, owner=None) -> TestingArticlePage:
return TestingArticlePage(
title=self.generator.sentence().title(),
summary=self.generator.sentence(),
body=[("text", self.generator.paragraph())],
date=make_aware(self.generator.date_time()),
owner=owner,
)
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/tests/testing_models/fakers.py | fakers.py |
from wagtail.blocks import TextBlock
from wagtail.fields import StreamField
from x11x_wagtail_blog.models import ExtensibleArticlePage
class TestingArticlePage(ExtensibleArticlePage):
body = StreamField([
("text", TextBlock())
], use_json_field=True)
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/tests/testing_models/models.py | models.py |
# Generated by Django 4.2.3 on 2023-07-10 23:54
from django.db import migrations, models
import django.db.models.deletion
import wagtail.blocks
import wagtail.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
("x11x_wagtail_blog", "0001_initial"),
]
operations = [
migrations.CreateModel(
name="TestingArticlePage",
fields=[
(
"extensiblearticlepage_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="x11x_wagtail_blog.extensiblearticlepage",
),
),
("body", wagtail.fields.StreamField([("text", wagtail.blocks.TextBlock())], use_json_field=True)),
],
options={
"abstract": False,
},
bases=("x11x_wagtail_blog.extensiblearticlepage",),
),
]
| 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/tests/testing_models/migrations/0001_initial.py | 0001_initial.py |
from setuptools import setup
setup(name='12_distributions',
version='0.1',
description='Gaussian distributions',
packages=['12_distributions'],
zip_safe=False)
| 12-distributions | /12_distributions-0.1.tar.gz/12_distributions-0.1/setup.py | setup.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | 12-distributions | /12_distributions-0.1.tar.gz/12_distributions-0.1/12_distributions/Gaussiandistribution.py | Gaussiandistribution.py |
class Distribution:
def __init__(self, mu=0, sigma=1):
""" Generic distribution class for calculating and
visualizing a probability distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
self.mean = mu
self.stdev = sigma
self.data = []
def read_data_file(self, file_name):
"""Function to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
Args:
file_name (string): name of a file to read from
Returns:
None
"""
with open(file_name) as file:
data_list = []
line = file.readline()
while line:
data_list.append(int(line))
line = file.readline()
file.close()
self.data = data_list
| 12-distributions | /12_distributions-0.1.tar.gz/12_distributions-0.1/12_distributions/Generaldistribution.py | Generaldistribution.py |
from .Gaussiandistribution import Gaussian
from .Binomialdistribution import Binomial
| 12-distributions | /12_distributions-0.1.tar.gz/12_distributions-0.1/12_distributions/__init__.py | __init__.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
TODO: Fill out all functions below
"""
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
try:
assert self.p == other.p, 'p values are not equal'
except AssertionError as error:
raise
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
            string: characteristics of the Binomial
"""
return "mean {}, standard deviation {}, p {}, n {}".\
format(self.mean, self.stdev, self.p, self.n) | 12-distributions | /12_distributions-0.1.tar.gz/12_distributions-0.1/12_distributions/Binomialdistribution.py | Binomialdistribution.py |
from setuptools import setup
setup(name='12@test',
version='0.1',
description='Gaussian distributions',
packages=['distributions'],
zip_safe=False)
| 12-test | /12@test-0.1.tar.gz/12@test-0.1/setup.py | setup.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | 12-test | /12@test-0.1.tar.gz/12@test-0.1/distributions/Gaussiandistribution.py | Gaussiandistribution.py |
class Distribution:
def __init__(self, mu=0, sigma=1):
""" Generic distribution class for calculating and
visualizing a probability distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
self.mean = mu
self.stdev = sigma
self.data = []
def read_data_file(self, file_name):
"""Function to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
Args:
file_name (string): name of a file to read from
Returns:
None
"""
with open(file_name) as file:
data_list = []
line = file.readline()
while line:
data_list.append(int(line))
line = file.readline()
file.close()
self.data = data_list
| 12-test | /12@test-0.1.tar.gz/12@test-0.1/distributions/Generaldistribution.py | Generaldistribution.py |
from .Gaussiandistribution import Gaussian
from .Binomialdistribution import Binomial
| 12-test | /12@test-0.1.tar.gz/12@test-0.1/distributions/__init__.py | __init__.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
TODO: Fill out all functions below
"""
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
try:
assert self.p == other.p, 'p values are not equal'
except AssertionError as error:
raise
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
            string: characteristics of the Binomial
"""
return "mean {}, standard deviation {}, p {}, n {}".\
format(self.mean, self.stdev, self.p, self.n) | 12-test | /12@test-0.1.tar.gz/12@test-0.1/distributions/Binomialdistribution.py | Binomialdistribution.py |

# TensorFlow Research Models
This directory contains code implementations and pre-trained models of published research papers.
The research models are maintained by their respective authors.
## Table of Contents
- [TensorFlow Research Models](#tensorflow-research-models)
- [Table of Contents](#table-of-contents)
- [Modeling Libraries and Models](#modeling-libraries-and-models)
- [Models and Implementations](#models-and-implementations)
- [Computer Vision](#computer-vision)
- [Natural Language Processing](#natural-language-processing)
- [Audio and Speech](#audio-and-speech)
- [Reinforcement Learning](#reinforcement-learning)
- [Others](#others)
- [Old Models and Implementations in TensorFlow 1](#old-models-and-implementations-in-tensorflow-1)
- [Contributions](#contributions)
## Modeling Libraries and Models
| Directory | Name | Description | Maintainer(s) |
|-----------|------|-------------|---------------|
| [object_detection](object_detection) | TensorFlow Object Detection API | A framework that makes it easy to construct, train and deploy object detection models<br /><br />A collection of object detection models pre-trained on the COCO dataset, the Kitti dataset, the Open Images dataset, the AVA v2.1 dataset, and the iNaturalist Species Detection Dataset| jch1, tombstone, pkulzc |
| [slim](slim) | TensorFlow-Slim Image Classification Model Library | A lightweight high-level API of TensorFlow for defining, training and evaluating image classification models <br />• Inception V1/V2/V3/V4<br />• Inception-ResNet-v2<br />• ResNet V1/V2<br />• VGG 16/19<br />• MobileNet V1/V2/V3<br />• NASNet-A_Mobile/Large<br />• PNASNet-5_Large/Mobile | sguada, marksandler2 |
## Models and Implementations
### Computer Vision
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [attention_ocr](attention_ocr) | [Attention-based Extraction of Structured Information from Street View Imagery](https://arxiv.org/abs/1704.03549) | ICDAR 2017 | xavigibert |
| [autoaugment](autoaugment) | [1] [AutoAugment](https://arxiv.org/abs/1805.09501)<br />[2] [Wide Residual Networks](https://arxiv.org/abs/1605.07146)<br />[3] [Shake-Shake regularization](https://arxiv.org/abs/1705.07485)<br />[4] [ShakeDrop Regularization for Deep Residual Learning](https://arxiv.org/abs/1802.02375) | [1] CVPR 2019<br />[2] BMVC 2016<br /> [3] ICLR 2017<br /> [4] ICLR 2018 | barretzoph |
| [deeplab](deeplab) | [1] [DeepLabv1: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs](https://arxiv.org/abs/1412.7062)<br />[2] [DeepLabv2: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs](https://arxiv.org/abs/1606.00915)<br />[3] [DeepLabv3: Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)<br />[4] [DeepLabv3+: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611)<br />| [1] ICLR 2015 <br />[2] TPAMI 2017 <br />[4] ECCV 2018 | aquariusjay, yknzhu |
| [delf](delf) | [1] DELF (DEep Local Features): [Large-Scale Image Retrieval with Attentive Deep Local Features](https://arxiv.org/abs/1612.06321)<br />[2] [Detect-to-Retrieve: Efficient Regional Aggregation for Image Search](https://arxiv.org/abs/1812.01584)<br />[3] DELG (DEep Local and Global features): [Unifying Deep Local and Global Features for Image Search](https://arxiv.org/abs/2001.05027)<br />[4] GLDv2: [Google Landmarks Dataset v2 -- A Large-Scale Benchmark for Instance-Level Recognition and Retrieval](https://arxiv.org/abs/2004.01804) | [1] ICCV 2017<br />[2] CVPR 2019<br />[4] CVPR 2020 | andrefaraujo |
| [lstm_object_detection](lstm_object_detection) | [Mobile Video Object Detection with Temporally-Aware Feature Maps](https://arxiv.org/abs/1711.06368) | CVPR 2018 | yinxiaoli, yongzhe2160, lzyuan |
| [marco](marco) | MARCO: [Classification of crystallization outcomes using deep convolutional neural networks](https://arxiv.org/abs/1803.10342) | | vincentvanhoucke |
| [vid2depth](vid2depth) | [Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints](https://arxiv.org/abs/1802.05522) | CVPR 2018 | rezama |
### Natural Language Processing
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [adversarial_text](adversarial_text) | [1] [Adversarial Training Methods for Semi-Supervised Text](https://arxiv.org/abs/1605.07725) Classification<br />[2] [Semi-supervised Sequence Learning](https://arxiv.org/abs/1511.01432) | [1] ICLR 2017<br />[2] NIPS 2015 | rsepassi, a-dai |
| [cvt_text](cvt_text) | [Semi-Supervised Sequence Modeling with Cross-View Training](https://arxiv.org/abs/1809.08370) | EMNLP 2018 | clarkkev, lmthang |
### Audio and Speech
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [audioset](audioset) | [1] [Audio Set: An ontology and human-labeled dataset for audio events](https://research.google/pubs/pub45857/)<br />[2] [CNN Architectures for Large-Scale Audio Classification](https://research.google/pubs/pub45611/) | ICASSP 2017 | plakal, dpwe |
| [deep_speech](deep_speech) | [Deep Speech 2](https://arxiv.org/abs/1512.02595) | ICLR 2016 | yhliang2018 |
### Reinforcement Learning
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [efficient-hrl](efficient-hrl) | [1] [Data-Efficient Hierarchical Reinforcement Learning](https://arxiv.org/abs/1805.08296)<br />[2] [Near-Optimal Representation Learning for Hierarchical Reinforcement Learning](https://arxiv.org/abs/1810.01257) | [1] NIPS 2018<br /> [2] ICLR 2019 | ofirnachum |
| [pcl_rl](pcl_rl) | [1] [Improving Policy Gradient by Exploring Under-appreciated Rewards](https://arxiv.org/abs/1611.09321)<br />[2] [Bridging the Gap Between Value and Policy Based Reinforcement Learning](https://arxiv.org/abs/1702.08892)<br />[3] [Trust-PCL: An Off-Policy Trust Region Method for Continuous Control](https://arxiv.org/abs/1707.01891) | [1] ICLR 2017<br />[2] NIPS 2017<br />[3] ICLR 2018 | ofirnachum |
### Others
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [lfads](lfads) | [LFADS - Latent Factor Analysis via Dynamical Systems](https://arxiv.org/abs/1608.06315) | | jazcollins, sussillo |
| [rebar](rebar) | [REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models](https://arxiv.org/abs/1703.07370) | NIPS 2017 | gjtucker |
### Old Models and Implementations in TensorFlow 1
:warning: If you are looking for old models, please visit the [Archive branch](https://github.com/tensorflow/models/tree/archive/research).
---
## Contributions
If you want to contribute, please review the [contribution guidelines](https://github.com/tensorflow/models/wiki/How-to-contribute).
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/README.md | README.md |
"""Setup script for object_detection with TF2.0."""
import os
from setuptools import find_packages
from setuptools import setup
# Note: adding apache-beam to required packages causes a conflict with the
# tf-models-official requirements. These packages request incompatible
# versions of the oauth2client package.
REQUIRED_PACKAGES = [
# Required for apache-beam with PY3
'avro-python3',
'apache-beam',
'pillow',
'lxml',
'matplotlib',
'Cython',
'contextlib2',
'tf-slim',
'six',
'pycocotools',
'lvis',
'scipy',
'pandas',
'tf-models-official'
]
setup(
name='123_object_detection',
version='0.1',
install_requires=REQUIRED_PACKAGES,
include_package_data=True,
packages=(
[p for p in find_packages() if p.startswith('object_detection')] +
find_packages(where=os.path.join('.', 'slim'))),
package_dir={
'datasets': os.path.join('slim', 'datasets'),
'nets': os.path.join('slim', 'nets'),
'preprocessing': os.path.join('slim', 'preprocessing'),
'deployment': os.path.join('slim', 'deployment'),
'scripts': os.path.join('slim', 'scripts'),
},
description='Tensorflow Object Detection Library',
python_requires='>3.6',
)
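# Added note (not part of the upstream file): because of the package_dir
# remapping above, the slim sub-packages are installed as top-level packages,
# so after running `python -m pip install .` from this directory one would
# import them as, for example:
#
#   from nets import mobilenet_v1
#   from preprocessing import preprocessing_factory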
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/setup.py | setup.py |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Build and train mobilenet_v1 with options for quantization."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.contrib import quantize as contrib_quantize
from datasets import dataset_factory
from nets import mobilenet_v1
from preprocessing import preprocessing_factory
flags = tf.app.flags
flags.DEFINE_string('master', '', 'Session master')
flags.DEFINE_integer('task', 0, 'Task')
flags.DEFINE_integer('ps_tasks', 0, 'Number of ps')
flags.DEFINE_integer('batch_size', 64, 'Batch size')
flags.DEFINE_integer('num_classes', 1001, 'Number of classes to distinguish')
flags.DEFINE_integer('number_of_steps', None,
'Number of training steps to perform before stopping')
flags.DEFINE_integer('image_size', 224, 'Input image resolution')
flags.DEFINE_float('depth_multiplier', 1.0, 'Depth multiplier for mobilenet')
flags.DEFINE_bool('quantize', False, 'Quantize training')
flags.DEFINE_string('fine_tune_checkpoint', '',
'Checkpoint from which to start finetuning.')
flags.DEFINE_string('checkpoint_dir', '',
'Directory for writing training checkpoints and logs')
flags.DEFINE_string('dataset_dir', '', 'Location of dataset')
flags.DEFINE_integer('log_every_n_steps', 100, 'Number of steps per log')
flags.DEFINE_integer('save_summaries_secs', 100,
'How often to save summaries, secs')
flags.DEFINE_integer('save_interval_secs', 100,
'How often to save checkpoints, secs')
FLAGS = flags.FLAGS
_LEARNING_RATE_DECAY_FACTOR = 0.94
def get_learning_rate():
if FLAGS.fine_tune_checkpoint:
# If we are fine tuning a checkpoint we need to start at a lower learning
# rate since we are farther along on training.
return 1e-4
else:
return 0.045
def get_quant_delay():
if FLAGS.fine_tune_checkpoint:
# We can start quantizing immediately if we are finetuning.
return 0
else:
# We need to wait for the model to train a bit before we quantize if we are
# training from scratch.
return 250000
def imagenet_input(is_training):
"""Data reader for imagenet.
Reads in imagenet data and performs pre-processing on the images.
Args:
is_training: bool specifying if train or validation dataset is needed.
Returns:
A batch of images and labels.
"""
if is_training:
dataset = dataset_factory.get_dataset('imagenet', 'train',
FLAGS.dataset_dir)
else:
dataset = dataset_factory.get_dataset('imagenet', 'validation',
FLAGS.dataset_dir)
provider = slim.dataset_data_provider.DatasetDataProvider(
dataset,
shuffle=is_training,
common_queue_capacity=2 * FLAGS.batch_size,
common_queue_min=FLAGS.batch_size)
[image, label] = provider.get(['image', 'label'])
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
'mobilenet_v1', is_training=is_training)
image = image_preprocessing_fn(image, FLAGS.image_size, FLAGS.image_size)
images, labels = tf.train.batch([image, label],
batch_size=FLAGS.batch_size,
num_threads=4,
capacity=5 * FLAGS.batch_size)
labels = slim.one_hot_encoding(labels, FLAGS.num_classes)
return images, labels
def build_model():
"""Builds graph for model to train with rewrites for quantization.
Returns:
g: Graph with fake quantization ops and batch norm folding suitable for
training quantized weights.
train_tensor: Train op for execution during training.
"""
g = tf.Graph()
with g.as_default(), tf.device(
tf.train.replica_device_setter(FLAGS.ps_tasks)):
inputs, labels = imagenet_input(is_training=True)
with slim.arg_scope(mobilenet_v1.mobilenet_v1_arg_scope(is_training=True)):
logits, _ = mobilenet_v1.mobilenet_v1(
inputs,
is_training=True,
depth_multiplier=FLAGS.depth_multiplier,
num_classes=FLAGS.num_classes)
tf.losses.softmax_cross_entropy(labels, logits)
# Call rewriter to produce graph with fake quant ops and folded batch norms
# quant_delay delays start of quantization till quant_delay steps, allowing
# for better model accuracy.
if FLAGS.quantize:
contrib_quantize.create_training_graph(quant_delay=get_quant_delay())
total_loss = tf.losses.get_total_loss(name='total_loss')
# Configure the learning rate using an exponential decay.
num_epochs_per_decay = 2.5
imagenet_size = 1271167
decay_steps = int(imagenet_size / FLAGS.batch_size * num_epochs_per_decay)
learning_rate = tf.train.exponential_decay(
get_learning_rate(),
tf.train.get_or_create_global_step(),
decay_steps,
_LEARNING_RATE_DECAY_FACTOR,
staircase=True)
opt = tf.train.GradientDescentOptimizer(learning_rate)
train_tensor = slim.learning.create_train_op(
total_loss,
optimizer=opt)
slim.summaries.add_scalar_summary(total_loss, 'total_loss', 'losses')
slim.summaries.add_scalar_summary(learning_rate, 'learning_rate', 'training')
return g, train_tensor
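# Added note (illustrative, not part of the original file): with the default
# flags above, decay_steps = int(1271167 / 64 * 2.5) = 49654, so the learning
# rate is multiplied by _LEARNING_RATE_DECAY_FACTOR (0.94) roughly every
# 50k steps, i.e. about every 2.5 epochs of ImageNet at batch size 64.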
def get_checkpoint_init_fn():
"""Returns the checkpoint init_fn if the checkpoint is provided."""
if FLAGS.fine_tune_checkpoint:
variables_to_restore = slim.get_variables_to_restore()
global_step_reset = tf.assign(
tf.train.get_or_create_global_step(), 0)
# When restoring from a floating point model, the min/max values for
# quantized weights and activations are not present.
# We instruct slim to ignore variables that are missing during restoration
# by setting ignore_missing_vars=True
slim_init_fn = slim.assign_from_checkpoint_fn(
FLAGS.fine_tune_checkpoint,
variables_to_restore,
ignore_missing_vars=True)
def init_fn(sess):
slim_init_fn(sess)
# If we are restoring from a floating point model, we need to initialize
# the global step to zero for the exponential decay to result in
# reasonable learning rates.
sess.run(global_step_reset)
return init_fn
else:
return None
def train_model():
"""Trains mobilenet_v1."""
g, train_tensor = build_model()
with g.as_default():
slim.learning.train(
train_tensor,
FLAGS.checkpoint_dir,
is_chief=(FLAGS.task == 0),
master=FLAGS.master,
log_every_n_steps=FLAGS.log_every_n_steps,
graph=g,
number_of_steps=FLAGS.number_of_steps,
save_summaries_secs=FLAGS.save_summaries_secs,
save_interval_secs=FLAGS.save_interval_secs,
init_fn=get_checkpoint_init_fn(),
global_step=tf.train.get_global_step())
def main(unused_arg):
train_model()
if __name__ == '__main__':
tf.app.run(main)
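# Example invocation (illustrative sketch; all paths and flag values below are
# placeholders, not values shipped with this package):
#
#   python mobilenet_v1_train.py \
#     --dataset_dir=/tmp/imagenet-tfrecords \
#     --checkpoint_dir=/tmp/mobilenet_v1_train \
#     --quantize=True \
#     --fine_tune_checkpoint=/tmp/mobilenet_v1.ckpt
#
# With --fine_tune_checkpoint set, get_learning_rate() returns 1e-4 and
# get_quant_delay() returns 0, so quantization-aware training starts
# immediately from the restored floating point checkpoint.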
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/mobilenet_v1_train.py | mobilenet_v1_train.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition of the Inception Resnet V2 architecture.
As described in http://arxiv.org/abs/1602.07261.
Inception-v4, Inception-ResNet and the Impact of Residual Connections
on Learning
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
def block35(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
"""Builds the 35x35 resnet block."""
with tf.variable_scope(scope, 'Block35', [net], reuse=reuse):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 32, 1, scope='Conv2d_1x1')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 32, 3, scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
tower_conv2_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1')
tower_conv2_1 = slim.conv2d(tower_conv2_0, 48, 3, scope='Conv2d_0b_3x3')
tower_conv2_2 = slim.conv2d(tower_conv2_1, 64, 3, scope='Conv2d_0c_3x3')
mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_1, tower_conv2_2])
up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
activation_fn=None, scope='Conv2d_1x1')
scaled_up = up * scale
if activation_fn == tf.nn.relu6:
# Use clip_by_value to simulate bandpass activation.
scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)
net += scaled_up
if activation_fn:
net = activation_fn(net)
return net
def block17(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
"""Builds the 17x17 resnet block."""
with tf.variable_scope(scope, 'Block17', [net], reuse=reuse):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 128, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 160, [1, 7],
scope='Conv2d_0b_1x7')
tower_conv1_2 = slim.conv2d(tower_conv1_1, 192, [7, 1],
scope='Conv2d_0c_7x1')
mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_2])
up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
activation_fn=None, scope='Conv2d_1x1')
scaled_up = up * scale
if activation_fn == tf.nn.relu6:
# Use clip_by_value to simulate bandpass activation.
scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)
net += scaled_up
if activation_fn:
net = activation_fn(net)
return net
def block8(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
"""Builds the 8x8 resnet block."""
with tf.variable_scope(scope, 'Block8', [net], reuse=reuse):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 192, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 224, [1, 3],
scope='Conv2d_0b_1x3')
tower_conv1_2 = slim.conv2d(tower_conv1_1, 256, [3, 1],
scope='Conv2d_0c_3x1')
mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_2])
up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
activation_fn=None, scope='Conv2d_1x1')
scaled_up = up * scale
if activation_fn == tf.nn.relu6:
# Use clip_by_value to simulate bandpass activation.
scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)
net += scaled_up
if activation_fn:
net = activation_fn(net)
return net
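# Added commentary (not in the original file): block35, block17 and block8 all
# share the same residual pattern -- the branch outputs are concatenated,
# projected back to the input depth with a 1x1 conv, scaled by `scale`, and
# added to the shortcut (`net += scaled_up`) before the optional activation.
# The base network below uses small scales (0.17, 0.10 and 0.20), following
# the Inception-ResNet paper cited in the module docstring, which reports that
# scaling down the residuals stabilises training.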
def inception_resnet_v2_base(inputs,
final_endpoint='Conv2d_7b_1x1',
output_stride=16,
align_feature_maps=False,
scope=None,
activation_fn=tf.nn.relu):
"""Inception model from http://arxiv.org/abs/1602.07261.
Constructs an Inception Resnet v2 network from inputs to the given final
endpoint. This method can construct the network up to the final inception
block Conv2d_7b_1x1.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3', 'MaxPool_5a_3x3',
'Mixed_5b', 'Mixed_6a', 'PreAuxLogits', 'Mixed_7a', 'Conv2d_7b_1x1']
output_stride: A scalar that specifies the requested ratio of input to
output spatial resolution. Only supports 8 and 16.
align_feature_maps: When true, changes all the VALID paddings in the network
to SAME padding so that the feature maps are aligned.
scope: Optional variable_scope.
activation_fn: Activation function for block scopes.
Returns:
tensor_out: output tensor corresponding to the final_endpoint.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or if the output_stride is not 8 or 16, or if the output_stride is 8 and
we request an end point after 'PreAuxLogits'.
"""
if output_stride != 8 and output_stride != 16:
raise ValueError('output_stride must be 8 or 16.')
padding = 'SAME' if align_feature_maps else 'VALID'
end_points = {}
def add_and_check_final(name, net):
end_points[name] = net
return name == final_endpoint
with tf.variable_scope(scope, 'InceptionResnetV2', [inputs]):
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# 149 x 149 x 32
net = slim.conv2d(inputs, 32, 3, stride=2, padding=padding,
scope='Conv2d_1a_3x3')
if add_and_check_final('Conv2d_1a_3x3', net): return net, end_points
# 147 x 147 x 32
net = slim.conv2d(net, 32, 3, padding=padding,
scope='Conv2d_2a_3x3')
if add_and_check_final('Conv2d_2a_3x3', net): return net, end_points
# 147 x 147 x 64
net = slim.conv2d(net, 64, 3, scope='Conv2d_2b_3x3')
if add_and_check_final('Conv2d_2b_3x3', net): return net, end_points
# 73 x 73 x 64
net = slim.max_pool2d(net, 3, stride=2, padding=padding,
scope='MaxPool_3a_3x3')
if add_and_check_final('MaxPool_3a_3x3', net): return net, end_points
# 73 x 73 x 80
net = slim.conv2d(net, 80, 1, padding=padding,
scope='Conv2d_3b_1x1')
if add_and_check_final('Conv2d_3b_1x1', net): return net, end_points
# 71 x 71 x 192
net = slim.conv2d(net, 192, 3, padding=padding,
scope='Conv2d_4a_3x3')
if add_and_check_final('Conv2d_4a_3x3', net): return net, end_points
# 35 x 35 x 192
net = slim.max_pool2d(net, 3, stride=2, padding=padding,
scope='MaxPool_5a_3x3')
if add_and_check_final('MaxPool_5a_3x3', net): return net, end_points
# 35 x 35 x 320
with tf.variable_scope('Mixed_5b'):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 96, 1, scope='Conv2d_1x1')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 48, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 64, 5,
scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
tower_conv2_0 = slim.conv2d(net, 64, 1, scope='Conv2d_0a_1x1')
tower_conv2_1 = slim.conv2d(tower_conv2_0, 96, 3,
scope='Conv2d_0b_3x3')
tower_conv2_2 = slim.conv2d(tower_conv2_1, 96, 3,
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
tower_pool = slim.avg_pool2d(net, 3, stride=1, padding='SAME',
scope='AvgPool_0a_3x3')
tower_pool_1 = slim.conv2d(tower_pool, 64, 1,
scope='Conv2d_0b_1x1')
net = tf.concat(
[tower_conv, tower_conv1_1, tower_conv2_2, tower_pool_1], 3)
if add_and_check_final('Mixed_5b', net): return net, end_points
# TODO(alemi): Register intermediate endpoints
net = slim.repeat(net, 10, block35, scale=0.17,
activation_fn=activation_fn)
# 17 x 17 x 1088 if output_stride == 8,
# 33 x 33 x 1088 if output_stride == 16
use_atrous = output_stride == 8
with tf.variable_scope('Mixed_6a'):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 384, 3, stride=1 if use_atrous else 2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 256, 3,
scope='Conv2d_0b_3x3')
tower_conv1_2 = slim.conv2d(tower_conv1_1, 384, 3,
stride=1 if use_atrous else 2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
tower_pool = slim.max_pool2d(net, 3, stride=1 if use_atrous else 2,
padding=padding,
scope='MaxPool_1a_3x3')
net = tf.concat([tower_conv, tower_conv1_2, tower_pool], 3)
if add_and_check_final('Mixed_6a', net): return net, end_points
# TODO(alemi): register intermediate endpoints
with slim.arg_scope([slim.conv2d], rate=2 if use_atrous else 1):
net = slim.repeat(net, 20, block17, scale=0.10,
activation_fn=activation_fn)
if add_and_check_final('PreAuxLogits', net): return net, end_points
if output_stride == 8:
# TODO(gpapan): Properly support output_stride for the rest of the net.
raise ValueError('output_stride==8 is only supported up to the '
'PreAuxlogits end_point for now.')
# 8 x 8 x 2080
with tf.variable_scope('Mixed_7a'):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
tower_conv_1 = slim.conv2d(tower_conv, 384, 3, stride=2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
tower_conv1 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1, 288, 3, stride=2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
tower_conv2 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
tower_conv2_1 = slim.conv2d(tower_conv2, 288, 3,
scope='Conv2d_0b_3x3')
tower_conv2_2 = slim.conv2d(tower_conv2_1, 320, 3, stride=2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_3'):
tower_pool = slim.max_pool2d(net, 3, stride=2,
padding=padding,
scope='MaxPool_1a_3x3')
net = tf.concat(
[tower_conv_1, tower_conv1_1, tower_conv2_2, tower_pool], 3)
if add_and_check_final('Mixed_7a', net): return net, end_points
# TODO(alemi): register intermediate endpoints
net = slim.repeat(net, 9, block8, scale=0.20, activation_fn=activation_fn)
net = block8(net, activation_fn=None)
# 8 x 8 x 1536
net = slim.conv2d(net, 1536, 1, scope='Conv2d_7b_1x1')
if add_and_check_final('Conv2d_7b_1x1', net): return net, end_points
  raise ValueError('final_endpoint (%s) not recognized' % final_endpoint)
def inception_resnet_v2(inputs, num_classes=1001, is_training=True,
dropout_keep_prob=0.8,
reuse=None,
scope='InceptionResnetV2',
create_aux_logits=True,
activation_fn=tf.nn.relu):
"""Creates the Inception Resnet V2 model.
Args:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
Dimension batch_size may be undefined. If create_aux_logits is false,
also height and width may be undefined.
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
is_training: whether is training or not.
dropout_keep_prob: float, the fraction to keep before final layer.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
    create_aux_logits: Whether to include the auxiliary logits.
activation_fn: Activation function for conv2d.
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0 or
None).
end_points: the set of end_points from the inception model.
"""
end_points = {}
with tf.variable_scope(
scope, 'InceptionResnetV2', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_resnet_v2_base(inputs, scope=scope,
activation_fn=activation_fn)
if create_aux_logits and num_classes:
with tf.variable_scope('AuxLogits'):
aux = end_points['PreAuxLogits']
aux = slim.avg_pool2d(aux, 5, stride=3, padding='VALID',
scope='Conv2d_1a_3x3')
aux = slim.conv2d(aux, 128, 1, scope='Conv2d_1b_1x1')
aux = slim.conv2d(aux, 768, aux.get_shape()[1:3],
padding='VALID', scope='Conv2d_2a_5x5')
aux = slim.flatten(aux)
aux = slim.fully_connected(aux, num_classes, activation_fn=None,
scope='Logits')
end_points['AuxLogits'] = aux
with tf.variable_scope('Logits'):
# TODO(sguada,arnoegw): Consider adding a parameter global_pool which
# can be set to False to disable pooling here (as in resnet_*()).
kernel_size = net.get_shape()[1:3]
if kernel_size.is_fully_defined():
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a_8x8')
else:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if not num_classes:
return net, end_points
net = slim.flatten(net)
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='Dropout')
end_points['PreLogitsFlatten'] = net
logits = slim.fully_connected(net, num_classes, activation_fn=None,
scope='Logits')
end_points['Logits'] = logits
end_points['Predictions'] = tf.nn.softmax(logits, name='Predictions')
return logits, end_points
inception_resnet_v2.default_image_size = 299
def inception_resnet_v2_arg_scope(
weight_decay=0.00004,
batch_norm_decay=0.9997,
batch_norm_epsilon=0.001,
activation_fn=tf.nn.relu,
batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS,
batch_norm_scale=False):
"""Returns the scope with the default parameters for inception_resnet_v2.
Args:
weight_decay: the weight decay for weights variables.
batch_norm_decay: decay for the moving average of batch_norm momentums.
batch_norm_epsilon: small float added to variance to avoid dividing by zero.
activation_fn: Activation function for conv2d.
batch_norm_updates_collections: Collection for the update ops for
batch norm.
batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the
activations in the batch normalization layer.
Returns:
    an arg_scope with the parameters needed for inception_resnet_v2.
"""
# Set weight_decay for weights in conv2d and fully_connected layers.
with slim.arg_scope([slim.conv2d, slim.fully_connected],
weights_regularizer=slim.l2_regularizer(weight_decay),
biases_regularizer=slim.l2_regularizer(weight_decay)):
batch_norm_params = {
'decay': batch_norm_decay,
'epsilon': batch_norm_epsilon,
'updates_collections': batch_norm_updates_collections,
'fused': None, # Use fused batch norm if possible.
'scale': batch_norm_scale,
}
# Set activation_fn and parameters for batch_norm.
with slim.arg_scope([slim.conv2d], activation_fn=activation_fn,
normalizer_fn=slim.batch_norm,
normalizer_params=batch_norm_params) as scope:
return scope
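# Added example (illustrative sketch, not part of the original file): one way
# to build the network for inference on a batch of 299x299 RGB images, using
# the arg_scope defined above. The image size matches default_image_size; the
# undefined batch size and num_classes=1001 are assumptions from the defaults.
def _example_inception_resnet_v2_inference():
  """Builds Inception-ResNet-v2 logits for a placeholder batch of images."""
  with tf.Graph().as_default():
    images = tf.placeholder(tf.float32, [None, 299, 299, 3])
    with slim.arg_scope(inception_resnet_v2_arg_scope()):
      logits, end_points = inception_resnet_v2(
          images, num_classes=1001, is_training=False)
    return logits, end_points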
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_resnet_v2.py | inception_resnet_v2.py |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition for Gated Separable 3D network (S3D-G).
The network architecture is proposed by:
Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu and Kevin Murphy,
Rethinking Spatiotemporal Feature Learning For Video Understanding.
https://arxiv.org/abs/1712.04851.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import i3d_utils
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
conv3d_spatiotemporal = i3d_utils.conv3d_spatiotemporal
inception_block_v1_3d = i3d_utils.inception_block_v1_3d
arg_scope = slim.arg_scope
def s3dg_arg_scope(weight_decay=1e-7,
batch_norm_decay=0.999,
batch_norm_epsilon=0.001):
"""Defines default arg_scope for S3D-G.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
Returns:
sc: An arg_scope to use for the models.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
# Turns off fused batch norm.
'fused': False,
# collection containing the moving mean and moving variance.
'variables_collections': {
'beta': None,
'gamma': None,
'moving_mean': ['moving_vars'],
'moving_variance': ['moving_vars'],
}
}
with arg_scope([slim.conv3d, conv3d_spatiotemporal],
weights_regularizer=slim.l2_regularizer(weight_decay),
activation_fn=tf.nn.relu,
normalizer_fn=slim.batch_norm,
normalizer_params=batch_norm_params):
with arg_scope([conv3d_spatiotemporal], separable=True) as sc:
return sc
def self_gating(input_tensor, scope, data_format='NDHWC'):
"""Feature gating as used in S3D-G.
Transforms the input features by aggregating features from all
spatial and temporal locations, and applying gating conditioned
on the aggregated features. More details can be found at:
https://arxiv.org/abs/1712.04851
Args:
input_tensor: A 5-D float tensor of size [batch_size, num_frames,
height, width, channels].
scope: scope for `variable_scope`.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
Returns:
A tensor with the same shape as input_tensor.
"""
index_c = data_format.index('C')
index_d = data_format.index('D')
index_h = data_format.index('H')
index_w = data_format.index('W')
input_shape = input_tensor.get_shape().as_list()
t = input_shape[index_d]
w = input_shape[index_w]
h = input_shape[index_h]
num_channels = input_shape[index_c]
spatiotemporal_average = slim.avg_pool3d(
input_tensor, [t, w, h],
stride=1,
data_format=data_format,
scope=scope + '/self_gating/avg_pool3d')
weights = slim.conv3d(
spatiotemporal_average,
num_channels, [1, 1, 1],
activation_fn=None,
normalizer_fn=None,
biases_initializer=None,
data_format=data_format,
weights_initializer=trunc_normal(0.01),
scope=scope + '/self_gating/transformer_W')
tile_multiples = [1, t, w, h]
tile_multiples.insert(index_c, 1)
weights = tf.tile(weights, tile_multiples)
weights = tf.nn.sigmoid(weights)
return tf.multiply(weights, input_tensor)
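# Added note (illustrative, not in the original file): the transform above is
#   gated = sigmoid(Conv_1x1x1(AvgPool_{t,h,w}(x))) * x
# where the 1x1x1 conv output is tiled back to the full [t, h, w] extent, so
# each channel's gate is shared across all spatial and temporal positions.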
def s3dg_base(inputs,
first_temporal_kernel_size=3,
temporal_conv_startat='Conv2d_2c_3x3',
gating_startat='Conv2d_2c_3x3',
final_endpoint='Mixed_5c',
min_depth=16,
depth_multiplier=1.0,
data_format='NDHWC',
scope='InceptionV1'):
"""Defines the I3D/S3DG base architecture.
Note that we use the names as defined in Inception V1 to facilitate checkpoint
conversion from an image-trained Inception V1 checkpoint to I3D checkpoint.
Args:
inputs: A 5-D float tensor of size [batch_size, num_frames, height, width,
channels].
first_temporal_kernel_size: Specifies the temporal kernel size for the first
conv3d filter. A larger value slows down the model but provides little
accuracy improvement. The default is 7 in the original I3D and S3D-G but 3
gives better performance. Must be set to one of 1, 3, 5 or 7.
temporal_conv_startat: Specifies the first conv block to use 3D or separable
3D convs rather than 2D convs (implemented as [1, k, k] 3D conv). This is
used to construct the inverted pyramid models. 'Conv2d_2c_3x3' is the
first valid block to use separable 3D convs. If provided block name is
not present, all valid blocks will use separable 3D convs. Note that
'Conv2d_1a_7x7' cannot be made into a separable 3D conv, but can be made
into a 2D or 3D conv using the `first_temporal_kernel_size` option.
gating_startat: Specifies the first conv block to use self gating.
'Conv2d_2c_3x3' is the first valid block to use self gating. If provided
      block name is not present, all valid blocks will use self gating.
final_endpoint: Specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b', 'Mixed_5c']
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
scope: Optional variable_scope.
Returns:
A dictionary from components of the network to the corresponding activation.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values, or
if depth_multiplier <= 0.
"""
assert data_format in ['NDHWC', 'NCDHW']
end_points = {}
t = 1
# For inverted pyramid models, we start with gating switched off.
use_gating = False
self_gating_fn = None
def gating_fn(inputs, scope):
return self_gating(inputs, scope, data_format=data_format)
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
depth = lambda d: max(int(d * depth_multiplier), min_depth)
with tf.variable_scope(scope, 'InceptionV1', [inputs]):
with arg_scope([slim.conv3d], weights_initializer=trunc_normal(0.01)):
with arg_scope([slim.conv3d, slim.max_pool3d, conv3d_spatiotemporal],
stride=1,
data_format=data_format,
padding='SAME'):
# batch_size x 32 x 112 x 112 x 64
end_point = 'Conv2d_1a_7x7'
if first_temporal_kernel_size not in [1, 3, 5, 7]:
raise ValueError(
'first_temporal_kernel_size can only be 1, 3, 5 or 7.')
# Separable conv is slow when used at first conv layer.
net = conv3d_spatiotemporal(
inputs,
depth(64), [first_temporal_kernel_size, 7, 7],
stride=2,
separable=False,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 56 x 56 x 64
end_point = 'MaxPool_2a_3x3'
net = slim.max_pool3d(net, [1, 3, 3], stride=[1, 2, 2], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 56 x 56 x 64
end_point = 'Conv2d_2b_1x1'
net = slim.conv3d(net, depth(64), [1, 1, 1], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 56 x 56 x 192
end_point = 'Conv2d_2c_3x3'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = conv3d_spatiotemporal(net, depth(192), [t, 3, 3], scope=end_point)
if use_gating:
net = self_gating(net, scope=end_point, data_format=data_format)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 28 x 28 x 192
end_point = 'MaxPool_3a_3x3'
net = slim.max_pool3d(net, [1, 3, 3], stride=[1, 2, 2], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 28 x 28 x 256
end_point = 'Mixed_3b'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(64),
num_outputs_1_0a=depth(96),
num_outputs_1_0b=depth(128),
num_outputs_2_0a=depth(16),
num_outputs_2_0b=depth(32),
num_outputs_3_0b=depth(32),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'Mixed_3c'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(128),
num_outputs_1_0a=depth(128),
num_outputs_1_0b=depth(192),
num_outputs_2_0a=depth(32),
num_outputs_2_0b=depth(96),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'MaxPool_4a_3x3'
net = slim.max_pool3d(net, [3, 3, 3], stride=[2, 2, 2], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 512
end_point = 'Mixed_4b'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(192),
num_outputs_1_0a=depth(96),
num_outputs_1_0b=depth(208),
num_outputs_2_0a=depth(16),
num_outputs_2_0b=depth(48),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 512
end_point = 'Mixed_4c'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(160),
num_outputs_1_0a=depth(112),
num_outputs_1_0b=depth(224),
num_outputs_2_0a=depth(24),
num_outputs_2_0b=depth(64),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 512
end_point = 'Mixed_4d'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(128),
num_outputs_1_0a=depth(128),
num_outputs_1_0b=depth(256),
num_outputs_2_0a=depth(24),
num_outputs_2_0b=depth(64),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 528
end_point = 'Mixed_4e'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(112),
num_outputs_1_0a=depth(144),
num_outputs_1_0b=depth(288),
num_outputs_2_0a=depth(32),
num_outputs_2_0b=depth(64),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 832
end_point = 'Mixed_4f'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(256),
num_outputs_1_0a=depth(160),
num_outputs_1_0b=depth(320),
num_outputs_2_0a=depth(32),
num_outputs_2_0b=depth(128),
num_outputs_3_0b=depth(128),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'MaxPool_5a_2x2'
net = slim.max_pool3d(net, [2, 2, 2], stride=[2, 2, 2], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 8 x 7 x 7 x 832
end_point = 'Mixed_5b'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(256),
num_outputs_1_0a=depth(160),
num_outputs_1_0b=depth(320),
num_outputs_2_0a=depth(32),
num_outputs_2_0b=depth(128),
num_outputs_3_0b=depth(128),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 8 x 7 x 7 x 1024
end_point = 'Mixed_5c'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(384),
num_outputs_1_0a=depth(192),
num_outputs_1_0b=depth(384),
num_outputs_2_0a=depth(48),
num_outputs_2_0b=depth(128),
num_outputs_3_0b=depth(128),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
def s3dg(inputs,
num_classes=1000,
first_temporal_kernel_size=3,
temporal_conv_startat='Conv2d_2c_3x3',
gating_startat='Conv2d_2c_3x3',
final_endpoint='Mixed_5c',
min_depth=16,
depth_multiplier=1.0,
dropout_keep_prob=0.8,
is_training=True,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
data_format='NDHWC',
scope='InceptionV1'):
"""Defines the S3D-G architecture.
The default image size used to train this network is 224x224.
Args:
inputs: A 5-D float tensor of size [batch_size, num_frames, height, width,
channels].
num_classes: number of predicted classes.
first_temporal_kernel_size: Specifies the temporal kernel size for the first
conv3d filter. A larger value slows down the model but provides little
accuracy improvement. Must be set to one of 1, 3, 5 or 7.
temporal_conv_startat: Specifies the first conv block to use separable 3D
convs rather than 2D convs (implemented as [1, k, k] 3D conv). This is
used to construct the inverted pyramid models. 'Conv2d_2c_3x3' is the
first valid block to use separable 3D convs. If provided block name is
not present, all valid blocks will use separable 3D convs.
gating_startat: Specifies the first conv block to use self gating.
'Conv2d_2c_3x3' is the first valid block to use self gating. If provided
      block name is not present, all valid blocks will use self gating.
final_endpoint: Specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b', 'Mixed_5c']
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
dropout_keep_prob: the percentage of activation values that are retained.
is_training: whether is training or not.
prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape [B, C], if false logits is
of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
scope: Optional variable_scope.
Returns:
logits: the pre-softmax activations, a tensor of size
[batch_size, num_classes]
end_points: a dictionary from components of the network to the corresponding
activation.
"""
assert data_format in ['NDHWC', 'NCDHW']
# Final pooling and prediction
with tf.variable_scope(
scope, 'InceptionV1', [inputs, num_classes], reuse=reuse) as scope:
with arg_scope([slim.batch_norm, slim.dropout], is_training=is_training):
net, end_points = s3dg_base(
inputs,
first_temporal_kernel_size=first_temporal_kernel_size,
temporal_conv_startat=temporal_conv_startat,
gating_startat=gating_startat,
final_endpoint=final_endpoint,
min_depth=min_depth,
depth_multiplier=depth_multiplier,
data_format=data_format,
scope=scope)
with tf.variable_scope('Logits'):
if data_format.startswith('NC'):
net = tf.transpose(a=net, perm=[0, 2, 3, 4, 1])
kernel_size = i3d_utils.reduced_kernel_size_3d(net, [2, 7, 7])
net = slim.avg_pool3d(
net,
kernel_size,
stride=1,
data_format='NDHWC',
scope='AvgPool_0a_7x7')
net = slim.dropout(net, dropout_keep_prob, scope='Dropout_0b')
logits = slim.conv3d(
net,
num_classes, [1, 1, 1],
activation_fn=None,
normalizer_fn=None,
data_format='NDHWC',
scope='Conv2d_0c_1x1')
# Temporal average pooling.
logits = tf.reduce_mean(input_tensor=logits, axis=1)
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
s3dg.default_image_size = 224
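# Added example (illustrative sketch, not part of the original file): building
# S3D-G for inference on a single clip. The clip length (64 frames) and
# num_classes=400 (as in Kinetics) are assumptions, not values fixed by this
# module; only the 224x224 spatial size comes from default_image_size.
def _example_s3dg_inference():
  """Builds S3D-G logits for a placeholder video clip."""
  with tf.Graph().as_default():
    videos = tf.placeholder(tf.float32, [1, 64, 224, 224, 3])
    with slim.arg_scope(s3dg_arg_scope()):
      logits, end_points = s3dg(videos, num_classes=400, is_training=False)
    return logits, end_points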
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/s3dg.py | s3dg.py |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utilities for building I3D network models."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
add_arg_scope = slim.add_arg_scope
layers = slim.layers
def center_initializer():
"""Centering Initializer for I3D.
This initializer allows identity mapping for temporal convolution at the
initialization, which is critical for a desired convergence behavior
  for training a separable I3D model.
The centering behavior of this initializer requires an odd-sized kernel,
typically set to 3.
Returns:
A weight initializer op used in temporal convolutional layers.
Raises:
    ValueError: Input tensor data type has to be tf.float32 or tf.bfloat16.
ValueError: If input tensor is not a 5-D tensor.
ValueError: If input and output channel dimensions are different.
ValueError: If spatial kernel sizes are not 1.
ValueError: If temporal kernel size is even.
"""
def _initializer(shape, dtype=tf.float32, partition_info=None): # pylint: disable=unused-argument
"""Initializer op."""
if dtype != tf.float32 and dtype != tf.bfloat16:
raise ValueError(
'Input tensor data type has to be tf.float32 or tf.bfloat16.')
if len(shape) != 5:
raise ValueError('Input tensor has to be 5-D.')
if shape[3] != shape[4]:
raise ValueError('Input and output channel dimensions must be the same.')
if shape[1] != 1 or shape[2] != 1:
raise ValueError('Spatial kernel sizes must be 1 (pointwise conv).')
if shape[0] % 2 == 0:
raise ValueError('Temporal kernel size has to be odd.')
center_pos = int(shape[0] / 2)
init_mat = np.zeros(
[shape[0], shape[1], shape[2], shape[3], shape[4]], dtype=np.float32)
for i in range(0, shape[3]):
init_mat[center_pos, 0, 0, i, i] = 1.0
init_op = tf.constant(init_mat, dtype=dtype)
return init_op
return _initializer
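# Added note (illustrative, not in the original file): for a temporal kernel of
# shape [3, 1, 1, C, C] the initializer above sets init_mat[1, 0, 0, i, i] = 1
# and every other entry to 0, i.e. the middle temporal tap is the CxC identity,
# so at initialization the temporal convolution passes its input through
# unchanged.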
@add_arg_scope
def conv3d_spatiotemporal(inputs,
num_outputs,
kernel_size,
stride=1,
padding='SAME',
activation_fn=None,
normalizer_fn=None,
normalizer_params=None,
weights_regularizer=None,
separable=False,
data_format='NDHWC',
scope=''):
"""A wrapper for conv3d to model spatiotemporal representations.
This allows switching between original 3D convolution and separable 3D
convolutions for spatial and temporal features respectively. On Kinetics,
  separable 3D convolutions yield better classification performance.
Args:
inputs: a 5-D tensor `[batch_size, depth, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 3
`[kernel_depth, kernel_height, kernel_width]` of the filters. Can be an
int if all values are the same.
stride: a list of length 3 `[stride_depth, stride_height, stride_width]`.
Can be an int if all strides are the same.
padding: one of `VALID` or `SAME`.
activation_fn: activation function.
normalizer_fn: normalization function to use instead of `biases`.
normalizer_params: dictionary of normalization function parameters.
weights_regularizer: Optional regularizer for the weights.
separable: If `True`, use separable spatiotemporal convolutions.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
scope: scope for `variable_scope`.
Returns:
A tensor representing the output of the (separable) conv3d operation.
"""
assert len(kernel_size) == 3
if separable and kernel_size[0] != 1:
spatial_kernel_size = [1, kernel_size[1], kernel_size[2]]
temporal_kernel_size = [kernel_size[0], 1, 1]
if isinstance(stride, list) and len(stride) == 3:
spatial_stride = [1, stride[1], stride[2]]
temporal_stride = [stride[0], 1, 1]
else:
spatial_stride = [1, stride, stride]
temporal_stride = [stride, 1, 1]
net = layers.conv3d(
inputs,
num_outputs,
spatial_kernel_size,
stride=spatial_stride,
padding=padding,
activation_fn=activation_fn,
normalizer_fn=normalizer_fn,
normalizer_params=normalizer_params,
weights_regularizer=weights_regularizer,
data_format=data_format,
scope=scope)
net = layers.conv3d(
net,
num_outputs,
temporal_kernel_size,
stride=temporal_stride,
padding=padding,
scope=scope + '/temporal',
activation_fn=activation_fn,
normalizer_fn=None,
data_format=data_format,
weights_initializer=center_initializer())
return net
else:
return layers.conv3d(
inputs,
num_outputs,
kernel_size,
stride=stride,
padding=padding,
activation_fn=activation_fn,
normalizer_fn=normalizer_fn,
normalizer_params=normalizer_params,
weights_regularizer=weights_regularizer,
data_format=data_format,
scope=scope)
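# Added note (illustrative, not in the original file): with separable=True and
# kernel_size=[3, 3, 3], the branch above factorises a full 3x3x3 convolution
# into a spatial [1, 3, 3] conv followed by a temporal [3, 1, 1] conv; the
# temporal conv is initialised with center_initializer(), so at initialization
# the separable stack behaves like the purely spatial convolution alone.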
@add_arg_scope
def inception_block_v1_3d(inputs,
num_outputs_0_0a,
num_outputs_1_0a,
num_outputs_1_0b,
num_outputs_2_0a,
num_outputs_2_0b,
num_outputs_3_0b,
temporal_kernel_size=3,
self_gating_fn=None,
data_format='NDHWC',
scope=''):
"""A 3D Inception v1 block.
This allows use of separable 3D convolutions and self-gating, as
described in:
Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu and Kevin Murphy,
Rethinking Spatiotemporal Feature Learning For Video Understanding.
https://arxiv.org/abs/1712.04851.
Args:
inputs: a 5-D tensor `[batch_size, depth, height, width, channels]`.
num_outputs_0_0a: integer, the number of output filters for Branch 0,
operation Conv2d_0a_1x1.
num_outputs_1_0a: integer, the number of output filters for Branch 1,
operation Conv2d_0a_1x1.
num_outputs_1_0b: integer, the number of output filters for Branch 1,
operation Conv2d_0b_3x3.
num_outputs_2_0a: integer, the number of output filters for Branch 2,
operation Conv2d_0a_1x1.
num_outputs_2_0b: integer, the number of output filters for Branch 2,
operation Conv2d_0b_3x3.
num_outputs_3_0b: integer, the number of output filters for Branch 3,
operation Conv2d_0b_1x1.
temporal_kernel_size: integer, the size of the temporal convolutional
filters in the conv3d_spatiotemporal blocks.
self_gating_fn: function which optionally performs self-gating.
Must have two arguments, `inputs` and `scope`, and return one output
tensor the same size as `inputs`. If `None`, no self-gating is
applied.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
scope: scope for `variable_scope`.
Returns:
A 5-D tensor `[batch_size, depth, height, width, out_channels]`, where
`out_channels = num_outputs_0_0a + num_outputs_1_0b + num_outputs_2_0b
+ num_outputs_3_0b`.
"""
use_gating = self_gating_fn is not None
with tf.variable_scope(scope):
with tf.variable_scope('Branch_0'):
branch_0 = layers.conv3d(
inputs, num_outputs_0_0a, [1, 1, 1], scope='Conv2d_0a_1x1')
if use_gating:
branch_0 = self_gating_fn(branch_0, scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = layers.conv3d(
inputs, num_outputs_1_0a, [1, 1, 1], scope='Conv2d_0a_1x1')
branch_1 = conv3d_spatiotemporal(
branch_1, num_outputs_1_0b, [temporal_kernel_size, 3, 3],
scope='Conv2d_0b_3x3')
if use_gating:
branch_1 = self_gating_fn(branch_1, scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = layers.conv3d(
inputs, num_outputs_2_0a, [1, 1, 1], scope='Conv2d_0a_1x1')
branch_2 = conv3d_spatiotemporal(
branch_2, num_outputs_2_0b, [temporal_kernel_size, 3, 3],
scope='Conv2d_0b_3x3')
if use_gating:
branch_2 = self_gating_fn(branch_2, scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = layers.max_pool3d(inputs, [3, 3, 3], scope='MaxPool_0a_3x3')
branch_3 = layers.conv3d(
branch_3, num_outputs_3_0b, [1, 1, 1], scope='Conv2d_0b_1x1')
if use_gating:
branch_3 = self_gating_fn(branch_3, scope='Conv2d_0b_1x1')
index_c = data_format.index('C')
assert 1 <= index_c <= 4, 'Cannot identify channel dimension.'
output = tf.concat([branch_0, branch_1, branch_2, branch_3], index_c)
return output
def reduced_kernel_size_3d(input_tensor, kernel_size):
"""Define kernel size which is automatically reduced for small input.
If the shape of the input images is unknown at graph construction time this
function assumes that the input images are large enough.
Args:
input_tensor: input tensor of size
[batch_size, time, height, width, channels].
kernel_size: desired kernel size of length 3, corresponding to time,
height and width.
Returns:
a tensor with the kernel size.
"""
assert len(kernel_size) == 3
shape = input_tensor.get_shape().as_list()
assert len(shape) == 5
if None in shape[1:4]:
kernel_size_out = kernel_size
else:
kernel_size_out = [min(shape[1], kernel_size[0]),
min(shape[2], kernel_size[1]),
min(shape[3], kernel_size[2])]
return kernel_size_out
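# Added example (illustrative, not in the original file): for a fully defined
# input of shape [batch, 8, 7, 7, C] and a desired kernel of [2, 7, 7], the
# function above returns [2, 7, 7]; for a smaller input such as
# [batch, 1, 5, 5, C] it returns [1, 5, 5], clipping each dimension so the
# pooling kernel never exceeds the feature map.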
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/i3d_utils.py | i3d_utils.py |
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""Implementation of the Image-to-Image Translation model.
This network represents a port of the following work:
Image-to-Image Translation with Conditional Adversarial Networks
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou and Alexei A. Efros
Arxiv, 2017
https://phillipi.github.io/pix2pix/
A reference implementation written in Lua can be found at:
https://github.com/phillipi/pix2pix/blob/master/models.lua
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import functools
import tensorflow.compat.v1 as tf
import tf_slim as slim
def pix2pix_arg_scope():
"""Returns a default argument scope for isola_net.
Returns:
An arg scope.
"""
  # These parameters come from the online port and don't necessarily match
  # those in the paper.
# TODO(nsilberman): confirm these values with Philip.
instance_norm_params = {
'center': True,
'scale': True,
'epsilon': 0.00001,
}
with slim.arg_scope(
[slim.conv2d, slim.conv2d_transpose],
normalizer_fn=slim.instance_norm,
normalizer_params=instance_norm_params,
weights_initializer=tf.random_normal_initializer(0, 0.02)) as sc:
return sc
def upsample(net, num_outputs, kernel_size, method='nn_upsample_conv'):
"""Upsamples the given inputs.
Args:
net: A `Tensor` of size [batch_size, height, width, filters].
num_outputs: The number of output filters.
kernel_size: A list of 2 scalars or a 1x2 `Tensor` indicating the scale,
relative to the inputs, of the output dimensions. For example, if kernel
size is [2, 3], then the output height and width will be twice and three
times the input size.
method: The upsampling method.
Returns:
    A `Tensor` which was upsampled using the specified method.
Raises:
ValueError: if `method` is not recognized.
"""
net_shape = tf.shape(input=net)
height = net_shape[1]
width = net_shape[2]
if method == 'nn_upsample_conv':
net = tf.image.resize(
net, [kernel_size[0] * height, kernel_size[1] * width],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
net = slim.conv2d(net, num_outputs, [4, 4], activation_fn=None)
elif method == 'conv2d_transpose':
net = slim.conv2d_transpose(
net, num_outputs, [4, 4], stride=kernel_size, activation_fn=None)
else:
raise ValueError('Unknown method: [%s]' % method)
return net
class Block(
collections.namedtuple('Block', ['num_filters', 'decoder_keep_prob'])):
"""Represents a single block of encoder and decoder processing.
The Image-to-Image translation paper works a bit differently than the original
U-Net model. In particular, each block represents a single operation in the
encoder which is concatenated with the corresponding decoder representation.
A dropout layer follows the concatenation and convolution of the concatenated
features.
"""
pass
def _default_generator_blocks():
"""Returns the default generator block definitions.
Returns:
A list of generator blocks.
"""
return [
Block(64, 0.5),
Block(128, 0.5),
Block(256, 0.5),
Block(512, 0),
Block(512, 0),
Block(512, 0),
Block(512, 0),
]
def pix2pix_generator(net,
num_outputs,
blocks=None,
upsample_method='nn_upsample_conv',
is_training=False): # pylint: disable=unused-argument
"""Defines the network architecture.
Args:
net: A `Tensor` of size [batch, height, width, channels]. Note that the
generator currently requires square inputs (e.g. height=width).
num_outputs: The number of (per-pixel) outputs.
blocks: A list of generator blocks or `None` to use the default generator
definition.
upsample_method: The method of upsampling images, one of 'nn_upsample_conv'
or 'conv2d_transpose'
is_training: Whether or not we're in training or testing mode.
Returns:
A `Tensor` representing the model output and a dictionary of model end
points.
Raises:
ValueError: if the input heights do not match their widths.
"""
end_points = {}
blocks = blocks or _default_generator_blocks()
input_size = net.get_shape().as_list()
input_size[3] = num_outputs
upsample_fn = functools.partial(upsample, method=upsample_method)
encoder_activations = []
###########
# Encoder #
###########
with tf.variable_scope('encoder'):
with slim.arg_scope([slim.conv2d],
kernel_size=[4, 4],
stride=2,
activation_fn=tf.nn.leaky_relu):
for block_id, block in enumerate(blocks):
# No normalizer for the first encoder layers as per 'Image-to-Image',
# Section 5.1.1
if block_id == 0:
# First layer doesn't use normalizer_fn
net = slim.conv2d(net, block.num_filters, normalizer_fn=None)
elif block_id < len(blocks) - 1:
net = slim.conv2d(net, block.num_filters)
else:
# Last layer doesn't use activation_fn nor normalizer_fn
net = slim.conv2d(
net, block.num_filters, activation_fn=None, normalizer_fn=None)
encoder_activations.append(net)
end_points['encoder%d' % block_id] = net
###########
# Decoder #
###########
reversed_blocks = list(blocks)
reversed_blocks.reverse()
with tf.variable_scope('decoder'):
# Dropout is used at both train and test time as per 'Image-to-Image',
# Section 2.1 (last paragraph).
with slim.arg_scope([slim.dropout], is_training=True):
for block_id, block in enumerate(reversed_blocks):
if block_id > 0:
net = tf.concat([net, encoder_activations[-block_id - 1]], axis=3)
# The Relu comes BEFORE the upsample op:
net = tf.nn.relu(net)
net = upsample_fn(net, block.num_filters, [2, 2])
if block.decoder_keep_prob > 0:
net = slim.dropout(net, keep_prob=block.decoder_keep_prob)
end_points['decoder%d' % block_id] = net
with tf.variable_scope('output'):
# Explicitly set the normalizer_fn to None to override any default value
# that may come from an arg_scope, such as pix2pix_arg_scope.
logits = slim.conv2d(
net, num_outputs, [4, 4], activation_fn=None, normalizer_fn=None)
logits = tf.reshape(logits, input_size)
end_points['logits'] = logits
end_points['predictions'] = tf.tanh(logits)
return logits, end_points
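# Illustrative usage sketch (not part of the original file): building the
# generator on a dummy 256x256 batch under pix2pix_arg_scope. The helper name
# and the input/output sizes are assumptions made only for this example.
def _example_generator_usage():
  images = tf.zeros([1, 256, 256, 3])
  with slim.arg_scope(pix2pix_arg_scope()):
    logits, end_points = pix2pix_generator(
        images, num_outputs=3, upsample_method='nn_upsample_conv')
  # `logits` keeps the input spatial size; `end_points` exposes the per-block
  # activations ('encoder0'...'encoder6', 'decoder0'...'decoder6').
  return logits, end_points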
def pix2pix_discriminator(net, num_filters, padding=2, pad_mode='REFLECT',
activation_fn=tf.nn.leaky_relu, is_training=False):
"""Creates the Image2Image Translation Discriminator.
Args:
net: A `Tensor` of size [batch_size, height, width, channels] representing
the input.
num_filters: A list of the filters in the discriminator. The length of the
list determines the number of layers in the discriminator.
padding: Amount of reflection padding applied before each convolution.
pad_mode: mode for tf.pad, one of "CONSTANT", "REFLECT", or "SYMMETRIC".
activation_fn: activation fn for slim.conv2d.
is_training: Whether or not the model is training or testing.
Returns:
A logits `Tensor` of size [batch_size, N, N, 1] where N is the number of
'patches' we're attempting to discriminate and a dictionary of model end
points.
"""
del is_training
end_points = {}
num_layers = len(num_filters)
def padded(net, scope):
if padding:
with tf.variable_scope(scope):
spatial_pad = tf.constant(
[[0, 0], [padding, padding], [padding, padding], [0, 0]],
dtype=tf.int32)
return tf.pad(tensor=net, paddings=spatial_pad, mode=pad_mode)
else:
return net
with slim.arg_scope([slim.conv2d],
kernel_size=[4, 4],
stride=2,
padding='valid',
activation_fn=activation_fn):
# No normalization on the input layer.
net = slim.conv2d(
padded(net, 'conv0'), num_filters[0], normalizer_fn=None, scope='conv0')
end_points['conv0'] = net
for i in range(1, num_layers - 1):
net = slim.conv2d(
padded(net, 'conv%d' % i), num_filters[i], scope='conv%d' % i)
end_points['conv%d' % i] = net
# Stride 1 on the last layer.
net = slim.conv2d(
padded(net, 'conv%d' % (num_layers - 1)),
num_filters[-1],
stride=1,
scope='conv%d' % (num_layers - 1))
end_points['conv%d' % (num_layers - 1)] = net
# 1-dim logits, stride 1, no activation, no normalization.
logits = slim.conv2d(
padded(net, 'conv%d' % num_layers),
1,
stride=1,
activation_fn=None,
normalizer_fn=None,
scope='conv%d' % num_layers)
end_points['logits'] = logits
end_points['predictions'] = tf.sigmoid(logits)
return logits, end_points
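# Illustrative usage sketch (not part of the original file): a PatchGAN-style
# discriminator over a conditioned input/target pair. The helper name, the
# channel-concatenation conditioning and the filter list mirror common pix2pix
# usage but are assumptions made only for this example.
def _example_discriminator_usage():
  inputs = tf.zeros([1, 256, 256, 3])
  targets = tf.zeros([1, 256, 256, 3])
  pair = tf.concat([inputs, targets], axis=3)
  with slim.arg_scope(pix2pix_arg_scope()):
    logits, end_points = pix2pix_discriminator(
        pair, num_filters=[64, 128, 256, 512])
  # `logits` is [batch, N, N, 1]: one real/fake score per image patch.
  return logits, end_points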
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/pix2pix.py | pix2pix.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.inception_v4."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception
class InceptionTest(tf.test.TestCase):
def testBuildLogits(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v4(inputs, num_classes)
auxlogits = end_points['AuxLogits']
predictions = end_points['Predictions']
self.assertTrue(auxlogits.op.name.startswith('InceptionV4/AuxLogits'))
self.assertListEqual(auxlogits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue(predictions.op.name.startswith(
'InceptionV4/Logits/Predictions'))
self.assertListEqual(predictions.get_shape().as_list(),
[batch_size, num_classes])
def testBuildPreLogitsNetwork(self):
batch_size = 5
height, width = 299, 299
num_classes = None
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = inception.inception_v4(inputs, num_classes)
self.assertTrue(net.op.name.startswith('InceptionV4/Logits/AvgPool'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 1, 1, 1536])
self.assertFalse('Logits' in end_points)
self.assertFalse('Predictions' in end_points)
def testBuildWithoutAuxLogits(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, endpoints = inception.inception_v4(inputs, num_classes,
create_aux_logits=False)
self.assertFalse('AuxLogits' in endpoints)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testAllEndPointsShapes(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v4(inputs, num_classes)
endpoints_shapes = {'Conv2d_1a_3x3': [batch_size, 149, 149, 32],
'Conv2d_2a_3x3': [batch_size, 147, 147, 32],
'Conv2d_2b_3x3': [batch_size, 147, 147, 64],
'Mixed_3a': [batch_size, 73, 73, 160],
'Mixed_4a': [batch_size, 71, 71, 192],
'Mixed_5a': [batch_size, 35, 35, 384],
# 4 x Inception-A blocks
'Mixed_5b': [batch_size, 35, 35, 384],
'Mixed_5c': [batch_size, 35, 35, 384],
'Mixed_5d': [batch_size, 35, 35, 384],
'Mixed_5e': [batch_size, 35, 35, 384],
# Reduction-A block
'Mixed_6a': [batch_size, 17, 17, 1024],
# 7 x Inception-B blocks
'Mixed_6b': [batch_size, 17, 17, 1024],
'Mixed_6c': [batch_size, 17, 17, 1024],
'Mixed_6d': [batch_size, 17, 17, 1024],
'Mixed_6e': [batch_size, 17, 17, 1024],
'Mixed_6f': [batch_size, 17, 17, 1024],
'Mixed_6g': [batch_size, 17, 17, 1024],
'Mixed_6h': [batch_size, 17, 17, 1024],
                        # Reduction-B block
'Mixed_7a': [batch_size, 8, 8, 1536],
# 3 x Inception-C blocks
'Mixed_7b': [batch_size, 8, 8, 1536],
'Mixed_7c': [batch_size, 8, 8, 1536],
'Mixed_7d': [batch_size, 8, 8, 1536],
# Logits and predictions
'AuxLogits': [batch_size, num_classes],
'global_pool': [batch_size, 1, 1, 1536],
'PreLogitsFlatten': [batch_size, 1536],
'Logits': [batch_size, num_classes],
'Predictions': [batch_size, num_classes]}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testBuildBaseNetwork(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = inception.inception_v4_base(inputs)
self.assertTrue(net.op.name.startswith(
'InceptionV4/Mixed_7d'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 8, 8, 1536])
expected_endpoints = [
'Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3', 'Mixed_3a',
'Mixed_4a', 'Mixed_5a', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_5e', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d',
'Mixed_6e', 'Mixed_6f', 'Mixed_6g', 'Mixed_6h', 'Mixed_7a',
'Mixed_7b', 'Mixed_7c', 'Mixed_7d']
self.assertItemsEqual(end_points.keys(), expected_endpoints)
for name, op in end_points.items():
self.assertTrue(op.name.startswith('InceptionV4/' + name))
def testBuildOnlyUpToFinalEndpoint(self):
batch_size = 5
height, width = 299, 299
all_endpoints = [
'Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3', 'Mixed_3a',
'Mixed_4a', 'Mixed_5a', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_5e', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d',
'Mixed_6e', 'Mixed_6f', 'Mixed_6g', 'Mixed_6h', 'Mixed_7a',
'Mixed_7b', 'Mixed_7c', 'Mixed_7d']
for index, endpoint in enumerate(all_endpoints):
with tf.Graph().as_default():
inputs = tf.random.uniform((batch_size, height, width, 3))
out_tensor, end_points = inception.inception_v4_base(
inputs, final_endpoint=endpoint)
self.assertTrue(out_tensor.op.name.startswith(
'InceptionV4/' + endpoint))
self.assertItemsEqual(all_endpoints[:index + 1], end_points.keys())
def testVariablesSetDevice(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
# Force all Variables to reside on the device.
with tf.variable_scope('on_cpu'), tf.device('/cpu:0'):
inception.inception_v4(inputs, num_classes)
with tf.variable_scope('on_gpu'), tf.device('/gpu:0'):
inception.inception_v4(inputs, num_classes)
for v in tf.get_collection(
tf.GraphKeys.GLOBAL_VARIABLES, scope='on_cpu'):
self.assertDeviceEqual(v.device, '/cpu:0')
for v in tf.get_collection(
tf.GraphKeys.GLOBAL_VARIABLES, scope='on_gpu'):
self.assertDeviceEqual(v.device, '/gpu:0')
def testHalfSizeImages(self):
batch_size = 5
height, width = 150, 150
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v4(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7d']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 3, 3, 1536])
def testGlobalPool(self):
batch_size = 1
height, width = 350, 400
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v4(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7d']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 9, 11, 1536])
def testGlobalPoolUnknownImageShape(self):
batch_size = 1
height, width = 350, 400
num_classes = 1000
with self.test_session() as sess:
inputs = tf.placeholder(tf.float32, (batch_size, None, None, 3))
logits, end_points = inception.inception_v4(
inputs, num_classes, create_aux_logits=False)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7d']
images = tf.random.uniform((batch_size, height, width, 3))
sess.run(tf.global_variables_initializer())
logits_out, pre_pool_out = sess.run([logits, pre_pool],
{inputs: images.eval()})
self.assertTupleEqual(logits_out.shape, (batch_size, num_classes))
self.assertTupleEqual(pre_pool_out.shape, (batch_size, 9, 11, 1536))
def testUnknownBatchSize(self):
batch_size = 1
height, width = 299, 299
num_classes = 1000
with self.test_session() as sess:
inputs = tf.placeholder(tf.float32, (None, height, width, 3))
logits, _ = inception.inception_v4(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV4/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, num_classes])
images = tf.random.uniform((batch_size, height, width, 3))
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
self.assertEquals(output.shape, (batch_size, num_classes))
def testEvaluation(self):
batch_size = 2
height, width = 299, 299
num_classes = 1000
with self.test_session() as sess:
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = inception.inception_v4(eval_inputs,
num_classes,
is_training=False)
predictions = tf.argmax(input=logits, axis=1)
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (batch_size,))
def testTrainEvalWithReuse(self):
train_batch_size = 5
eval_batch_size = 2
height, width = 150, 150
num_classes = 1000
with self.test_session() as sess:
train_inputs = tf.random.uniform((train_batch_size, height, width, 3))
inception.inception_v4(train_inputs, num_classes)
eval_inputs = tf.random.uniform((eval_batch_size, height, width, 3))
logits, _ = inception.inception_v4(eval_inputs,
num_classes,
is_training=False,
reuse=True)
predictions = tf.argmax(input=logits, axis=1)
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (eval_batch_size,))
def testNoBatchNormScaleByDefault(self):
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(inception.inception_v4_arg_scope()):
inception.inception_v4(inputs, num_classes, is_training=False)
self.assertEqual(tf.global_variables('.*/BatchNorm/gamma:0$'), [])
def testBatchNormScale(self):
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(
inception.inception_v4_arg_scope(batch_norm_scale=True)):
inception.inception_v4(inputs, num_classes, is_training=False)
gamma_names = set(
v.op.name
for v in tf.global_variables('.*/BatchNorm/gamma:0$'))
self.assertGreater(len(gamma_names), 0)
for v in tf.global_variables('.*/BatchNorm/moving_mean:0$'):
self.assertIn(v.op.name[:-len('moving_mean')] + 'gamma', gamma_names)
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_v4_test.py | inception_v4_test.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains building blocks for various versions of Residual Networks.
Residual networks (ResNets) were proposed in:
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385, 2015
More variants were introduced in:
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Identity Mappings in Deep Residual Networks. arXiv: 1603.05027, 2016
We can obtain different ResNet variants by changing the network depth, width,
and form of residual unit. This module implements the infrastructure for
building them. Concrete ResNet units and full ResNet networks are implemented in
the accompanying resnet_v1.py and resnet_v2.py modules.
Compared to https://github.com/KaimingHe/deep-residual-networks, in the current
implementation we subsample the output activations in the last residual unit of
each block, instead of subsampling the input activations in the first residual
unit of each block. The two implementations give identical results but our
implementation is more memory efficient.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import tensorflow.compat.v1 as tf
import tf_slim as slim
class Block(collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])):
"""A named tuple describing a ResNet block.
Its parts are:
scope: The scope of the `Block`.
unit_fn: The ResNet unit function which takes as input a `Tensor` and
returns another `Tensor` with the output of the ResNet unit.
args: A list of length equal to the number of units in the `Block`. The list
contains one (depth, depth_bottleneck, stride) tuple for each unit in the
block to serve as argument to unit_fn.
"""
def subsample(inputs, factor, scope=None):
"""Subsamples the input along the spatial dimensions.
Args:
inputs: A `Tensor` of size [batch, height_in, width_in, channels].
factor: The subsampling factor.
scope: Optional variable_scope.
Returns:
output: A `Tensor` of size [batch, height_out, width_out, channels] with the
input, either intact (if factor == 1) or subsampled (if factor > 1).
"""
if factor == 1:
return inputs
else:
return slim.max_pool2d(inputs, [1, 1], stride=factor, scope=scope)
def conv2d_same(inputs, num_outputs, kernel_size, stride, rate=1, scope=None):
"""Strided 2-D convolution with 'SAME' padding.
When stride > 1, then we do explicit zero-padding, followed by conv2d with
'VALID' padding.
Note that
net = conv2d_same(inputs, num_outputs, 3, stride=stride)
is equivalent to
net = slim.conv2d(inputs, num_outputs, 3, stride=1, padding='SAME')
net = subsample(net, factor=stride)
whereas
net = slim.conv2d(inputs, num_outputs, 3, stride=stride, padding='SAME')
is different when the input's height or width is even, which is why we add the
current function. For more details, see ResnetUtilsTest.testConv2DSameEven().
Args:
inputs: A 4-D tensor of size [batch, height_in, width_in, channels].
num_outputs: An integer, the number of output filters.
kernel_size: An int with the kernel_size of the filters.
stride: An integer, the output stride.
rate: An integer, rate for atrous convolution.
scope: Scope.
Returns:
output: A 4-D tensor of size [batch, height_out, width_out, channels] with
the convolution output.
"""
if stride == 1:
return slim.conv2d(inputs, num_outputs, kernel_size, stride=1, rate=rate,
padding='SAME', scope=scope)
else:
kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
pad_total = kernel_size_effective - 1
pad_beg = pad_total // 2
pad_end = pad_total - pad_beg
inputs = tf.pad(
tensor=inputs,
paddings=[[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])
return slim.conv2d(inputs, num_outputs, kernel_size, stride=stride,
rate=rate, padding='VALID', scope=scope)
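# Illustrative usage sketch (not part of the original file): the explicit
# zero-padding path of conv2d_same on an even-sized input, where a plain
# 'SAME' convolution with stride 2 would align differently (see the docstring
# above). The sizes and the helper name are assumptions for this example only.
def _example_conv2d_same():
  inputs = tf.zeros([1, 224, 224, 3])
  # kernel 7, stride 2: pad_total = 6, so 3 zero rows/cols on each side are
  # added before a VALID convolution, producing a 112x112 output.
  return conv2d_same(inputs, num_outputs=64, kernel_size=7, stride=2,
                     scope='conv1')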
@slim.add_arg_scope
def stack_blocks_dense(net, blocks, output_stride=None,
store_non_strided_activations=False,
outputs_collections=None):
"""Stacks ResNet `Blocks` and controls output feature density.
First, this function creates scopes for the ResNet in the form of
'block_name/unit_1', 'block_name/unit_2', etc.
Second, this function allows the user to explicitly control the ResNet
output_stride, which is the ratio of the input to output spatial resolution.
This is useful for dense prediction tasks such as semantic segmentation or
object detection.
Most ResNets consist of 4 ResNet blocks and subsample the activations by a
  factor of 2 when transitioning between consecutive ResNet blocks. This results
  in a nominal ResNet output_stride equal to 8. If we set the output_stride to
half the nominal network stride (e.g., output_stride=4), then we compute
responses twice.
Control of the output feature density is implemented by atrous convolution.
Args:
net: A `Tensor` of size [batch, height, width, channels].
blocks: A list of length equal to the number of ResNet `Blocks`. Each
element is a ResNet `Block` object describing the units in the `Block`.
output_stride: If `None`, then the output will be computed at the nominal
network stride. If output_stride is not `None`, it specifies the requested
ratio of input to output spatial resolution, which needs to be equal to
the product of unit strides from the start up to some level of the ResNet.
For example, if the ResNet employs units with strides 1, 2, 1, 3, 4, 1,
then valid values for the output_stride are 1, 2, 6, 24 or None (which
is equivalent to output_stride=24).
store_non_strided_activations: If True, we compute non-strided (undecimated)
activations at the last unit of each block and store them in the
`outputs_collections` before subsampling them. This gives us access to
higher resolution intermediate activations which are useful in some
dense prediction problems but increases 4x the computation and memory cost
at the last unit of each block.
outputs_collections: Collection to add the ResNet block outputs.
Returns:
net: Output tensor with stride equal to the specified output_stride.
Raises:
ValueError: If the target output_stride is not valid.
"""
# The current_stride variable keeps track of the effective stride of the
# activations. This allows us to invoke atrous convolution whenever applying
# the next residual unit would result in the activations having stride larger
# than the target output_stride.
current_stride = 1
# The atrous convolution rate parameter.
rate = 1
for block in blocks:
with tf.variable_scope(block.scope, 'block', [net]) as sc:
block_stride = 1
for i, unit in enumerate(block.args):
if store_non_strided_activations and i == len(block.args) - 1:
# Move stride from the block's last unit to the end of the block.
block_stride = unit.get('stride', 1)
unit = dict(unit, stride=1)
with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
# If we have reached the target output_stride, then we need to employ
# atrous convolution with stride=1 and multiply the atrous rate by the
# current unit's stride for use in subsequent layers.
if output_stride is not None and current_stride == output_stride:
net = block.unit_fn(net, rate=rate, **dict(unit, stride=1))
rate *= unit.get('stride', 1)
else:
net = block.unit_fn(net, rate=1, **unit)
current_stride *= unit.get('stride', 1)
if output_stride is not None and current_stride > output_stride:
raise ValueError('The target output_stride cannot be reached.')
# Collect activations at the block's end before performing subsampling.
net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)
# Subsampling of the block's output activations.
if output_stride is not None and current_stride == output_stride:
rate *= block_stride
else:
net = subsample(net, block_stride)
current_stride *= block_stride
if output_stride is not None and current_stride > output_stride:
raise ValueError('The target output_stride cannot be reached.')
if output_stride is not None and current_stride != output_stride:
raise ValueError('The target output_stride cannot be reached.')
return net
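# Illustrative usage sketch (not part of the original file): stacking two tiny
# blocks with a reduced output_stride. The stand-in `unit_fn` below is NOT the
# real bottleneck unit (those live in resnet_v1.py / resnet_v2.py); it, the
# block sizes and the helper name are assumptions made only for this example.
def _example_stack_blocks():
  def unit_fn(net, depth, stride, rate=1):
    # A single 3x3 convolution stands in for a residual unit here.
    return conv2d_same(net, depth, 3, stride=stride, rate=rate)
  blocks = [
      Block('block1', unit_fn, [{'depth': 8, 'stride': 2}] * 2),
      Block('block2', unit_fn, [{'depth': 16, 'stride': 2}] * 2),
  ]
  inputs = tf.zeros([1, 32, 32, 3])
  # output_stride=2 lets only the first stride take effect; later units switch
  # to atrous (rate > 1) convolutions so the 16x16 resolution is preserved.
  return stack_blocks_dense(inputs, blocks, output_stride=2)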
def resnet_arg_scope(
weight_decay=0.0001,
batch_norm_decay=0.997,
batch_norm_epsilon=1e-5,
batch_norm_scale=True,
activation_fn=tf.nn.relu,
use_batch_norm=True,
batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS):
"""Defines the default ResNet arg scope.
TODO(gpapan): The batch-normalization related default values above are
appropriate for use in conjunction with the reference ResNet models
released at https://github.com/KaimingHe/deep-residual-networks. When
training ResNets from scratch, they might need to be tuned.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: The moving average decay when estimating layer activation
statistics in batch normalization.
batch_norm_epsilon: Small constant to prevent division by zero when
normalizing activations by their variance in batch normalization.
batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the
activations in the batch normalization layer.
activation_fn: The activation function which is used in ResNet.
use_batch_norm: Whether or not to use batch normalization.
batch_norm_updates_collections: Collection for the update ops for
batch norm.
Returns:
An `arg_scope` to use for the resnet models.
"""
batch_norm_params = {
'decay': batch_norm_decay,
'epsilon': batch_norm_epsilon,
'scale': batch_norm_scale,
'updates_collections': batch_norm_updates_collections,
'fused': None, # Use fused batch norm if possible.
}
with slim.arg_scope(
[slim.conv2d],
weights_regularizer=slim.l2_regularizer(weight_decay),
weights_initializer=slim.variance_scaling_initializer(),
activation_fn=activation_fn,
normalizer_fn=slim.batch_norm if use_batch_norm else None,
normalizer_params=batch_norm_params):
with slim.arg_scope([slim.batch_norm], **batch_norm_params):
# The following implies padding='SAME' for pool1, which makes feature
# alignment easier for dense prediction tasks. This is also used in
# https://github.com/facebook/fb.resnet.torch. However the accompanying
# code of 'Deep Residual Learning for Image Recognition' uses
# padding='VALID' for pool1. You can switch to that choice by setting
# slim.arg_scope([slim.max_pool2d], padding='VALID').
with slim.arg_scope([slim.max_pool2d], padding='SAME') as arg_sc:
return arg_sc
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/resnet_utils.py | resnet_utils.py |
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains a factory for building various models."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import tf_slim as slim
from nets import alexnet
from nets import cifarnet
from nets import i3d
from nets import inception
from nets import lenet
from nets import mobilenet_v1
from nets import overfeat
from nets import resnet_v1
from nets import resnet_v2
from nets import s3dg
from nets import vgg
from nets.mobilenet import mobilenet_v2
from nets.mobilenet import mobilenet_v3
from nets.nasnet import nasnet
from nets.nasnet import pnasnet
networks_map = {
'alexnet_v2': alexnet.alexnet_v2,
'cifarnet': cifarnet.cifarnet,
'overfeat': overfeat.overfeat,
'vgg_a': vgg.vgg_a,
'vgg_16': vgg.vgg_16,
'vgg_19': vgg.vgg_19,
'inception_v1': inception.inception_v1,
'inception_v2': inception.inception_v2,
'inception_v3': inception.inception_v3,
'inception_v4': inception.inception_v4,
'inception_resnet_v2': inception.inception_resnet_v2,
'i3d': i3d.i3d,
's3dg': s3dg.s3dg,
'lenet': lenet.lenet,
'resnet_v1_50': resnet_v1.resnet_v1_50,
'resnet_v1_101': resnet_v1.resnet_v1_101,
'resnet_v1_152': resnet_v1.resnet_v1_152,
'resnet_v1_200': resnet_v1.resnet_v1_200,
'resnet_v2_50': resnet_v2.resnet_v2_50,
'resnet_v2_101': resnet_v2.resnet_v2_101,
'resnet_v2_152': resnet_v2.resnet_v2_152,
'resnet_v2_200': resnet_v2.resnet_v2_200,
'mobilenet_v1': mobilenet_v1.mobilenet_v1,
'mobilenet_v1_075': mobilenet_v1.mobilenet_v1_075,
'mobilenet_v1_050': mobilenet_v1.mobilenet_v1_050,
'mobilenet_v1_025': mobilenet_v1.mobilenet_v1_025,
'mobilenet_v2': mobilenet_v2.mobilenet,
'mobilenet_v2_140': mobilenet_v2.mobilenet_v2_140,
'mobilenet_v2_035': mobilenet_v2.mobilenet_v2_035,
'mobilenet_v3_small': mobilenet_v3.small,
'mobilenet_v3_large': mobilenet_v3.large,
'mobilenet_v3_small_minimalistic': mobilenet_v3.small_minimalistic,
'mobilenet_v3_large_minimalistic': mobilenet_v3.large_minimalistic,
'mobilenet_edgetpu': mobilenet_v3.edge_tpu,
'mobilenet_edgetpu_075': mobilenet_v3.edge_tpu_075,
'nasnet_cifar': nasnet.build_nasnet_cifar,
'nasnet_mobile': nasnet.build_nasnet_mobile,
'nasnet_large': nasnet.build_nasnet_large,
'pnasnet_large': pnasnet.build_pnasnet_large,
'pnasnet_mobile': pnasnet.build_pnasnet_mobile,
}
arg_scopes_map = {
'alexnet_v2': alexnet.alexnet_v2_arg_scope,
'cifarnet': cifarnet.cifarnet_arg_scope,
'overfeat': overfeat.overfeat_arg_scope,
'vgg_a': vgg.vgg_arg_scope,
'vgg_16': vgg.vgg_arg_scope,
'vgg_19': vgg.vgg_arg_scope,
'inception_v1': inception.inception_v3_arg_scope,
'inception_v2': inception.inception_v3_arg_scope,
'inception_v3': inception.inception_v3_arg_scope,
'inception_v4': inception.inception_v4_arg_scope,
'inception_resnet_v2': inception.inception_resnet_v2_arg_scope,
'i3d': i3d.i3d_arg_scope,
's3dg': s3dg.s3dg_arg_scope,
'lenet': lenet.lenet_arg_scope,
'resnet_v1_50': resnet_v1.resnet_arg_scope,
'resnet_v1_101': resnet_v1.resnet_arg_scope,
'resnet_v1_152': resnet_v1.resnet_arg_scope,
'resnet_v1_200': resnet_v1.resnet_arg_scope,
'resnet_v2_50': resnet_v2.resnet_arg_scope,
'resnet_v2_101': resnet_v2.resnet_arg_scope,
'resnet_v2_152': resnet_v2.resnet_arg_scope,
'resnet_v2_200': resnet_v2.resnet_arg_scope,
'mobilenet_v1': mobilenet_v1.mobilenet_v1_arg_scope,
'mobilenet_v1_075': mobilenet_v1.mobilenet_v1_arg_scope,
'mobilenet_v1_050': mobilenet_v1.mobilenet_v1_arg_scope,
'mobilenet_v1_025': mobilenet_v1.mobilenet_v1_arg_scope,
'mobilenet_v2': mobilenet_v2.training_scope,
'mobilenet_v2_035': mobilenet_v2.training_scope,
'mobilenet_v2_140': mobilenet_v2.training_scope,
'mobilenet_v3_small': mobilenet_v3.training_scope,
'mobilenet_v3_large': mobilenet_v3.training_scope,
'mobilenet_v3_small_minimalistic': mobilenet_v3.training_scope,
'mobilenet_v3_large_minimalistic': mobilenet_v3.training_scope,
'mobilenet_edgetpu': mobilenet_v3.training_scope,
'mobilenet_edgetpu_075': mobilenet_v3.training_scope,
'nasnet_cifar': nasnet.nasnet_cifar_arg_scope,
'nasnet_mobile': nasnet.nasnet_mobile_arg_scope,
'nasnet_large': nasnet.nasnet_large_arg_scope,
'pnasnet_large': pnasnet.pnasnet_large_arg_scope,
'pnasnet_mobile': pnasnet.pnasnet_mobile_arg_scope,
}
def get_network_fn(name, num_classes, weight_decay=0.0, is_training=False):
"""Returns a network_fn such as `logits, end_points = network_fn(images)`.
Args:
name: The name of the network.
num_classes: The number of classes to use for classification. If 0 or None,
the logits layer is omitted and its input features are returned instead.
weight_decay: The l2 coefficient for the model weights.
is_training: `True` if the model is being used for training and `False`
otherwise.
Returns:
network_fn: A function that applies the model to a batch of images. It has
the following signature:
net, end_points = network_fn(images)
The `images` input is a tensor of shape [batch_size, height, width, 3 or
1] with height = width = network_fn.default_image_size. (The
permissibility and treatment of other sizes depends on the network_fn.)
The returned `end_points` are a dictionary of intermediate activations.
The returned `net` is the topmost layer, depending on `num_classes`:
If `num_classes` was a non-zero integer, `net` is a logits tensor
of shape [batch_size, num_classes].
If `num_classes` was 0 or `None`, `net` is a tensor with the input
to the logits layer of shape [batch_size, 1, 1, num_features] or
[batch_size, num_features]. Dropout has not been applied to this
      (even if the network's original classification head does); it remains for
the caller to do this or not.
Raises:
ValueError: If network `name` is not recognized.
"""
if name not in networks_map:
raise ValueError('Name of network unknown %s' % name)
func = networks_map[name]
@functools.wraps(func)
def network_fn(images, **kwargs):
arg_scope = arg_scopes_map[name](weight_decay=weight_decay)
with slim.arg_scope(arg_scope):
return func(images, num_classes=num_classes, is_training=is_training,
**kwargs)
if hasattr(func, 'default_image_size'):
network_fn.default_image_size = func.default_image_size
return network_fn
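# Illustrative usage sketch (not part of the original file): obtaining a
# network_fn from the factory and applying it to a dummy batch. The model name,
# batch size and helper name are assumptions made only for this example; the
# local TensorFlow import is needed because this module itself does not use tf.
def _example_get_network_fn():
  import tensorflow.compat.v1 as tf
  network_fn = get_network_fn(
      'mobilenet_v1', num_classes=10, weight_decay=4e-5, is_training=False)
  size = network_fn.default_image_size
  images = tf.zeros([2, size, size, 3])
  logits, end_points = network_fn(images)
  return logits, end_points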
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/nets_factory.py | nets_factory.py |
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""Tests for pix2pix."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import pix2pix
class GeneratorTest(tf.test.TestCase):
def _reduced_default_blocks(self):
"""Returns the default blocks, scaled down to make test run faster."""
return [pix2pix.Block(b.num_filters // 32, b.decoder_keep_prob)
for b in pix2pix._default_generator_blocks()]
def test_output_size_nn_upsample_conv(self):
batch_size = 2
height, width = 256, 256
num_outputs = 4
images = tf.ones((batch_size, height, width, 3))
with slim.arg_scope(pix2pix.pix2pix_arg_scope()):
logits, _ = pix2pix.pix2pix_generator(
images, num_outputs, blocks=self._reduced_default_blocks(),
upsample_method='nn_upsample_conv')
with self.test_session() as session:
session.run(tf.global_variables_initializer())
np_outputs = session.run(logits)
self.assertListEqual([batch_size, height, width, num_outputs],
list(np_outputs.shape))
def test_output_size_conv2d_transpose(self):
batch_size = 2
height, width = 256, 256
num_outputs = 4
images = tf.ones((batch_size, height, width, 3))
with slim.arg_scope(pix2pix.pix2pix_arg_scope()):
logits, _ = pix2pix.pix2pix_generator(
images, num_outputs, blocks=self._reduced_default_blocks(),
upsample_method='conv2d_transpose')
with self.test_session() as session:
session.run(tf.global_variables_initializer())
np_outputs = session.run(logits)
self.assertListEqual([batch_size, height, width, num_outputs],
list(np_outputs.shape))
def test_block_number_dictates_number_of_layers(self):
batch_size = 2
height, width = 256, 256
num_outputs = 4
images = tf.ones((batch_size, height, width, 3))
blocks = [
pix2pix.Block(64, 0.5),
pix2pix.Block(128, 0),
]
with slim.arg_scope(pix2pix.pix2pix_arg_scope()):
_, end_points = pix2pix.pix2pix_generator(
images, num_outputs, blocks)
num_encoder_layers = 0
num_decoder_layers = 0
for end_point in end_points:
if end_point.startswith('encoder'):
num_encoder_layers += 1
elif end_point.startswith('decoder'):
num_decoder_layers += 1
self.assertEqual(num_encoder_layers, len(blocks))
self.assertEqual(num_decoder_layers, len(blocks))
class DiscriminatorTest(tf.test.TestCase):
def _layer_output_size(self, input_size, kernel_size=4, stride=2, pad=2):
return (input_size + pad * 2 - kernel_size) // stride + 1
def test_four_layers(self):
batch_size = 2
input_size = 256
output_size = self._layer_output_size(input_size)
output_size = self._layer_output_size(output_size)
output_size = self._layer_output_size(output_size)
output_size = self._layer_output_size(output_size, stride=1)
output_size = self._layer_output_size(output_size, stride=1)
images = tf.ones((batch_size, input_size, input_size, 3))
with slim.arg_scope(pix2pix.pix2pix_arg_scope()):
logits, end_points = pix2pix.pix2pix_discriminator(
images, num_filters=[64, 128, 256, 512])
self.assertListEqual([batch_size, output_size, output_size, 1],
logits.shape.as_list())
self.assertListEqual([batch_size, output_size, output_size, 1],
end_points['predictions'].shape.as_list())
def test_four_layers_no_padding(self):
batch_size = 2
input_size = 256
output_size = self._layer_output_size(input_size, pad=0)
output_size = self._layer_output_size(output_size, pad=0)
output_size = self._layer_output_size(output_size, pad=0)
output_size = self._layer_output_size(output_size, stride=1, pad=0)
output_size = self._layer_output_size(output_size, stride=1, pad=0)
images = tf.ones((batch_size, input_size, input_size, 3))
with slim.arg_scope(pix2pix.pix2pix_arg_scope()):
logits, end_points = pix2pix.pix2pix_discriminator(
images, num_filters=[64, 128, 256, 512], padding=0)
self.assertListEqual([batch_size, output_size, output_size, 1],
logits.shape.as_list())
self.assertListEqual([batch_size, output_size, output_size, 1],
end_points['predictions'].shape.as_list())
  def test_four_layers_wrong_padding(self):
batch_size = 2
input_size = 256
images = tf.ones((batch_size, input_size, input_size, 3))
with slim.arg_scope(pix2pix.pix2pix_arg_scope()):
with self.assertRaises(TypeError):
pix2pix.pix2pix_discriminator(
images, num_filters=[64, 128, 256, 512], padding=1.5)
def test_four_layers_negative_padding(self):
batch_size = 2
input_size = 256
images = tf.ones((batch_size, input_size, input_size, 3))
with slim.arg_scope(pix2pix.pix2pix_arg_scope()):
with self.assertRaises(ValueError):
pix2pix.pix2pix_discriminator(
images, num_filters=[64, 128, 256, 512], padding=-1)
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/pix2pix_test.py | pix2pix_test.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition for inception v2 classification network."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception_utils
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def inception_v2_base(inputs,
final_endpoint='Mixed_5c',
min_depth=16,
depth_multiplier=1.0,
use_separable_conv=True,
data_format='NHWC',
include_root_block=True,
scope=None):
"""Inception v2 (6a2).
Constructs an Inception v2 network from inputs to the given final endpoint.
This method can construct the network up to the layer inception(5b) as
described in http://arxiv.org/abs/1502.03167.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c', 'Mixed_4a',
'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e', 'Mixed_5a', 'Mixed_5b',
'Mixed_5c']. If include_root_block is False, ['Conv2d_1a_7x7',
'MaxPool_2a_3x3', 'Conv2d_2b_1x1', 'Conv2d_2c_3x3', 'MaxPool_3a_3x3'] will
not be available.
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
use_separable_conv: Use a separable convolution for the first layer
Conv2d_1a_7x7. If this is False, use a normal convolution instead.
data_format: Data format of the activations ('NHWC' or 'NCHW').
include_root_block: If True, include the convolution and max-pooling layers
before the inception modules. If False, excludes those layers.
scope: Optional variable_scope.
Returns:
tensor_out: output tensor corresponding to the final_endpoint.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or depth_multiplier <= 0
"""
# end_points will collect relevant activations for external use, for example
# summaries or losses.
end_points = {}
# Used to find thinned depths for each layer.
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
depth = lambda d: max(int(d * depth_multiplier), min_depth)
if data_format != 'NHWC' and data_format != 'NCHW':
raise ValueError('data_format must be either NHWC or NCHW.')
if data_format == 'NCHW' and use_separable_conv:
raise ValueError(
'separable convolution only supports NHWC layout. NCHW data format can'
' only be used when use_separable_conv is False.'
)
concat_dim = 3 if data_format == 'NHWC' else 1
with tf.variable_scope(scope, 'InceptionV2', [inputs]):
with slim.arg_scope(
[slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1,
padding='SAME',
data_format=data_format):
net = inputs
if include_root_block:
# Note that sizes in the comments below assume an input spatial size of
        # 224x224; however, the inputs can be of any size greater than 32x32.
# 224 x 224 x 3
end_point = 'Conv2d_1a_7x7'
if use_separable_conv:
# depthwise_multiplier here is different from depth_multiplier.
# depthwise_multiplier determines the output channels of the initial
# depthwise conv (see docs for tf.nn.separable_conv2d), while
# depth_multiplier controls the # channels of the subsequent 1x1
# convolution. Must have
          # in_channels * depthwise_multiplier <= out_channels
# so that the separable convolution is not overparameterized.
depthwise_multiplier = min(int(depth(64) / 3), 8)
net = slim.separable_conv2d(
inputs,
depth(64), [7, 7],
depth_multiplier=depthwise_multiplier,
stride=2,
padding='SAME',
weights_initializer=trunc_normal(1.0),
scope=end_point)
else:
# Use a normal convolution instead of a separable convolution.
net = slim.conv2d(
inputs,
depth(64), [7, 7],
stride=2,
weights_initializer=trunc_normal(1.0),
scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 112 x 112 x 64
end_point = 'MaxPool_2a_3x3'
net = slim.max_pool2d(net, [3, 3], scope=end_point, stride=2)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 56 x 56 x 64
end_point = 'Conv2d_2b_1x1'
net = slim.conv2d(
net,
depth(64), [1, 1],
scope=end_point,
weights_initializer=trunc_normal(0.1))
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 56 x 56 x 64
end_point = 'Conv2d_2c_3x3'
net = slim.conv2d(net, depth(192), [3, 3], scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 56 x 56 x 192
end_point = 'MaxPool_3a_3x3'
net = slim.max_pool2d(net, [3, 3], scope=end_point, stride=2)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 28 x 28 x 192
# Inception module.
end_point = 'Mixed_3b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(64), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(32), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 28 x 28 x 256
end_point = 'Mixed_3c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(64), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 28 x 28 x 320
end_point = 'Mixed_4a'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, depth(160), [3, 3], stride=2,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(
branch_1, depth(96), [3, 3], scope='Conv2d_0b_3x3')
branch_1 = slim.conv2d(
branch_1, depth(96), [3, 3], stride=2, scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(
net, [3, 3], stride=2, scope='MaxPool_1a_3x3')
net = tf.concat(axis=concat_dim, values=[branch_0, branch_1, branch_2])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_4b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(224), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(
branch_1, depth(96), [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(96), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(128), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(128), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(128), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_4c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(96), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(128), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(96), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(128), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(128), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(128), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_4d'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(160), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(160), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(160), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(96), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_4e'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(96), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(192), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(160), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(192), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(192), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(96), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_5a'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, depth(192), [3, 3], stride=2,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(192), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(256), [3, 3],
scope='Conv2d_0b_3x3')
branch_1 = slim.conv2d(branch_1, depth(256), [3, 3], stride=2,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(net, [3, 3], stride=2,
scope='MaxPool_1a_3x3')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 7 x 7 x 1024
end_point = 'Mixed_5b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(352), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(192), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(320), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(160), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(224), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(224), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(128), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 7 x 7 x 1024
end_point = 'Mixed_5c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(352), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(192), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(320), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(192), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(224), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(224), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(128), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
def inception_v2(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.8,
min_depth=16,
depth_multiplier=1.0,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='InceptionV2',
global_pool=False):
"""Inception v2 model for classification.
Constructs an Inception v2 network for classification as described in
http://arxiv.org/abs/1502.03167.
The default image size used to train this network is 224x224.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
is_training: whether is training or not.
dropout_keep_prob: the percentage of activation values that are retained.
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
prediction_fn: a function to get predictions out of logits.
spatial_squeeze: if True, logits is of shape [B, C], if false logits is of
shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
global_pool: Optional boolean flag to control the avgpooling before the
logits layer. If false or unset, pooling is done with a fixed window
that reduces default-sized inputs to 1x1, while larger inputs lead to
larger outputs. If true, any input size is pooled down to 1x1.
Returns:
net: a Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
Raises:
    ValueError: if depth_multiplier <= 0.
"""
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
# Final pooling and prediction
with tf.variable_scope(
scope, 'InceptionV2', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v2_base(
inputs, scope=scope, min_depth=min_depth,
depth_multiplier=depth_multiplier)
with tf.variable_scope('Logits'):
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
else:
# Pooling with a fixed kernel size.
kernel_size = _reduced_kernel_size_for_small_input(net, [7, 7])
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a_{}x{}'.format(*kernel_size))
end_points['AvgPool_1a'] = net
if not num_classes:
return net, end_points
# 1 x 1 x 1024
net = slim.dropout(net, keep_prob=dropout_keep_prob, scope='Dropout_1b')
end_points['PreLogits'] = net
logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='Conv2d_1c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
inception_v2.default_image_size = 224
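# Example usage (a minimal sketch, not part of the original file): building
# InceptionV2 classification logits under the arg scope exported at the bottom
# of this file. Batch size and class count are illustrative assumptions.
#
#   with slim.arg_scope(inception_v2_arg_scope()):
#     images = tf.placeholder(tf.float32, [8, 224, 224, 3])
#     logits, end_points = inception_v2(images, num_classes=1001,
#                                       is_training=False)
#   probabilities = end_points['Predictions']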
def _reduced_kernel_size_for_small_input(input_tensor, kernel_size):
"""Define kernel size which is automatically reduced for small input.
  If the shape of the input images is unknown at graph construction time, this
  function assumes that the input images are large enough.
Args:
input_tensor: input tensor of size [batch_size, height, width, channels].
kernel_size: desired kernel size of length 2: [kernel_height, kernel_width]
Returns:
a tensor with the kernel size.
TODO(jrru): Make this function work with unknown shapes. Theoretically, this
can be done with the code below. Problems are two-fold: (1) If the shape was
known, it will be lost. (2) inception.slim.ops._two_element_tuple cannot
handle tensors that define the kernel size.
shape = tf.shape(input_tensor)
      return tf.stack([tf.minimum(shape[1], kernel_size[0]),
                       tf.minimum(shape[2], kernel_size[1])])
"""
shape = input_tensor.get_shape().as_list()
if shape[1] is None or shape[2] is None:
kernel_size_out = kernel_size
else:
kernel_size_out = [min(shape[1], kernel_size[0]),
min(shape[2], kernel_size[1])]
return kernel_size_out
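# A quick worked example (an assumption, not from the original file): with a
# fully specified 5x5 feature map and a requested [7, 7] kernel, the helper
# returns [5, 5]; if either spatial dimension is None it falls back to [7, 7].
#
#   # net.get_shape().as_list() == [8, 5, 5, 1024]
#   # _reduced_kernel_size_for_small_input(net, [7, 7])  -> [5, 5]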
inception_v2_arg_scope = inception_utils.inception_arg_scope
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_v2.py | inception_v2.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains common code shared by all inception models.
Usage of arg scope:
with slim.arg_scope(inception_arg_scope()):
logits, end_points = inception.inception_v3(images, num_classes,
is_training=is_training)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
def inception_arg_scope(
weight_decay=0.00004,
use_batch_norm=True,
batch_norm_decay=0.9997,
batch_norm_epsilon=0.001,
activation_fn=tf.nn.relu,
batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS,
batch_norm_scale=False):
"""Defines the default arg scope for inception models.
Args:
weight_decay: The weight decay to use for regularizing the model.
use_batch_norm: "If `True`, batch_norm is applied after each convolution.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
activation_fn: Activation function for conv2d.
batch_norm_updates_collections: Collection for the update ops for
batch norm.
batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the
activations in the batch normalization layer.
Returns:
An `arg_scope` to use for the inception models.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
# collection containing update_ops.
'updates_collections': batch_norm_updates_collections,
# use fused batch norm if possible.
'fused': None,
'scale': batch_norm_scale,
}
if use_batch_norm:
normalizer_fn = slim.batch_norm
normalizer_params = batch_norm_params
else:
normalizer_fn = None
normalizer_params = {}
# Set weight_decay for weights in Conv and FC layers.
with slim.arg_scope([slim.conv2d, slim.fully_connected],
weights_regularizer=slim.l2_regularizer(weight_decay)):
with slim.arg_scope(
[slim.conv2d],
weights_initializer=slim.variance_scaling_initializer(),
activation_fn=activation_fn,
normalizer_fn=normalizer_fn,
normalizer_params=normalizer_params) as sc:
return sc
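# A minimal sketch (not part of the original file) showing how the defaults can
# be overridden; the override values and the inception_v3 call are illustrative
# assumptions mirroring the usage note in the module docstring.
#
#   with slim.arg_scope(inception_arg_scope(weight_decay=1e-4,
#                                           use_batch_norm=False)):
#     logits, end_points = inception.inception_v3(images, num_classes,
#                                                 is_training=is_training)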
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_utils.py | inception_utils.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.inception_resnet_v2."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception
class InceptionTest(tf.test.TestCase):
def testBuildLogits(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, endpoints = inception.inception_resnet_v2(inputs, num_classes)
self.assertTrue('AuxLogits' in endpoints)
auxlogits = endpoints['AuxLogits']
self.assertTrue(
auxlogits.op.name.startswith('InceptionResnetV2/AuxLogits'))
self.assertListEqual(auxlogits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue(logits.op.name.startswith('InceptionResnetV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testBuildWithoutAuxLogits(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, endpoints = inception.inception_resnet_v2(inputs, num_classes,
create_aux_logits=False)
self.assertTrue('AuxLogits' not in endpoints)
self.assertTrue(logits.op.name.startswith('InceptionResnetV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testBuildNoClasses(self):
batch_size = 5
height, width = 299, 299
num_classes = None
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
net, endpoints = inception.inception_resnet_v2(inputs, num_classes)
self.assertTrue('AuxLogits' not in endpoints)
self.assertTrue('Logits' not in endpoints)
self.assertTrue(
net.op.name.startswith('InceptionResnetV2/Logits/AvgPool'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 1, 1, 1536])
def testBuildEndPoints(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_resnet_v2(inputs, num_classes)
self.assertTrue('Logits' in end_points)
logits = end_points['Logits']
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('AuxLogits' in end_points)
aux_logits = end_points['AuxLogits']
self.assertListEqual(aux_logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Conv2d_7b_1x1']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 8, 8, 1536])
def testBuildBaseNetwork(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = inception.inception_resnet_v2_base(inputs)
self.assertTrue(net.op.name.startswith('InceptionResnetV2/Conv2d_7b_1x1'))
self.assertListEqual(net.get_shape().as_list(),
[batch_size, 8, 8, 1536])
expected_endpoints = ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3',
'MaxPool_5a_3x3', 'Mixed_5b', 'Mixed_6a',
'PreAuxLogits', 'Mixed_7a', 'Conv2d_7b_1x1']
self.assertItemsEqual(end_points.keys(), expected_endpoints)
def testBuildOnlyUptoFinalEndpoint(self):
batch_size = 5
height, width = 299, 299
endpoints = ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3',
'MaxPool_5a_3x3', 'Mixed_5b', 'Mixed_6a',
'PreAuxLogits', 'Mixed_7a', 'Conv2d_7b_1x1']
for index, endpoint in enumerate(endpoints):
with tf.Graph().as_default():
inputs = tf.random.uniform((batch_size, height, width, 3))
out_tensor, end_points = inception.inception_resnet_v2_base(
inputs, final_endpoint=endpoint)
if endpoint != 'PreAuxLogits':
self.assertTrue(out_tensor.op.name.startswith(
'InceptionResnetV2/' + endpoint))
self.assertItemsEqual(endpoints[:index + 1], end_points.keys())
def testBuildAndCheckAllEndPointsUptoPreAuxLogits(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_resnet_v2_base(
inputs, final_endpoint='PreAuxLogits')
endpoints_shapes = {'Conv2d_1a_3x3': [5, 149, 149, 32],
'Conv2d_2a_3x3': [5, 147, 147, 32],
'Conv2d_2b_3x3': [5, 147, 147, 64],
'MaxPool_3a_3x3': [5, 73, 73, 64],
'Conv2d_3b_1x1': [5, 73, 73, 80],
'Conv2d_4a_3x3': [5, 71, 71, 192],
'MaxPool_5a_3x3': [5, 35, 35, 192],
'Mixed_5b': [5, 35, 35, 320],
'Mixed_6a': [5, 17, 17, 1088],
'PreAuxLogits': [5, 17, 17, 1088]
}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testBuildAndCheckAllEndPointsUptoPreAuxLogitsWithAlignedFeatureMaps(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_resnet_v2_base(
inputs, final_endpoint='PreAuxLogits', align_feature_maps=True)
endpoints_shapes = {'Conv2d_1a_3x3': [5, 150, 150, 32],
'Conv2d_2a_3x3': [5, 150, 150, 32],
'Conv2d_2b_3x3': [5, 150, 150, 64],
'MaxPool_3a_3x3': [5, 75, 75, 64],
'Conv2d_3b_1x1': [5, 75, 75, 80],
'Conv2d_4a_3x3': [5, 75, 75, 192],
'MaxPool_5a_3x3': [5, 38, 38, 192],
'Mixed_5b': [5, 38, 38, 320],
'Mixed_6a': [5, 19, 19, 1088],
'PreAuxLogits': [5, 19, 19, 1088]
}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testBuildAndCheckAllEndPointsUptoPreAuxLogitsWithOutputStrideEight(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_resnet_v2_base(
inputs, final_endpoint='PreAuxLogits', output_stride=8)
endpoints_shapes = {'Conv2d_1a_3x3': [5, 149, 149, 32],
'Conv2d_2a_3x3': [5, 147, 147, 32],
'Conv2d_2b_3x3': [5, 147, 147, 64],
'MaxPool_3a_3x3': [5, 73, 73, 64],
'Conv2d_3b_1x1': [5, 73, 73, 80],
'Conv2d_4a_3x3': [5, 71, 71, 192],
'MaxPool_5a_3x3': [5, 35, 35, 192],
'Mixed_5b': [5, 35, 35, 320],
'Mixed_6a': [5, 33, 33, 1088],
'PreAuxLogits': [5, 33, 33, 1088]
}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testVariablesSetDevice(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
# Force all Variables to reside on the device.
with tf.variable_scope('on_cpu'), tf.device('/cpu:0'):
inception.inception_resnet_v2(inputs, num_classes)
with tf.variable_scope('on_gpu'), tf.device('/gpu:0'):
inception.inception_resnet_v2(inputs, num_classes)
for v in tf.get_collection(
tf.GraphKeys.GLOBAL_VARIABLES, scope='on_cpu'):
self.assertDeviceEqual(v.device, '/cpu:0')
for v in tf.get_collection(
tf.GraphKeys.GLOBAL_VARIABLES, scope='on_gpu'):
self.assertDeviceEqual(v.device, '/gpu:0')
def testHalfSizeImages(self):
batch_size = 5
height, width = 150, 150
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_resnet_v2(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionResnetV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Conv2d_7b_1x1']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 3, 3, 1536])
def testGlobalPool(self):
batch_size = 1
height, width = 330, 400
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_resnet_v2(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionResnetV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Conv2d_7b_1x1']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 8, 11, 1536])
def testGlobalPoolUnknownImageShape(self):
batch_size = 1
height, width = 330, 400
num_classes = 1000
with self.test_session() as sess:
inputs = tf.placeholder(tf.float32, (batch_size, None, None, 3))
logits, end_points = inception.inception_resnet_v2(
inputs, num_classes, create_aux_logits=False)
self.assertTrue(logits.op.name.startswith('InceptionResnetV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Conv2d_7b_1x1']
images = tf.random.uniform((batch_size, height, width, 3))
sess.run(tf.global_variables_initializer())
logits_out, pre_pool_out = sess.run([logits, pre_pool],
{inputs: images.eval()})
self.assertTupleEqual(logits_out.shape, (batch_size, num_classes))
self.assertTupleEqual(pre_pool_out.shape, (batch_size, 8, 11, 1536))
def testUnknownBatchSize(self):
batch_size = 1
height, width = 299, 299
num_classes = 1000
with self.test_session() as sess:
inputs = tf.placeholder(tf.float32, (None, height, width, 3))
logits, _ = inception.inception_resnet_v2(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionResnetV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, num_classes])
images = tf.random.uniform((batch_size, height, width, 3))
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
self.assertEquals(output.shape, (batch_size, num_classes))
def testEvaluation(self):
batch_size = 2
height, width = 299, 299
num_classes = 1000
with self.test_session() as sess:
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = inception.inception_resnet_v2(eval_inputs,
num_classes,
is_training=False)
predictions = tf.argmax(input=logits, axis=1)
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (batch_size,))
def testTrainEvalWithReuse(self):
train_batch_size = 5
eval_batch_size = 2
height, width = 150, 150
num_classes = 1000
with self.test_session() as sess:
train_inputs = tf.random.uniform((train_batch_size, height, width, 3))
inception.inception_resnet_v2(train_inputs, num_classes)
eval_inputs = tf.random.uniform((eval_batch_size, height, width, 3))
logits, _ = inception.inception_resnet_v2(eval_inputs,
num_classes,
is_training=False,
reuse=True)
predictions = tf.argmax(input=logits, axis=1)
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (eval_batch_size,))
def testNoBatchNormScaleByDefault(self):
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(inception.inception_resnet_v2_arg_scope()):
inception.inception_resnet_v2(inputs, num_classes, is_training=False)
self.assertEqual(tf.global_variables('.*/BatchNorm/gamma:0$'), [])
def testBatchNormScale(self):
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(
inception.inception_resnet_v2_arg_scope(batch_norm_scale=True)):
inception.inception_resnet_v2(inputs, num_classes, is_training=False)
gamma_names = set(
v.op.name
for v in tf.global_variables('.*/BatchNorm/gamma:0$'))
self.assertGreater(len(gamma_names), 0)
for v in tf.global_variables('.*/BatchNorm/moving_mean:0$'):
self.assertIn(v.op.name[:-len('moving_mean')] + 'gamma', gamma_names)
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_resnet_v2_test.py | inception_resnet_v2_test.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.nets.resnet_v1."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import resnet_utils
from nets import resnet_v1
tf.disable_resource_variables()
def create_test_input(batch_size, height, width, channels):
"""Create test input tensor.
Args:
batch_size: The number of images per batch or `None` if unknown.
height: The height of each image or `None` if unknown.
width: The width of each image or `None` if unknown.
channels: The number of channels per image or `None` if unknown.
Returns:
Either a placeholder `Tensor` of dimension
[batch_size, height, width, channels] if any of the inputs are `None` or a
constant `Tensor` with the mesh grid values along the spatial dimensions.
"""
if None in [batch_size, height, width, channels]:
return tf.placeholder(tf.float32, (batch_size, height, width, channels))
else:
return tf.cast(
np.tile(
np.reshape(
np.reshape(np.arange(height), [height, 1]) +
np.reshape(np.arange(width), [1, width]),
[1, height, width, 1]), [batch_size, 1, 1, channels]),
dtype=tf.float32)
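# Worked example (an assumption, not part of the original test): for a fully
# specified 1x3x3x1 request the returned constant holds the mesh-grid values
# row_index + col_index, i.e.
#
#   create_test_input(1, 3, 3, 1) ->
#   [[[[0.], [1.], [2.]],
#     [[1.], [2.], [3.]],
#     [[2.], [3.], [4.]]]]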
class ResnetUtilsTest(tf.test.TestCase):
def testSubsampleThreeByThree(self):
x = tf.reshape(tf.cast(tf.range(9), dtype=tf.float32), [1, 3, 3, 1])
x = resnet_utils.subsample(x, 2)
expected = tf.reshape(tf.constant([0, 2, 6, 8]), [1, 2, 2, 1])
with self.test_session():
self.assertAllClose(x.eval(), expected.eval())
def testSubsampleFourByFour(self):
x = tf.reshape(tf.cast(tf.range(16), dtype=tf.float32), [1, 4, 4, 1])
x = resnet_utils.subsample(x, 2)
expected = tf.reshape(tf.constant([0, 2, 8, 10]), [1, 2, 2, 1])
with self.test_session():
self.assertAllClose(x.eval(), expected.eval())
def testConv2DSameEven(self):
n, n2 = 4, 2
# Input image.
x = create_test_input(1, n, n, 1)
# Convolution kernel.
w = create_test_input(1, 3, 3, 1)
w = tf.reshape(w, [3, 3, 1, 1])
tf.get_variable('Conv/weights', initializer=w)
tf.get_variable('Conv/biases', initializer=tf.zeros([1]))
tf.get_variable_scope().reuse_variables()
y1 = slim.conv2d(x, 1, [3, 3], stride=1, scope='Conv')
y1_expected = tf.cast([[14, 28, 43, 26], [28, 48, 66, 37], [43, 66, 84, 46],
[26, 37, 46, 22]],
dtype=tf.float32)
y1_expected = tf.reshape(y1_expected, [1, n, n, 1])
y2 = resnet_utils.subsample(y1, 2)
y2_expected = tf.cast([[14, 43], [43, 84]], dtype=tf.float32)
y2_expected = tf.reshape(y2_expected, [1, n2, n2, 1])
y3 = resnet_utils.conv2d_same(x, 1, 3, stride=2, scope='Conv')
y3_expected = y2_expected
y4 = slim.conv2d(x, 1, [3, 3], stride=2, scope='Conv')
y4_expected = tf.cast([[48, 37], [37, 22]], dtype=tf.float32)
y4_expected = tf.reshape(y4_expected, [1, n2, n2, 1])
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
self.assertAllClose(y1.eval(), y1_expected.eval())
self.assertAllClose(y2.eval(), y2_expected.eval())
self.assertAllClose(y3.eval(), y3_expected.eval())
self.assertAllClose(y4.eval(), y4_expected.eval())
def testConv2DSameOdd(self):
n, n2 = 5, 3
# Input image.
x = create_test_input(1, n, n, 1)
# Convolution kernel.
w = create_test_input(1, 3, 3, 1)
w = tf.reshape(w, [3, 3, 1, 1])
tf.get_variable('Conv/weights', initializer=w)
tf.get_variable('Conv/biases', initializer=tf.zeros([1]))
tf.get_variable_scope().reuse_variables()
y1 = slim.conv2d(x, 1, [3, 3], stride=1, scope='Conv')
y1_expected = tf.cast(
[[14, 28, 43, 58, 34], [28, 48, 66, 84, 46], [43, 66, 84, 102, 55],
[58, 84, 102, 120, 64], [34, 46, 55, 64, 30]],
dtype=tf.float32)
y1_expected = tf.reshape(y1_expected, [1, n, n, 1])
y2 = resnet_utils.subsample(y1, 2)
y2_expected = tf.cast([[14, 43, 34], [43, 84, 55], [34, 55, 30]],
dtype=tf.float32)
y2_expected = tf.reshape(y2_expected, [1, n2, n2, 1])
y3 = resnet_utils.conv2d_same(x, 1, 3, stride=2, scope='Conv')
y3_expected = y2_expected
y4 = slim.conv2d(x, 1, [3, 3], stride=2, scope='Conv')
y4_expected = y2_expected
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
self.assertAllClose(y1.eval(), y1_expected.eval())
self.assertAllClose(y2.eval(), y2_expected.eval())
self.assertAllClose(y3.eval(), y3_expected.eval())
self.assertAllClose(y4.eval(), y4_expected.eval())
def _resnet_plain(self, inputs, blocks, output_stride=None, scope=None):
"""A plain ResNet without extra layers before or after the ResNet blocks."""
with tf.variable_scope(scope, values=[inputs]):
with slim.arg_scope([slim.conv2d], outputs_collections='end_points'):
net = resnet_utils.stack_blocks_dense(inputs, blocks, output_stride)
end_points = slim.utils.convert_collection_to_dict('end_points')
return net, end_points
def testEndPointsV1(self):
"""Test the end points of a tiny v1 bottleneck network."""
blocks = [
resnet_v1.resnet_v1_block(
'block1', base_depth=1, num_units=2, stride=2),
resnet_v1.resnet_v1_block(
'block2', base_depth=2, num_units=2, stride=1),
]
inputs = create_test_input(2, 32, 16, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_plain(inputs, blocks, scope='tiny')
expected = [
'tiny/block1/unit_1/bottleneck_v1/shortcut',
'tiny/block1/unit_1/bottleneck_v1/conv1',
'tiny/block1/unit_1/bottleneck_v1/conv2',
'tiny/block1/unit_1/bottleneck_v1/conv3',
'tiny/block1/unit_2/bottleneck_v1/conv1',
'tiny/block1/unit_2/bottleneck_v1/conv2',
'tiny/block1/unit_2/bottleneck_v1/conv3',
'tiny/block2/unit_1/bottleneck_v1/shortcut',
'tiny/block2/unit_1/bottleneck_v1/conv1',
'tiny/block2/unit_1/bottleneck_v1/conv2',
'tiny/block2/unit_1/bottleneck_v1/conv3',
'tiny/block2/unit_2/bottleneck_v1/conv1',
'tiny/block2/unit_2/bottleneck_v1/conv2',
'tiny/block2/unit_2/bottleneck_v1/conv3']
self.assertItemsEqual(expected, end_points.keys())
def _stack_blocks_nondense(self, net, blocks):
"""A simplified ResNet Block stacker without output stride control."""
for block in blocks:
with tf.variable_scope(block.scope, 'block', [net]):
for i, unit in enumerate(block.args):
with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
net = block.unit_fn(net, rate=1, **unit)
return net
def testAtrousValuesBottleneck(self):
"""Verify the values of dense feature extraction by atrous convolution.
Make sure that dense feature extraction by stack_blocks_dense() followed by
subsampling gives identical results to feature extraction at the nominal
network output stride using the simple self._stack_blocks_nondense() above.
"""
block = resnet_v1.resnet_v1_block
blocks = [
block('block1', base_depth=1, num_units=2, stride=2),
block('block2', base_depth=2, num_units=2, stride=2),
block('block3', base_depth=4, num_units=2, stride=2),
block('block4', base_depth=8, num_units=2, stride=1),
]
nominal_stride = 8
# Test both odd and even input dimensions.
height = 30
width = 31
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
with slim.arg_scope([slim.batch_norm], is_training=False):
for output_stride in [1, 2, 4, 8, None]:
with tf.Graph().as_default():
with self.test_session() as sess:
tf.set_random_seed(0)
inputs = create_test_input(1, height, width, 3)
# Dense feature extraction followed by subsampling.
output = resnet_utils.stack_blocks_dense(inputs,
blocks,
output_stride)
if output_stride is None:
factor = 1
else:
factor = nominal_stride // output_stride
output = resnet_utils.subsample(output, factor)
# Make the two networks use the same weights.
tf.get_variable_scope().reuse_variables()
# Feature extraction at the nominal network rate.
expected = self._stack_blocks_nondense(inputs, blocks)
sess.run(tf.global_variables_initializer())
output, expected = sess.run([output, expected])
self.assertAllClose(output, expected, atol=1e-4, rtol=1e-4)
def testStridingLastUnitVsSubsampleBlockEnd(self):
"""Compares subsampling at the block's last unit or block's end.
    Makes sure that the final output is the same whether we use a stride at the
    last unit of a block or subsample activations at the end of a block.
"""
block = resnet_v1.resnet_v1_block
blocks = [
block('block1', base_depth=1, num_units=2, stride=2),
block('block2', base_depth=2, num_units=2, stride=2),
block('block3', base_depth=4, num_units=2, stride=2),
block('block4', base_depth=8, num_units=2, stride=1),
]
# Test both odd and even input dimensions.
height = 30
width = 31
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
with slim.arg_scope([slim.batch_norm], is_training=False):
for output_stride in [1, 2, 4, 8, None]:
with tf.Graph().as_default():
with self.test_session() as sess:
tf.set_random_seed(0)
inputs = create_test_input(1, height, width, 3)
# Subsampling at the last unit of the block.
output = resnet_utils.stack_blocks_dense(
inputs, blocks, output_stride,
store_non_strided_activations=False,
outputs_collections='output')
output_end_points = slim.utils.convert_collection_to_dict(
'output')
# Make the two networks use the same weights.
tf.get_variable_scope().reuse_variables()
# Subsample activations at the end of the blocks.
expected = resnet_utils.stack_blocks_dense(
inputs, blocks, output_stride,
store_non_strided_activations=True,
outputs_collections='expected')
expected_end_points = slim.utils.convert_collection_to_dict(
'expected')
sess.run(tf.global_variables_initializer())
# Make sure that the final output is the same.
output, expected = sess.run([output, expected])
self.assertAllClose(output, expected, atol=1e-4, rtol=1e-4)
# Make sure that intermediate block activations in
# output_end_points are subsampled versions of the corresponding
# ones in expected_end_points.
for i, block in enumerate(blocks[:-1:]):
output = output_end_points[block.scope]
expected = expected_end_points[block.scope]
atrous_activated = (output_stride is not None and
2 ** i >= output_stride)
if not atrous_activated:
expected = resnet_utils.subsample(expected, 2)
output, expected = sess.run([output, expected])
self.assertAllClose(output, expected, atol=1e-4, rtol=1e-4)
class ResnetCompleteNetworkTest(tf.test.TestCase):
"""Tests with complete small ResNet v1 networks."""
def _resnet_small(self,
inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
include_root_block=True,
spatial_squeeze=True,
reuse=None,
scope='resnet_v1_small'):
"""A shallow and thin ResNet v1 for faster tests."""
block = resnet_v1.resnet_v1_block
blocks = [
block('block1', base_depth=1, num_units=3, stride=2),
block('block2', base_depth=2, num_units=3, stride=2),
block('block3', base_depth=4, num_units=3, stride=2),
block('block4', base_depth=8, num_units=2, stride=1),
]
return resnet_v1.resnet_v1(inputs, blocks, num_classes,
is_training=is_training,
global_pool=global_pool,
output_stride=output_stride,
include_root_block=include_root_block,
spatial_squeeze=spatial_squeeze,
reuse=reuse,
scope=scope)
def testClassificationEndPoints(self):
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
logits, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
spatial_squeeze=False,
scope='resnet')
self.assertTrue(logits.op.name.startswith('resnet/logits'))
self.assertListEqual(logits.get_shape().as_list(), [2, 1, 1, num_classes])
self.assertTrue('predictions' in end_points)
self.assertListEqual(end_points['predictions'].get_shape().as_list(),
[2, 1, 1, num_classes])
self.assertTrue('global_pool' in end_points)
self.assertListEqual(end_points['global_pool'].get_shape().as_list(),
[2, 1, 1, 32])
def testClassificationEndPointsWithNoBatchNormArgscope(self):
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
logits, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
spatial_squeeze=False,
is_training=None,
scope='resnet')
self.assertTrue(logits.op.name.startswith('resnet/logits'))
self.assertListEqual(logits.get_shape().as_list(), [2, 1, 1, num_classes])
self.assertTrue('predictions' in end_points)
self.assertListEqual(end_points['predictions'].get_shape().as_list(),
[2, 1, 1, num_classes])
self.assertTrue('global_pool' in end_points)
self.assertListEqual(end_points['global_pool'].get_shape().as_list(),
[2, 1, 1, 32])
def testEndpointNames(self):
# Like ResnetUtilsTest.testEndPointsV1(), but for the public API.
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
scope='resnet')
expected = ['resnet/conv1']
for block in range(1, 5):
for unit in range(1, 4 if block < 4 else 3):
for conv in range(1, 4):
expected.append('resnet/block%d/unit_%d/bottleneck_v1/conv%d' %
(block, unit, conv))
expected.append('resnet/block%d/unit_%d/bottleneck_v1' % (block, unit))
expected.append('resnet/block%d/unit_1/bottleneck_v1/shortcut' % block)
expected.append('resnet/block%d' % block)
expected.extend(['global_pool', 'resnet/logits', 'resnet/spatial_squeeze',
'predictions'])
self.assertItemsEqual(end_points.keys(), expected)
def testClassificationShapes(self):
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
scope='resnet')
endpoint_to_shape = {
'resnet/block1': [2, 28, 28, 4],
'resnet/block2': [2, 14, 14, 8],
'resnet/block3': [2, 7, 7, 16],
'resnet/block4': [2, 7, 7, 32]}
for endpoint in endpoint_to_shape:
shape = endpoint_to_shape[endpoint]
self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
def testFullyConvolutionalEndpointShapes(self):
global_pool = False
num_classes = 10
inputs = create_test_input(2, 321, 321, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
spatial_squeeze=False,
scope='resnet')
endpoint_to_shape = {
'resnet/block1': [2, 41, 41, 4],
'resnet/block2': [2, 21, 21, 8],
'resnet/block3': [2, 11, 11, 16],
'resnet/block4': [2, 11, 11, 32]}
for endpoint in endpoint_to_shape:
shape = endpoint_to_shape[endpoint]
self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
def testRootlessFullyConvolutionalEndpointShapes(self):
global_pool = False
num_classes = 10
inputs = create_test_input(2, 128, 128, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
include_root_block=False,
spatial_squeeze=False,
scope='resnet')
endpoint_to_shape = {
'resnet/block1': [2, 64, 64, 4],
'resnet/block2': [2, 32, 32, 8],
'resnet/block3': [2, 16, 16, 16],
'resnet/block4': [2, 16, 16, 32]}
for endpoint in endpoint_to_shape:
shape = endpoint_to_shape[endpoint]
self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
def testAtrousFullyConvolutionalEndpointShapes(self):
global_pool = False
num_classes = 10
output_stride = 8
inputs = create_test_input(2, 321, 321, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs,
num_classes,
global_pool=global_pool,
output_stride=output_stride,
spatial_squeeze=False,
scope='resnet')
endpoint_to_shape = {
'resnet/block1': [2, 41, 41, 4],
'resnet/block2': [2, 41, 41, 8],
'resnet/block3': [2, 41, 41, 16],
'resnet/block4': [2, 41, 41, 32]}
for endpoint in endpoint_to_shape:
shape = endpoint_to_shape[endpoint]
self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
def testAtrousFullyConvolutionalValues(self):
"""Verify dense feature extraction with atrous convolution."""
nominal_stride = 32
for output_stride in [4, 8, 16, 32, None]:
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
with tf.Graph().as_default():
with self.test_session() as sess:
tf.set_random_seed(0)
inputs = create_test_input(2, 81, 81, 3)
# Dense feature extraction followed by subsampling.
output, _ = self._resnet_small(inputs, None, is_training=False,
global_pool=False,
output_stride=output_stride)
if output_stride is None:
factor = 1
else:
factor = nominal_stride // output_stride
output = resnet_utils.subsample(output, factor)
# Make the two networks use the same weights.
tf.get_variable_scope().reuse_variables()
# Feature extraction at the nominal network rate.
expected, _ = self._resnet_small(inputs, None, is_training=False,
global_pool=False)
sess.run(tf.global_variables_initializer())
self.assertAllClose(output.eval(), expected.eval(),
atol=1e-4, rtol=1e-4)
def testUnknownBatchSize(self):
batch = 2
height, width = 65, 65
global_pool = True
num_classes = 10
inputs = create_test_input(None, height, width, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
logits, _ = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
spatial_squeeze=False,
scope='resnet')
self.assertTrue(logits.op.name.startswith('resnet/logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, 1, 1, num_classes])
images = create_test_input(batch, height, width, 3)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
self.assertEqual(output.shape, (batch, 1, 1, num_classes))
def testFullyConvolutionalUnknownHeightWidth(self):
batch = 2
height, width = 65, 65
global_pool = False
inputs = create_test_input(batch, None, None, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
output, _ = self._resnet_small(inputs, None, global_pool=global_pool)
self.assertListEqual(output.get_shape().as_list(),
[batch, None, None, 32])
images = create_test_input(batch, height, width, 3)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(output, {inputs: images.eval()})
self.assertEqual(output.shape, (batch, 3, 3, 32))
def testAtrousFullyConvolutionalUnknownHeightWidth(self):
batch = 2
height, width = 65, 65
global_pool = False
output_stride = 8
inputs = create_test_input(batch, None, None, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
output, _ = self._resnet_small(inputs,
None,
global_pool=global_pool,
output_stride=output_stride)
self.assertListEqual(output.get_shape().as_list(),
[batch, None, None, 32])
images = create_test_input(batch, height, width, 3)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(output, {inputs: images.eval()})
self.assertEqual(output.shape, (batch, 9, 9, 32))
def testDepthMultiplier(self):
resnets = [
resnet_v1.resnet_v1_50, resnet_v1.resnet_v1_101,
resnet_v1.resnet_v1_152, resnet_v1.resnet_v1_200
]
resnet_names = [
'resnet_v1_50', 'resnet_v1_101', 'resnet_v1_152', 'resnet_v1_200'
]
for resnet, resnet_name in zip(resnets, resnet_names):
depth_multiplier = 0.25
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
scope_base = resnet_name + '_base'
_, end_points_base = resnet(
inputs,
num_classes,
global_pool=global_pool,
min_base_depth=1,
scope=scope_base)
scope_test = resnet_name + '_test'
_, end_points_test = resnet(
inputs,
num_classes,
global_pool=global_pool,
min_base_depth=1,
depth_multiplier=depth_multiplier,
scope=scope_test)
for block in ['block1', 'block2', 'block3', 'block4']:
block_name_base = scope_base + '/' + block
block_name_test = scope_test + '/' + block
self.assertTrue(block_name_base in end_points_base)
self.assertTrue(block_name_test in end_points_test)
self.assertEqual(
len(end_points_base[block_name_base].get_shape().as_list()), 4)
self.assertEqual(
len(end_points_test[block_name_test].get_shape().as_list()), 4)
self.assertListEqual(
end_points_base[block_name_base].get_shape().as_list()[:3],
end_points_test[block_name_test].get_shape().as_list()[:3])
self.assertEqual(
int(depth_multiplier *
end_points_base[block_name_base].get_shape().as_list()[3]),
end_points_test[block_name_test].get_shape().as_list()[3])
def testMinBaseDepth(self):
resnets = [
resnet_v1.resnet_v1_50, resnet_v1.resnet_v1_101,
resnet_v1.resnet_v1_152, resnet_v1.resnet_v1_200
]
resnet_names = [
'resnet_v1_50', 'resnet_v1_101', 'resnet_v1_152', 'resnet_v1_200'
]
for resnet, resnet_name in zip(resnets, resnet_names):
min_base_depth = 5
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = resnet(
inputs,
num_classes,
global_pool=global_pool,
min_base_depth=min_base_depth,
depth_multiplier=0,
scope=resnet_name)
for block in ['block1', 'block2', 'block3', 'block4']:
block_name = resnet_name + '/' + block
self.assertTrue(block_name in end_points)
self.assertEqual(
len(end_points[block_name].get_shape().as_list()), 4)
# The output depth is 4 times base_depth.
depth_expected = min_base_depth * 4
self.assertEqual(
end_points[block_name].get_shape().as_list()[3], depth_expected)
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/resnet_v1_test.py | resnet_v1_test.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Brings all inception models under one namespace."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint: disable=unused-import
from nets.inception_resnet_v2 import inception_resnet_v2
from nets.inception_resnet_v2 import inception_resnet_v2_arg_scope
from nets.inception_resnet_v2 import inception_resnet_v2_base
from nets.inception_v1 import inception_v1
from nets.inception_v1 import inception_v1_arg_scope
from nets.inception_v1 import inception_v1_base
from nets.inception_v2 import inception_v2
from nets.inception_v2 import inception_v2_arg_scope
from nets.inception_v2 import inception_v2_base
from nets.inception_v3 import inception_v3
from nets.inception_v3 import inception_v3_arg_scope
from nets.inception_v3 import inception_v3_base
from nets.inception_v4 import inception_v4
from nets.inception_v4 import inception_v4_arg_scope
from nets.inception_v4 import inception_v4_base
# pylint: enable=unused-import
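# Example (a minimal sketch, not part of the original file): with this module,
# callers import the single `inception` namespace and pick a variant by name.
# The placeholder inputs and class count are illustrative assumptions.
#
#   from nets import inception
#   with slim.arg_scope(inception.inception_v1_arg_scope()):
#     logits, end_points = inception.inception_v1(images, num_classes=1001)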
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception.py | inception.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.nets.resnet_v2."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import resnet_utils
from nets import resnet_v2
tf.disable_resource_variables()
def create_test_input(batch_size, height, width, channels):
"""Create test input tensor.
Args:
batch_size: The number of images per batch or `None` if unknown.
height: The height of each image or `None` if unknown.
width: The width of each image or `None` if unknown.
channels: The number of channels per image or `None` if unknown.
Returns:
Either a placeholder `Tensor` of dimension
[batch_size, height, width, channels] if any of the inputs are `None` or a
constant `Tensor` with the mesh grid values along the spatial dimensions.
"""
if None in [batch_size, height, width, channels]:
return tf.placeholder(tf.float32, (batch_size, height, width, channels))
else:
return tf.cast(
np.tile(
np.reshape(
np.reshape(np.arange(height), [height, 1]) +
np.reshape(np.arange(width), [1, width]),
[1, height, width, 1]), [batch_size, 1, 1, channels]),
dtype=tf.float32)
class ResnetUtilsTest(tf.test.TestCase):
def testSubsampleThreeByThree(self):
x = tf.reshape(tf.cast(tf.range(9), dtype=tf.float32), [1, 3, 3, 1])
x = resnet_utils.subsample(x, 2)
expected = tf.reshape(tf.constant([0, 2, 6, 8]), [1, 2, 2, 1])
with self.test_session():
self.assertAllClose(x.eval(), expected.eval())
def testSubsampleFourByFour(self):
x = tf.reshape(tf.cast(tf.range(16), dtype=tf.float32), [1, 4, 4, 1])
x = resnet_utils.subsample(x, 2)
expected = tf.reshape(tf.constant([0, 2, 8, 10]), [1, 2, 2, 1])
with self.test_session():
self.assertAllClose(x.eval(), expected.eval())
def testConv2DSameEven(self):
n, n2 = 4, 2
# Input image.
x = create_test_input(1, n, n, 1)
# Convolution kernel.
w = create_test_input(1, 3, 3, 1)
w = tf.reshape(w, [3, 3, 1, 1])
tf.get_variable('Conv/weights', initializer=w)
tf.get_variable('Conv/biases', initializer=tf.zeros([1]))
tf.get_variable_scope().reuse_variables()
y1 = slim.conv2d(x, 1, [3, 3], stride=1, scope='Conv')
y1_expected = tf.cast([[14, 28, 43, 26], [28, 48, 66, 37], [43, 66, 84, 46],
[26, 37, 46, 22]],
dtype=tf.float32)
y1_expected = tf.reshape(y1_expected, [1, n, n, 1])
y2 = resnet_utils.subsample(y1, 2)
y2_expected = tf.cast([[14, 43], [43, 84]], dtype=tf.float32)
y2_expected = tf.reshape(y2_expected, [1, n2, n2, 1])
y3 = resnet_utils.conv2d_same(x, 1, 3, stride=2, scope='Conv')
y3_expected = y2_expected
y4 = slim.conv2d(x, 1, [3, 3], stride=2, scope='Conv')
y4_expected = tf.cast([[48, 37], [37, 22]], dtype=tf.float32)
y4_expected = tf.reshape(y4_expected, [1, n2, n2, 1])
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
self.assertAllClose(y1.eval(), y1_expected.eval())
self.assertAllClose(y2.eval(), y2_expected.eval())
self.assertAllClose(y3.eval(), y3_expected.eval())
self.assertAllClose(y4.eval(), y4_expected.eval())
def testConv2DSameOdd(self):
n, n2 = 5, 3
# Input image.
x = create_test_input(1, n, n, 1)
# Convolution kernel.
w = create_test_input(1, 3, 3, 1)
w = tf.reshape(w, [3, 3, 1, 1])
tf.get_variable('Conv/weights', initializer=w)
tf.get_variable('Conv/biases', initializer=tf.zeros([1]))
tf.get_variable_scope().reuse_variables()
y1 = slim.conv2d(x, 1, [3, 3], stride=1, scope='Conv')
y1_expected = tf.cast(
[[14, 28, 43, 58, 34], [28, 48, 66, 84, 46], [43, 66, 84, 102, 55],
[58, 84, 102, 120, 64], [34, 46, 55, 64, 30]],
dtype=tf.float32)
y1_expected = tf.reshape(y1_expected, [1, n, n, 1])
y2 = resnet_utils.subsample(y1, 2)
y2_expected = tf.cast([[14, 43, 34], [43, 84, 55], [34, 55, 30]],
dtype=tf.float32)
y2_expected = tf.reshape(y2_expected, [1, n2, n2, 1])
y3 = resnet_utils.conv2d_same(x, 1, 3, stride=2, scope='Conv')
y3_expected = y2_expected
y4 = slim.conv2d(x, 1, [3, 3], stride=2, scope='Conv')
y4_expected = y2_expected
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
self.assertAllClose(y1.eval(), y1_expected.eval())
self.assertAllClose(y2.eval(), y2_expected.eval())
self.assertAllClose(y3.eval(), y3_expected.eval())
self.assertAllClose(y4.eval(), y4_expected.eval())
def _resnet_plain(self, inputs, blocks, output_stride=None, scope=None):
"""A plain ResNet without extra layers before or after the ResNet blocks."""
with tf.variable_scope(scope, values=[inputs]):
with slim.arg_scope([slim.conv2d], outputs_collections='end_points'):
net = resnet_utils.stack_blocks_dense(inputs, blocks, output_stride)
end_points = slim.utils.convert_collection_to_dict('end_points')
return net, end_points
def testEndPointsV2(self):
"""Test the end points of a tiny v2 bottleneck network."""
blocks = [
resnet_v2.resnet_v2_block(
'block1', base_depth=1, num_units=2, stride=2),
resnet_v2.resnet_v2_block(
'block2', base_depth=2, num_units=2, stride=1),
]
inputs = create_test_input(2, 32, 16, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_plain(inputs, blocks, scope='tiny')
expected = [
'tiny/block1/unit_1/bottleneck_v2/shortcut',
'tiny/block1/unit_1/bottleneck_v2/conv1',
'tiny/block1/unit_1/bottleneck_v2/conv2',
'tiny/block1/unit_1/bottleneck_v2/conv3',
'tiny/block1/unit_2/bottleneck_v2/conv1',
'tiny/block1/unit_2/bottleneck_v2/conv2',
'tiny/block1/unit_2/bottleneck_v2/conv3',
'tiny/block2/unit_1/bottleneck_v2/shortcut',
'tiny/block2/unit_1/bottleneck_v2/conv1',
'tiny/block2/unit_1/bottleneck_v2/conv2',
'tiny/block2/unit_1/bottleneck_v2/conv3',
'tiny/block2/unit_2/bottleneck_v2/conv1',
'tiny/block2/unit_2/bottleneck_v2/conv2',
'tiny/block2/unit_2/bottleneck_v2/conv3']
self.assertItemsEqual(expected, end_points.keys())
def _stack_blocks_nondense(self, net, blocks):
"""A simplified ResNet Block stacker without output stride control."""
for block in blocks:
with tf.variable_scope(block.scope, 'block', [net]):
for i, unit in enumerate(block.args):
with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
net = block.unit_fn(net, rate=1, **unit)
return net
def testAtrousValuesBottleneck(self):
"""Verify the values of dense feature extraction by atrous convolution.
Make sure that dense feature extraction by stack_blocks_dense() followed by
subsampling gives identical results to feature extraction at the nominal
network output stride using the simple self._stack_blocks_nondense() above.
"""
block = resnet_v2.resnet_v2_block
blocks = [
block('block1', base_depth=1, num_units=2, stride=2),
block('block2', base_depth=2, num_units=2, stride=2),
block('block3', base_depth=4, num_units=2, stride=2),
block('block4', base_depth=8, num_units=2, stride=1),
]
nominal_stride = 8
# Test both odd and even input dimensions.
height = 30
width = 31
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
with slim.arg_scope([slim.batch_norm], is_training=False):
for output_stride in [1, 2, 4, 8, None]:
with tf.Graph().as_default():
with self.test_session() as sess:
tf.set_random_seed(0)
inputs = create_test_input(1, height, width, 3)
# Dense feature extraction followed by subsampling.
output = resnet_utils.stack_blocks_dense(inputs,
blocks,
output_stride)
if output_stride is None:
factor = 1
else:
factor = nominal_stride // output_stride
output = resnet_utils.subsample(output, factor)
# Make the two networks use the same weights.
tf.get_variable_scope().reuse_variables()
# Feature extraction at the nominal network rate.
expected = self._stack_blocks_nondense(inputs, blocks)
sess.run(tf.global_variables_initializer())
output, expected = sess.run([output, expected])
self.assertAllClose(output, expected, atol=1e-4, rtol=1e-4)
class ResnetCompleteNetworkTest(tf.test.TestCase):
"""Tests with complete small ResNet v2 networks."""
def _resnet_small(self,
inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
include_root_block=True,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_small'):
"""A shallow and thin ResNet v2 for faster tests."""
block = resnet_v2.resnet_v2_block
blocks = [
block('block1', base_depth=1, num_units=3, stride=2),
block('block2', base_depth=2, num_units=3, stride=2),
block('block3', base_depth=4, num_units=3, stride=2),
block('block4', base_depth=8, num_units=2, stride=1),
]
return resnet_v2.resnet_v2(inputs, blocks, num_classes,
is_training=is_training,
global_pool=global_pool,
output_stride=output_stride,
include_root_block=include_root_block,
spatial_squeeze=spatial_squeeze,
reuse=reuse,
scope=scope)
def testClassificationEndPoints(self):
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
logits, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
spatial_squeeze=False,
scope='resnet')
self.assertTrue(logits.op.name.startswith('resnet/logits'))
self.assertListEqual(logits.get_shape().as_list(), [2, 1, 1, num_classes])
self.assertTrue('predictions' in end_points)
self.assertListEqual(end_points['predictions'].get_shape().as_list(),
[2, 1, 1, num_classes])
self.assertTrue('global_pool' in end_points)
self.assertListEqual(end_points['global_pool'].get_shape().as_list(),
[2, 1, 1, 32])
def testEndpointNames(self):
# Like ResnetUtilsTest.testEndPointsV2(), but for the public API.
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
scope='resnet')
expected = ['resnet/conv1']
for block in range(1, 5):
for unit in range(1, 4 if block < 4 else 3):
for conv in range(1, 4):
expected.append('resnet/block%d/unit_%d/bottleneck_v2/conv%d' %
(block, unit, conv))
expected.append('resnet/block%d/unit_%d/bottleneck_v2' % (block, unit))
expected.append('resnet/block%d/unit_1/bottleneck_v2/shortcut' % block)
expected.append('resnet/block%d' % block)
expected.extend(['global_pool', 'resnet/logits', 'resnet/spatial_squeeze',
'predictions'])
self.assertItemsEqual(end_points.keys(), expected)
def testClassificationShapes(self):
global_pool = True
num_classes = 10
inputs = create_test_input(2, 224, 224, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
scope='resnet')
endpoint_to_shape = {
'resnet/block1': [2, 28, 28, 4],
'resnet/block2': [2, 14, 14, 8],
'resnet/block3': [2, 7, 7, 16],
'resnet/block4': [2, 7, 7, 32]}
for endpoint in endpoint_to_shape:
shape = endpoint_to_shape[endpoint]
self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
def testFullyConvolutionalEndpointShapes(self):
global_pool = False
num_classes = 10
inputs = create_test_input(2, 321, 321, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
spatial_squeeze=False,
scope='resnet')
endpoint_to_shape = {
'resnet/block1': [2, 41, 41, 4],
'resnet/block2': [2, 21, 21, 8],
'resnet/block3': [2, 11, 11, 16],
'resnet/block4': [2, 11, 11, 32]}
for endpoint in endpoint_to_shape:
shape = endpoint_to_shape[endpoint]
self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
def testRootlessFullyConvolutionalEndpointShapes(self):
global_pool = False
num_classes = 10
inputs = create_test_input(2, 128, 128, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
include_root_block=False,
spatial_squeeze=False,
scope='resnet')
endpoint_to_shape = {
'resnet/block1': [2, 64, 64, 4],
'resnet/block2': [2, 32, 32, 8],
'resnet/block3': [2, 16, 16, 16],
'resnet/block4': [2, 16, 16, 32]}
for endpoint in endpoint_to_shape:
shape = endpoint_to_shape[endpoint]
self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
def testAtrousFullyConvolutionalEndpointShapes(self):
global_pool = False
num_classes = 10
output_stride = 8
inputs = create_test_input(2, 321, 321, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
_, end_points = self._resnet_small(inputs,
num_classes,
global_pool=global_pool,
output_stride=output_stride,
spatial_squeeze=False,
scope='resnet')
endpoint_to_shape = {
'resnet/block1': [2, 41, 41, 4],
'resnet/block2': [2, 41, 41, 8],
'resnet/block3': [2, 41, 41, 16],
'resnet/block4': [2, 41, 41, 32]}
for endpoint in endpoint_to_shape:
shape = endpoint_to_shape[endpoint]
self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
def testAtrousFullyConvolutionalValues(self):
"""Verify dense feature extraction with atrous convolution."""
nominal_stride = 32
for output_stride in [4, 8, 16, 32, None]:
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
with tf.Graph().as_default():
with self.test_session() as sess:
tf.set_random_seed(0)
inputs = create_test_input(2, 81, 81, 3)
# Dense feature extraction followed by subsampling.
output, _ = self._resnet_small(inputs, None,
is_training=False,
global_pool=False,
output_stride=output_stride)
if output_stride is None:
factor = 1
else:
factor = nominal_stride // output_stride
output = resnet_utils.subsample(output, factor)
# Make the two networks use the same weights.
tf.get_variable_scope().reuse_variables()
# Feature extraction at the nominal network rate.
expected, _ = self._resnet_small(inputs, None,
is_training=False,
global_pool=False)
sess.run(tf.global_variables_initializer())
self.assertAllClose(output.eval(), expected.eval(),
atol=1e-4, rtol=1e-4)
def testUnknownBatchSize(self):
batch = 2
height, width = 65, 65
global_pool = True
num_classes = 10
inputs = create_test_input(None, height, width, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
logits, _ = self._resnet_small(inputs, num_classes,
global_pool=global_pool,
spatial_squeeze=False,
scope='resnet')
self.assertTrue(logits.op.name.startswith('resnet/logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, 1, 1, num_classes])
images = create_test_input(batch, height, width, 3)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
self.assertEqual(output.shape, (batch, 1, 1, num_classes))
def testFullyConvolutionalUnknownHeightWidth(self):
batch = 2
height, width = 65, 65
global_pool = False
inputs = create_test_input(batch, None, None, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
output, _ = self._resnet_small(inputs, None,
global_pool=global_pool)
self.assertListEqual(output.get_shape().as_list(),
[batch, None, None, 32])
images = create_test_input(batch, height, width, 3)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(output, {inputs: images.eval()})
self.assertEqual(output.shape, (batch, 3, 3, 32))
def testAtrousFullyConvolutionalUnknownHeightWidth(self):
batch = 2
height, width = 65, 65
global_pool = False
output_stride = 8
inputs = create_test_input(batch, None, None, 3)
with slim.arg_scope(resnet_utils.resnet_arg_scope()):
output, _ = self._resnet_small(inputs,
None,
global_pool=global_pool,
output_stride=output_stride)
self.assertListEqual(output.get_shape().as_list(),
[batch, None, None, 32])
images = create_test_input(batch, height, width, 3)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(output, {inputs: images.eval()})
self.assertEqual(output.shape, (batch, 9, 9, 32))
if __name__ == '__main__':
tf.test.main()
# ==== End of slim/nets/resnet_v2_test.py (package: 123-object-detection) ====
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for nets.inception_v1."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception
class InceptionV1Test(tf.test.TestCase):
def testBuildClassificationNetwork(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v1(inputs, num_classes)
self.assertTrue(logits.op.name.startswith(
'InceptionV1/Logits/SpatialSqueeze'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Predictions' in end_points)
self.assertListEqual(end_points['Predictions'].get_shape().as_list(),
[batch_size, num_classes])
def testBuildPreLogitsNetwork(self):
batch_size = 5
height, width = 224, 224
num_classes = None
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = inception.inception_v1(inputs, num_classes)
self.assertTrue(net.op.name.startswith('InceptionV1/Logits/AvgPool'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 1, 1, 1024])
self.assertFalse('Logits' in end_points)
self.assertFalse('Predictions' in end_points)
def testBuildBaseNetwork(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
    mixed_5c, end_points = inception.inception_v1_base(inputs)
    self.assertTrue(mixed_5c.op.name.startswith('InceptionV1/Mixed_5c'))
    self.assertListEqual(mixed_5c.get_shape().as_list(),
                         [batch_size, 7, 7, 1024])
expected_endpoints = ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b',
'Mixed_3c', 'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c',
'Mixed_4d', 'Mixed_4e', 'Mixed_4f', 'MaxPool_5a_2x2',
'Mixed_5b', 'Mixed_5c']
self.assertItemsEqual(end_points.keys(), expected_endpoints)
def testBuildOnlyUptoFinalEndpoint(self):
batch_size = 5
height, width = 224, 224
endpoints = ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d',
'Mixed_4e', 'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b',
'Mixed_5c']
for index, endpoint in enumerate(endpoints):
with tf.Graph().as_default():
inputs = tf.random.uniform((batch_size, height, width, 3))
out_tensor, end_points = inception.inception_v1_base(
inputs, final_endpoint=endpoint)
self.assertTrue(out_tensor.op.name.startswith(
'InceptionV1/' + endpoint))
self.assertItemsEqual(endpoints[:index+1], end_points.keys())
def testBuildAndCheckAllEndPointsUptoMixed5c(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v1_base(inputs,
final_endpoint='Mixed_5c')
endpoints_shapes = {
'Conv2d_1a_7x7': [5, 112, 112, 64],
'MaxPool_2a_3x3': [5, 56, 56, 64],
'Conv2d_2b_1x1': [5, 56, 56, 64],
'Conv2d_2c_3x3': [5, 56, 56, 192],
'MaxPool_3a_3x3': [5, 28, 28, 192],
'Mixed_3b': [5, 28, 28, 256],
'Mixed_3c': [5, 28, 28, 480],
'MaxPool_4a_3x3': [5, 14, 14, 480],
'Mixed_4b': [5, 14, 14, 512],
'Mixed_4c': [5, 14, 14, 512],
'Mixed_4d': [5, 14, 14, 512],
'Mixed_4e': [5, 14, 14, 528],
'Mixed_4f': [5, 14, 14, 832],
'MaxPool_5a_2x2': [5, 7, 7, 832],
'Mixed_5b': [5, 7, 7, 832],
'Mixed_5c': [5, 7, 7, 1024]
}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testModelHasExpectedNumberOfParameters(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
with slim.arg_scope(inception.inception_v1_arg_scope()):
inception.inception_v1_base(inputs)
total_params, _ = slim.model_analyzer.analyze_vars(
slim.get_model_variables())
self.assertAlmostEqual(5607184, total_params)
def testHalfSizeImages(self):
batch_size = 5
height, width = 112, 112
inputs = tf.random.uniform((batch_size, height, width, 3))
mixed_5c, _ = inception.inception_v1_base(inputs)
self.assertTrue(mixed_5c.op.name.startswith('InceptionV1/Mixed_5c'))
self.assertListEqual(mixed_5c.get_shape().as_list(),
[batch_size, 4, 4, 1024])
def testBuildBaseNetworkWithoutRootBlock(self):
batch_size = 5
height, width = 28, 28
channels = 192
inputs = tf.random.uniform((batch_size, height, width, channels))
_, end_points = inception.inception_v1_base(
inputs, include_root_block=False)
endpoints_shapes = {
'Mixed_3b': [5, 28, 28, 256],
'Mixed_3c': [5, 28, 28, 480],
'MaxPool_4a_3x3': [5, 14, 14, 480],
'Mixed_4b': [5, 14, 14, 512],
'Mixed_4c': [5, 14, 14, 512],
'Mixed_4d': [5, 14, 14, 512],
'Mixed_4e': [5, 14, 14, 528],
'Mixed_4f': [5, 14, 14, 832],
'MaxPool_5a_2x2': [5, 7, 7, 832],
'Mixed_5b': [5, 7, 7, 832],
'Mixed_5c': [5, 7, 7, 1024]
}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 2
height, width = 224, 224
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(
tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = inception.inception_v1(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_5c']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 7, 7, 1024])
def testGlobalPoolUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 1
height, width = 250, 300
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(
tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = inception.inception_v1(inputs, num_classes,
global_pool=True)
self.assertTrue(logits.op.name.startswith('InceptionV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_5c']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 8, 10, 1024])
  def testUnknownBatchSize(self):
batch_size = 1
height, width = 224, 224
num_classes = 1000
inputs = tf.placeholder(tf.float32, (None, height, width, 3))
logits, _ = inception.inception_v1(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, num_classes])
images = tf.random.uniform((batch_size, height, width, 3))
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
      self.assertEqual(output.shape, (batch_size, num_classes))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = inception.inception_v1(eval_inputs, num_classes,
is_training=False)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
      self.assertEqual(output.shape, (batch_size,))
def testTrainEvalWithReuse(self):
train_batch_size = 5
eval_batch_size = 2
height, width = 224, 224
num_classes = 1000
train_inputs = tf.random.uniform((train_batch_size, height, width, 3))
inception.inception_v1(train_inputs, num_classes)
eval_inputs = tf.random.uniform((eval_batch_size, height, width, 3))
logits, _ = inception.inception_v1(eval_inputs, num_classes, reuse=True)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
      self.assertEqual(output.shape, (eval_batch_size,))
def testLogitsNotSqueezed(self):
num_classes = 25
images = tf.random.uniform([1, 224, 224, 3])
logits, _ = inception.inception_v1(images,
num_classes=num_classes,
spatial_squeeze=False)
with self.test_session() as sess:
tf.global_variables_initializer().run()
logits_out = sess.run(logits)
self.assertListEqual(list(logits_out.shape), [1, 1, 1, num_classes])
def testNoBatchNormScaleByDefault(self):
height, width = 224, 224
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(inception.inception_v1_arg_scope()):
inception.inception_v1(inputs, num_classes, is_training=False)
self.assertEqual(tf.global_variables('.*/BatchNorm/gamma:0$'), [])
def testBatchNormScale(self):
height, width = 224, 224
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(
inception.inception_v1_arg_scope(batch_norm_scale=True)):
inception.inception_v1(inputs, num_classes, is_training=False)
gamma_names = set(
v.op.name
for v in tf.global_variables('.*/BatchNorm/gamma:0$'))
self.assertGreater(len(gamma_names), 0)
for v in tf.global_variables('.*/BatchNorm/moving_mean:0$'):
self.assertIn(v.op.name[:-len('moving_mean')] + 'gamma', gamma_names)
if __name__ == '__main__':
tf.test.main()
# ==== End of slim/nets/inception_v1_test.py (package: 123-object-detection) ====
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for dcgan."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow.compat.v1 as tf
from nets import dcgan
class DCGANTest(tf.test.TestCase):
def test_generator_run(self):
tf.set_random_seed(1234)
noise = tf.random.normal([100, 64])
image, _ = dcgan.generator(noise)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
image.eval()
def test_generator_graph(self):
tf.set_random_seed(1234)
# Check graph construction for a number of image size/depths and batch
# sizes.
for i, batch_size in zip(xrange(3, 7), xrange(3, 8)):
tf.reset_default_graph()
final_size = 2 ** i
noise = tf.random.normal([batch_size, 64])
image, end_points = dcgan.generator(
noise,
depth=32,
final_size=final_size)
self.assertAllEqual([batch_size, final_size, final_size, 3],
image.shape.as_list())
expected_names = ['deconv%i' % j for j in xrange(1, i)] + ['logits']
self.assertSetEqual(set(expected_names), set(end_points.keys()))
# Check layer depths.
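      # Worked example (added note): with depth=32 and final_size=16 (i=4),
      # the expected depths are deconv1: 32 * 2**2 = 128, deconv2: 64 and
      # deconv3: 32, which is what the loop below asserts.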
for j in range(1, i):
layer = end_points['deconv%i' % j]
self.assertEqual(32 * 2**(i-j-1), layer.get_shape().as_list()[-1])
def test_generator_invalid_input(self):
wrong_dim_input = tf.zeros([5, 32, 32])
with self.assertRaises(ValueError):
dcgan.generator(wrong_dim_input)
correct_input = tf.zeros([3, 2])
with self.assertRaisesRegexp(ValueError, 'must be a power of 2'):
dcgan.generator(correct_input, final_size=30)
with self.assertRaisesRegexp(ValueError, 'must be greater than 8'):
dcgan.generator(correct_input, final_size=4)
def test_discriminator_run(self):
image = tf.random.uniform([5, 32, 32, 3], -1, 1)
output, _ = dcgan.discriminator(image)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output.eval()
def test_discriminator_graph(self):
# Check graph construction for a number of image size/depths and batch
# sizes.
for i, batch_size in zip(xrange(1, 6), xrange(3, 8)):
tf.reset_default_graph()
img_w = 2 ** i
image = tf.random.uniform([batch_size, img_w, img_w, 3], -1, 1)
output, end_points = dcgan.discriminator(
image,
depth=32)
self.assertAllEqual([batch_size, 1], output.get_shape().as_list())
expected_names = ['conv%i' % j for j in xrange(1, i+1)] + ['logits']
self.assertSetEqual(set(expected_names), set(end_points.keys()))
# Check layer depths.
for j in range(1, i+1):
layer = end_points['conv%i' % j]
self.assertEqual(32 * 2**(j-1), layer.get_shape().as_list()[-1])
def test_discriminator_invalid_input(self):
wrong_dim_img = tf.zeros([5, 32, 32])
with self.assertRaises(ValueError):
dcgan.discriminator(wrong_dim_img)
spatially_undefined_shape = tf.placeholder(
tf.float32, [5, 32, None, 3])
with self.assertRaises(ValueError):
dcgan.discriminator(spatially_undefined_shape)
not_square = tf.zeros([5, 32, 16, 3])
with self.assertRaisesRegexp(ValueError, 'not have equal width and height'):
dcgan.discriminator(not_square)
not_power_2 = tf.zeros([5, 30, 30, 3])
with self.assertRaisesRegexp(ValueError, 'not a power of 2'):
dcgan.discriminator(not_power_2)
if __name__ == '__main__':
tf.test.main()
# ==== End of slim/nets/dcgan_test.py (package: 123-object-detection) ====
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for nets.inception_v2."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception
class InceptionV2Test(tf.test.TestCase):
def testBuildClassificationNetwork(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v2(inputs, num_classes)
self.assertTrue(logits.op.name.startswith(
'InceptionV2/Logits/SpatialSqueeze'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Predictions' in end_points)
self.assertListEqual(end_points['Predictions'].get_shape().as_list(),
[batch_size, num_classes])
def testBuildPreLogitsNetwork(self):
batch_size = 5
height, width = 224, 224
num_classes = None
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = inception.inception_v2(inputs, num_classes)
self.assertTrue(net.op.name.startswith('InceptionV2/Logits/AvgPool'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 1, 1, 1024])
self.assertFalse('Logits' in end_points)
self.assertFalse('Predictions' in end_points)
def testBuildBaseNetwork(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
mixed_5c, end_points = inception.inception_v2_base(inputs)
self.assertTrue(mixed_5c.op.name.startswith('InceptionV2/Mixed_5c'))
self.assertListEqual(mixed_5c.get_shape().as_list(),
[batch_size, 7, 7, 1024])
expected_endpoints = ['Mixed_3b', 'Mixed_3c', 'Mixed_4a', 'Mixed_4b',
'Mixed_4c', 'Mixed_4d', 'Mixed_4e', 'Mixed_5a',
'Mixed_5b', 'Mixed_5c', 'Conv2d_1a_7x7',
'MaxPool_2a_3x3', 'Conv2d_2b_1x1', 'Conv2d_2c_3x3',
'MaxPool_3a_3x3']
self.assertItemsEqual(list(end_points.keys()), expected_endpoints)
def testBuildOnlyUptoFinalEndpoint(self):
batch_size = 5
height, width = 224, 224
endpoints = ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'Mixed_4a', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_5a', 'Mixed_5b', 'Mixed_5c']
for index, endpoint in enumerate(endpoints):
with tf.Graph().as_default():
inputs = tf.random.uniform((batch_size, height, width, 3))
out_tensor, end_points = inception.inception_v2_base(
inputs, final_endpoint=endpoint)
self.assertTrue(out_tensor.op.name.startswith(
'InceptionV2/' + endpoint))
self.assertItemsEqual(endpoints[:index + 1], list(end_points.keys()))
def testBuildAndCheckAllEndPointsUptoMixed5c(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v2_base(inputs,
final_endpoint='Mixed_5c')
endpoints_shapes = {'Mixed_3b': [batch_size, 28, 28, 256],
'Mixed_3c': [batch_size, 28, 28, 320],
'Mixed_4a': [batch_size, 14, 14, 576],
'Mixed_4b': [batch_size, 14, 14, 576],
'Mixed_4c': [batch_size, 14, 14, 576],
'Mixed_4d': [batch_size, 14, 14, 576],
'Mixed_4e': [batch_size, 14, 14, 576],
'Mixed_5a': [batch_size, 7, 7, 1024],
'Mixed_5b': [batch_size, 7, 7, 1024],
'Mixed_5c': [batch_size, 7, 7, 1024],
'Conv2d_1a_7x7': [batch_size, 112, 112, 64],
'MaxPool_2a_3x3': [batch_size, 56, 56, 64],
'Conv2d_2b_1x1': [batch_size, 56, 56, 64],
'Conv2d_2c_3x3': [batch_size, 56, 56, 192],
'MaxPool_3a_3x3': [batch_size, 28, 28, 192]}
self.assertItemsEqual(
list(endpoints_shapes.keys()), list(end_points.keys()))
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testModelHasExpectedNumberOfParameters(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
with slim.arg_scope(inception.inception_v2_arg_scope()):
inception.inception_v2_base(inputs)
total_params, _ = slim.model_analyzer.analyze_vars(
slim.get_model_variables())
self.assertAlmostEqual(10173112, total_params)
def testBuildEndPointsWithDepthMultiplierLessThanOne(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v2(inputs, num_classes)
endpoint_keys = [key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')]
_, end_points_with_multiplier = inception.inception_v2(
inputs, num_classes, scope='depth_multiplied_net',
depth_multiplier=0.5)
for key in endpoint_keys:
original_depth = end_points[key].get_shape().as_list()[3]
new_depth = end_points_with_multiplier[key].get_shape().as_list()[3]
self.assertEqual(0.5 * original_depth, new_depth)
def testBuildEndPointsWithDepthMultiplierGreaterThanOne(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v2(inputs, num_classes)
endpoint_keys = [key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')]
_, end_points_with_multiplier = inception.inception_v2(
inputs, num_classes, scope='depth_multiplied_net',
depth_multiplier=2.0)
for key in endpoint_keys:
original_depth = end_points[key].get_shape().as_list()[3]
new_depth = end_points_with_multiplier[key].get_shape().as_list()[3]
self.assertEqual(2.0 * original_depth, new_depth)
def testRaiseValueErrorWithInvalidDepthMultiplier(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
with self.assertRaises(ValueError):
_ = inception.inception_v2(inputs, num_classes, depth_multiplier=-0.1)
with self.assertRaises(ValueError):
_ = inception.inception_v2(inputs, num_classes, depth_multiplier=0.0)
def testBuildEndPointsWithUseSeparableConvolutionFalse(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v2_base(inputs)
endpoint_keys = [
key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')
]
_, end_points_with_replacement = inception.inception_v2_base(
inputs, use_separable_conv=False)
# The endpoint shapes must be equal to the original shape even when the
# separable convolution is replaced with a normal convolution.
for key in endpoint_keys:
original_shape = end_points[key].get_shape().as_list()
self.assertTrue(key in end_points_with_replacement)
new_shape = end_points_with_replacement[key].get_shape().as_list()
self.assertListEqual(original_shape, new_shape)
def testBuildEndPointsNCHWDataFormat(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v2_base(inputs)
endpoint_keys = [
key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')
]
inputs_in_nchw = tf.random.uniform((batch_size, 3, height, width))
_, end_points_with_replacement = inception.inception_v2_base(
inputs_in_nchw, use_separable_conv=False, data_format='NCHW')
# With the 'NCHW' data format, all endpoint activations have a transposed
# shape from the original shape with the 'NHWC' layout.
for key in endpoint_keys:
transposed_original_shape = tf.transpose(
a=end_points[key], perm=[0, 3, 1, 2]).get_shape().as_list()
self.assertTrue(key in end_points_with_replacement)
new_shape = end_points_with_replacement[key].get_shape().as_list()
self.assertListEqual(transposed_original_shape, new_shape)
def testBuildErrorsForDataFormats(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
# 'NCWH' data format is not supported.
with self.assertRaises(ValueError):
_ = inception.inception_v2_base(inputs, data_format='NCWH')
# 'NCHW' data format is not supported for separable convolution.
with self.assertRaises(ValueError):
_ = inception.inception_v2_base(inputs, data_format='NCHW')
def testHalfSizeImages(self):
batch_size = 5
height, width = 112, 112
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v2(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_5c']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 4, 4, 1024])
def testBuildBaseNetworkWithoutRootBlock(self):
batch_size = 5
height, width = 28, 28
channels = 192
inputs = tf.random.uniform((batch_size, height, width, channels))
_, end_points = inception.inception_v2_base(
inputs, include_root_block=False)
endpoints_shapes = {
'Mixed_3b': [batch_size, 28, 28, 256],
'Mixed_3c': [batch_size, 28, 28, 320],
'Mixed_4a': [batch_size, 14, 14, 576],
'Mixed_4b': [batch_size, 14, 14, 576],
'Mixed_4c': [batch_size, 14, 14, 576],
'Mixed_4d': [batch_size, 14, 14, 576],
'Mixed_4e': [batch_size, 14, 14, 576],
'Mixed_5a': [batch_size, 7, 7, 1024],
'Mixed_5b': [batch_size, 7, 7, 1024],
'Mixed_5c': [batch_size, 7, 7, 1024]
}
self.assertItemsEqual(
list(endpoints_shapes.keys()), list(end_points.keys()))
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 2
height, width = 224, 224
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(
tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = inception.inception_v2(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_5c']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 7, 7, 1024])
def testGlobalPoolUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 1
height, width = 250, 300
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(
tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = inception.inception_v2(inputs, num_classes,
global_pool=True)
self.assertTrue(logits.op.name.startswith('InceptionV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_5c']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 8, 10, 1024])
  def testUnknownBatchSize(self):
batch_size = 1
height, width = 224, 224
num_classes = 1000
inputs = tf.placeholder(tf.float32, (None, height, width, 3))
logits, _ = inception.inception_v2(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV2/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, num_classes])
images = tf.random.uniform((batch_size, height, width, 3))
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
      self.assertEqual(output.shape, (batch_size, num_classes))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = inception.inception_v2(eval_inputs, num_classes,
is_training=False)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
      self.assertEqual(output.shape, (batch_size,))
def testTrainEvalWithReuse(self):
train_batch_size = 5
eval_batch_size = 2
height, width = 150, 150
num_classes = 1000
train_inputs = tf.random.uniform((train_batch_size, height, width, 3))
inception.inception_v2(train_inputs, num_classes)
eval_inputs = tf.random.uniform((eval_batch_size, height, width, 3))
logits, _ = inception.inception_v2(eval_inputs, num_classes, reuse=True)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
      self.assertEqual(output.shape, (eval_batch_size,))
def testLogitsNotSqueezed(self):
num_classes = 25
images = tf.random.uniform([1, 224, 224, 3])
logits, _ = inception.inception_v2(images,
num_classes=num_classes,
spatial_squeeze=False)
with self.test_session() as sess:
tf.global_variables_initializer().run()
logits_out = sess.run(logits)
self.assertListEqual(list(logits_out.shape), [1, 1, 1, num_classes])
def testNoBatchNormScaleByDefault(self):
height, width = 224, 224
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(inception.inception_v2_arg_scope()):
inception.inception_v2(inputs, num_classes, is_training=False)
self.assertEqual(tf.global_variables('.*/BatchNorm/gamma:0$'), [])
def testBatchNormScale(self):
height, width = 224, 224
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(
inception.inception_v2_arg_scope(batch_norm_scale=True)):
inception.inception_v2(inputs, num_classes, is_training=False)
gamma_names = set(
v.op.name
for v in tf.global_variables('.*/BatchNorm/gamma:0$'))
self.assertGreater(len(gamma_names), 0)
for v in tf.global_variables('.*/BatchNorm/moving_mean:0$'):
self.assertIn(v.op.name[:-len('moving_mean')] + 'gamma', gamma_names)
if __name__ == '__main__':
tf.test.main()
# ==== End of slim/nets/inception_v2_test.py (package: 123-object-detection) ====
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition for inception v1 classification network."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception_utils
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def inception_v1_base(inputs,
final_endpoint='Mixed_5c',
include_root_block=True,
scope='InceptionV1'):
"""Defines the Inception V1 base architecture.
This architecture is defined in:
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed,
Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich.
http://arxiv.org/pdf/1409.4842v1.pdf.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b', 'Mixed_5c']. If
include_root_block is False, ['Conv2d_1a_7x7', 'MaxPool_2a_3x3',
'Conv2d_2b_1x1', 'Conv2d_2c_3x3', 'MaxPool_3a_3x3'] will not be available.
include_root_block: If True, include the convolution and max-pooling layers
before the inception modules. If False, excludes those layers.
scope: Optional variable_scope.
  Returns:
    net: the output activation of the requested final endpoint.
    end_points: a dictionary from components of the network to the
      corresponding activation.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values.
"""
end_points = {}
with tf.variable_scope(scope, 'InceptionV1', [inputs]):
with slim.arg_scope(
[slim.conv2d, slim.fully_connected],
weights_initializer=trunc_normal(0.01)):
with slim.arg_scope([slim.conv2d, slim.max_pool2d],
stride=1, padding='SAME'):
net = inputs
if include_root_block:
end_point = 'Conv2d_1a_7x7'
net = slim.conv2d(inputs, 64, [7, 7], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'MaxPool_2a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'Conv2d_2b_1x1'
net = slim.conv2d(net, 64, [1, 1], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'Conv2d_2c_3x3'
net = slim.conv2d(net, 192, [3, 3], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'MaxPool_3a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'Mixed_3b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 96, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 128, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 16, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 32, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 32, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_3c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 192, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 32, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'MaxPool_4a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 96, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 208, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 16, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 48, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 112, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 224, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 24, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 64, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4d'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 256, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 24, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 64, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4e'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 112, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 144, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 288, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 32, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 64, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4f'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 256, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 320, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 32, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 128, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 128, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'MaxPool_5a_2x2'
net = slim.max_pool2d(net, [2, 2], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_5b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 256, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 320, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 32, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 128, [3, 3], scope='Conv2d_0a_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 128, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_5c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 384, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 384, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 128, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 128, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
def inception_v1(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.8,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='InceptionV1',
global_pool=False):
"""Defines the Inception V1 architecture.
This architecture is defined in:
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed,
Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich.
http://arxiv.org/pdf/1409.4842v1.pdf.
The default image size used to train this network is 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
is_training: whether is training or not.
dropout_keep_prob: the percentage of activation values that are retained.
prediction_fn: a function to get predictions out of logits.
spatial_squeeze: if True, logits is of shape [B, C], if false logits is of
shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
global_pool: Optional boolean flag to control the avgpooling before the
logits layer. If false or unset, pooling is done with a fixed window
that reduces default-sized inputs to 1x1, while larger inputs lead to
larger outputs. If true, any input size is pooled down to 1x1.
Returns:
net: a Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
"""
# Final pooling and prediction
with tf.variable_scope(
scope, 'InceptionV1', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v1_base(inputs, scope=scope)
with tf.variable_scope('Logits'):
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
else:
# Pooling with a fixed kernel size.
net = slim.avg_pool2d(net, [7, 7], stride=1, scope='AvgPool_0a_7x7')
end_points['AvgPool_0a_7x7'] = net
if not num_classes:
return net, end_points
net = slim.dropout(net, dropout_keep_prob, scope='Dropout_0b')
logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='Conv2d_0c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
inception_v1.default_image_size = 224
inception_v1_arg_scope = inception_utils.inception_arg_scope
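# ---------------------------------------------------------------------------
# Usage sketch (added example, not part of the original file). It shows one
# way to wire inception_v1 into a classification graph using only the
# functions defined above; the placeholder shape and class count are
# illustrative assumptions.
# ---------------------------------------------------------------------------
def _example_build_inception_v1(num_classes=1001, is_training=False):
  """Builds Inception V1 logits for 224x224 RGB inputs (illustrative only)."""
  images = tf.placeholder(
      tf.float32, [None, inception_v1.default_image_size,
                   inception_v1.default_image_size, 3])
  with slim.arg_scope(inception_v1_arg_scope()):
    logits, end_points = inception_v1(
        images, num_classes=num_classes, is_training=is_training)
  return images, logits, end_points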
# ==== End of slim/nets/inception_v1.py (package: 123-object-detection) ====
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Validate mobilenet_v1 with options for quantization."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.contrib import quantize as contrib_quantize
from datasets import dataset_factory
from nets import mobilenet_v1
from preprocessing import preprocessing_factory
flags = tf.app.flags
flags.DEFINE_string('master', '', 'Session master')
flags.DEFINE_integer('batch_size', 250, 'Batch size')
flags.DEFINE_integer('num_classes', 1001, 'Number of classes to distinguish')
flags.DEFINE_integer('num_examples', 50000, 'Number of examples to evaluate')
flags.DEFINE_integer('image_size', 224, 'Input image resolution')
flags.DEFINE_float('depth_multiplier', 1.0, 'Depth multiplier for mobilenet')
flags.DEFINE_bool('quantize', False, 'Quantize training')
flags.DEFINE_string('checkpoint_dir', '', 'The directory for checkpoints')
flags.DEFINE_string('eval_dir', '', 'Directory for writing eval event logs')
flags.DEFINE_string('dataset_dir', '', 'Location of dataset')
FLAGS = flags.FLAGS
def imagenet_input(is_training):
"""Data reader for imagenet.
Reads in imagenet data and performs pre-processing on the images.
Args:
is_training: bool specifying if train or validation dataset is needed.
Returns:
A batch of images and labels.
"""
if is_training:
dataset = dataset_factory.get_dataset('imagenet', 'train',
FLAGS.dataset_dir)
else:
dataset = dataset_factory.get_dataset('imagenet', 'validation',
FLAGS.dataset_dir)
provider = slim.dataset_data_provider.DatasetDataProvider(
dataset,
shuffle=is_training,
common_queue_capacity=2 * FLAGS.batch_size,
common_queue_min=FLAGS.batch_size)
[image, label] = provider.get(['image', 'label'])
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
'mobilenet_v1', is_training=is_training)
image = image_preprocessing_fn(image, FLAGS.image_size, FLAGS.image_size)
images, labels = tf.train.batch(
tensors=[image, label],
batch_size=FLAGS.batch_size,
num_threads=4,
capacity=5 * FLAGS.batch_size)
return images, labels
def metrics(logits, labels):
"""Specify the metrics for eval.
Args:
logits: Logits output from the graph.
labels: Ground truth labels for inputs.
Returns:
Eval Op for the graph.
"""
labels = tf.squeeze(labels)
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'Accuracy':
tf.metrics.accuracy(
tf.argmax(input=logits, axis=1), labels),
'Recall_5':
tf.metrics.recall_at_k(labels, logits, 5),
})
  for name, value in names_to_values.items():
slim.summaries.add_scalar_summary(
value, name, prefix='eval', print_summary=True)
return names_to_updates.values()
def build_model():
"""Build the mobilenet_v1 model for evaluation.
Returns:
g: graph with rewrites after insertion of quantization ops and batch norm
folding.
eval_ops: eval ops for inference.
variables_to_restore: List of variables to restore from checkpoint.
"""
g = tf.Graph()
with g.as_default():
inputs, labels = imagenet_input(is_training=False)
scope = mobilenet_v1.mobilenet_v1_arg_scope(
is_training=False, weight_decay=0.0)
with slim.arg_scope(scope):
logits, _ = mobilenet_v1.mobilenet_v1(
inputs,
is_training=False,
depth_multiplier=FLAGS.depth_multiplier,
num_classes=FLAGS.num_classes)
if FLAGS.quantize:
contrib_quantize.create_eval_graph()
eval_ops = metrics(logits, labels)
return g, eval_ops
def eval_model():
"""Evaluates mobilenet_v1."""
g, eval_ops = build_model()
with g.as_default():
num_batches = math.ceil(FLAGS.num_examples / float(FLAGS.batch_size))
slim.evaluation.evaluate_once(
FLAGS.master,
FLAGS.checkpoint_dir,
logdir=FLAGS.eval_dir,
num_evals=num_batches,
eval_op=eval_ops)
def main(unused_arg):
eval_model()
if __name__ == '__main__':
tf.app.run(main)
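# Example invocation (added note; paths and values are illustrative, only the
# flags defined above are used):
#
#   python mobilenet_v1_eval.py \
#     --dataset_dir=/tmp/imagenet-tfrecords \
#     --checkpoint_dir=/tmp/mobilenet_v1_train \
#     --eval_dir=/tmp/mobilenet_v1_eval \
#     --depth_multiplier=1.0 \
#     --quantize=false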
# ==== End of slim/nets/mobilenet_v1_eval.py (package: 123-object-detection) ====
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition of the Inception V4 architecture.
As described in http://arxiv.org/abs/1602.07261.
Inception-v4, Inception-ResNet and the Impact of Residual Connections
on Learning
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception_utils
def block_inception_a(inputs, scope=None, reuse=None):
"""Builds Inception-A block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockInceptionA', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 96, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 96, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(inputs, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(inputs, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 96, [1, 1], scope='Conv2d_0b_1x1')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
def block_reduction_a(inputs, scope=None, reuse=None):
"""Builds Reduction-A block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockReductionA', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 384, [3, 3], stride=2, padding='VALID',
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 224, [3, 3], scope='Conv2d_0b_3x3')
branch_1 = slim.conv2d(branch_1, 256, [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(inputs, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
def block_inception_b(inputs, scope=None, reuse=None):
"""Builds Inception-B block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockInceptionB', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 384, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 224, [1, 7], scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, 256, [7, 1], scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(inputs, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 192, [7, 1], scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, 224, [1, 7], scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, 224, [7, 1], scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, 256, [1, 7], scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(inputs, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 128, [1, 1], scope='Conv2d_0b_1x1')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
def block_reduction_b(inputs, scope=None, reuse=None):
"""Builds Reduction-B block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockReductionB', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, 192, [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 256, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 256, [1, 7], scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, 320, [7, 1], scope='Conv2d_0c_7x1')
branch_1 = slim.conv2d(branch_1, 320, [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(inputs, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
def block_inception_c(inputs, scope=None, reuse=None):
"""Builds Inception-C block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockInceptionC', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 256, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 384, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = tf.concat(axis=3, values=[
slim.conv2d(branch_1, 256, [1, 3], scope='Conv2d_0b_1x3'),
slim.conv2d(branch_1, 256, [3, 1], scope='Conv2d_0c_3x1')])
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(inputs, 384, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 448, [3, 1], scope='Conv2d_0b_3x1')
branch_2 = slim.conv2d(branch_2, 512, [1, 3], scope='Conv2d_0c_1x3')
branch_2 = tf.concat(axis=3, values=[
slim.conv2d(branch_2, 256, [1, 3], scope='Conv2d_0d_1x3'),
slim.conv2d(branch_2, 256, [3, 1], scope='Conv2d_0e_3x1')])
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(inputs, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 256, [1, 1], scope='Conv2d_0b_1x1')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
def inception_v4_base(inputs, final_endpoint='Mixed_7d', scope=None):
"""Creates the Inception V4 network up to the given final endpoint.
Args:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
final_endpoint: specifies the endpoint to construct the network up to.
It can be one of [ 'Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'Mixed_3a', 'Mixed_4a', 'Mixed_5a', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_5e', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d', 'Mixed_6e',
'Mixed_6f', 'Mixed_6g', 'Mixed_6h', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c',
'Mixed_7d']
scope: Optional variable_scope.
Returns:
logits: the logits outputs of the model.
end_points: the set of end_points from the inception model.
Raises:
    ValueError: if final_endpoint is not set to one of the predefined values.
"""
end_points = {}
def add_and_check_final(name, net):
end_points[name] = net
return name == final_endpoint
with tf.variable_scope(scope, 'InceptionV4', [inputs]):
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# 299 x 299 x 3
net = slim.conv2d(inputs, 32, [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
if add_and_check_final('Conv2d_1a_3x3', net): return net, end_points
# 149 x 149 x 32
net = slim.conv2d(net, 32, [3, 3], padding='VALID',
scope='Conv2d_2a_3x3')
if add_and_check_final('Conv2d_2a_3x3', net): return net, end_points
# 147 x 147 x 32
net = slim.conv2d(net, 64, [3, 3], scope='Conv2d_2b_3x3')
if add_and_check_final('Conv2d_2b_3x3', net): return net, end_points
# 147 x 147 x 64
with tf.variable_scope('Mixed_3a'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
scope='MaxPool_0a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 96, [3, 3], stride=2, padding='VALID',
scope='Conv2d_0a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1])
if add_and_check_final('Mixed_3a', net): return net, end_points
# 73 x 73 x 160
with tf.variable_scope('Mixed_4a'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, 96, [3, 3], padding='VALID',
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 64, [1, 7], scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, 64, [7, 1], scope='Conv2d_0c_7x1')
branch_1 = slim.conv2d(branch_1, 96, [3, 3], padding='VALID',
scope='Conv2d_1a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1])
if add_and_check_final('Mixed_4a', net): return net, end_points
# 71 x 71 x 192
with tf.variable_scope('Mixed_5a'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [3, 3], stride=2, padding='VALID',
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1])
if add_and_check_final('Mixed_5a', net): return net, end_points
# 35 x 35 x 384
# 4 x Inception-A blocks
for idx in range(4):
block_scope = 'Mixed_5' + chr(ord('b') + idx)
net = block_inception_a(net, block_scope)
if add_and_check_final(block_scope, net): return net, end_points
# 35 x 35 x 384
# Reduction-A block
net = block_reduction_a(net, 'Mixed_6a')
if add_and_check_final('Mixed_6a', net): return net, end_points
# 17 x 17 x 1024
# 7 x Inception-B blocks
for idx in range(7):
block_scope = 'Mixed_6' + chr(ord('b') + idx)
net = block_inception_b(net, block_scope)
if add_and_check_final(block_scope, net): return net, end_points
# 17 x 17 x 1024
# Reduction-B block
net = block_reduction_b(net, 'Mixed_7a')
if add_and_check_final('Mixed_7a', net): return net, end_points
# 8 x 8 x 1536
# 3 x Inception-C blocks
for idx in range(3):
block_scope = 'Mixed_7' + chr(ord('b') + idx)
net = block_inception_c(net, block_scope)
if add_and_check_final(block_scope, net): return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
def inception_v4(inputs, num_classes=1001, is_training=True,
dropout_keep_prob=0.8,
reuse=None,
scope='InceptionV4',
create_aux_logits=True):
"""Creates the Inception V4 model.
Args:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
is_training: whether is training or not.
dropout_keep_prob: float, the fraction to keep before final layer.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
create_aux_logits: Whether to include the auxiliary logits.
Returns:
net: a Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped input to the logits layer
if num_classes is 0 or None.
end_points: the set of end_points from the inception model.
"""
end_points = {}
with tf.variable_scope(
scope, 'InceptionV4', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v4_base(inputs, scope=scope)
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# Auxiliary Head logits
if create_aux_logits and num_classes:
with tf.variable_scope('AuxLogits'):
# 17 x 17 x 1024
aux_logits = end_points['Mixed_6h']
aux_logits = slim.avg_pool2d(aux_logits, [5, 5], stride=3,
padding='VALID',
scope='AvgPool_1a_5x5')
aux_logits = slim.conv2d(aux_logits, 128, [1, 1],
scope='Conv2d_1b_1x1')
aux_logits = slim.conv2d(aux_logits, 768,
aux_logits.get_shape()[1:3],
padding='VALID', scope='Conv2d_2a')
aux_logits = slim.flatten(aux_logits)
aux_logits = slim.fully_connected(aux_logits, num_classes,
activation_fn=None,
scope='Aux_logits')
end_points['AuxLogits'] = aux_logits
# Final pooling and prediction
# TODO(sguada,arnoegw): Consider adding a parameter global_pool which
# can be set to False to disable pooling here (as in resnet_*()).
with tf.variable_scope('Logits'):
# 8 x 8 x 1536
kernel_size = net.get_shape()[1:3]
if kernel_size.is_fully_defined():
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a')
else:
net = tf.reduce_mean(
input_tensor=net,
axis=[1, 2],
keepdims=True,
name='global_pool')
end_points['global_pool'] = net
if not num_classes:
return net, end_points
# 1 x 1 x 1536
net = slim.dropout(net, dropout_keep_prob, scope='Dropout_1b')
net = slim.flatten(net, scope='PreLogitsFlatten')
end_points['PreLogitsFlatten'] = net
# 1536
logits = slim.fully_connected(net, num_classes, activation_fn=None,
scope='Logits')
end_points['Logits'] = logits
end_points['Predictions'] = tf.nn.softmax(logits, name='Predictions')
return logits, end_points
inception_v4.default_image_size = 299
inception_v4_arg_scope = inception_utils.inception_arg_scope
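# Example usage (a minimal sketch; `images` is assumed to be a
# [batch, 299, 299, 3] float tensor supplied by the caller):
#
#   with slim.arg_scope(inception_v4_arg_scope()):
#     logits, end_points = inception_v4(images, num_classes=1001,
#                                       is_training=False)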
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_v4.py | inception_v4.py |
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.inception."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
from nets import nets_factory
class NetworksTest(tf.test.TestCase):
def testGetNetworkFnFirstHalf(self):
batch_size = 5
num_classes = 1000
for net in list(nets_factory.networks_map.keys())[:10]:
with tf.Graph().as_default() as g, self.test_session(g):
net_fn = nets_factory.get_network_fn(net, num_classes=num_classes)
# Most networks use 224 as their default_image_size
image_size = getattr(net_fn, 'default_image_size', 224)
if net not in ['i3d', 's3dg']:
inputs = tf.random.uniform((batch_size, image_size, image_size, 3))
logits, end_points = net_fn(inputs)
self.assertTrue(isinstance(logits, tf.Tensor))
self.assertTrue(isinstance(end_points, dict))
self.assertEqual(logits.get_shape().as_list()[0], batch_size)
self.assertEqual(logits.get_shape().as_list()[-1], num_classes)
def testGetNetworkFnSecondHalf(self):
batch_size = 5
num_classes = 1000
for net in list(nets_factory.networks_map.keys())[10:]:
with tf.Graph().as_default() as g, self.test_session(g):
net_fn = nets_factory.get_network_fn(net, num_classes=num_classes)
# Most networks use 224 as their default_image_size
image_size = getattr(net_fn, 'default_image_size', 224)
if net not in ['i3d', 's3dg']:
inputs = tf.random.uniform((batch_size, image_size, image_size, 3))
logits, end_points = net_fn(inputs)
self.assertTrue(isinstance(logits, tf.Tensor))
self.assertTrue(isinstance(end_points, dict))
self.assertEqual(logits.get_shape().as_list()[0], batch_size)
self.assertEqual(logits.get_shape().as_list()[-1], num_classes)
def testGetNetworkFnVideoModels(self):
batch_size = 5
num_classes = 400
for net in ['i3d', 's3dg']:
with tf.Graph().as_default() as g, self.test_session(g):
net_fn = nets_factory.get_network_fn(net, num_classes=num_classes)
# Most networks use 224 as their default_image_size
image_size = getattr(net_fn, 'default_image_size', 224) // 2
inputs = tf.random.uniform((batch_size, 10, image_size, image_size, 3))
logits, end_points = net_fn(inputs)
self.assertTrue(isinstance(logits, tf.Tensor))
self.assertTrue(isinstance(end_points, dict))
self.assertEqual(logits.get_shape().as_list()[0], batch_size)
self.assertEqual(logits.get_shape().as_list()[-1], num_classes)
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/nets_factory_test.py | nets_factory_test.py |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Export quantized tflite model from a trained checkpoint."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
from absl import app
from absl import flags
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
from nets import nets_factory
from preprocessing import preprocessing_factory
flags.DEFINE_string("model_name", None,
"The name of the architecture to quantize.")
flags.DEFINE_string("checkpoint_path", None, "Path to the training checkpoint.")
flags.DEFINE_string("dataset_name", "imagenet2012",
"Name of the dataset to use for quantization calibration.")
flags.DEFINE_string("dataset_dir", None, "Dataset location.")
flags.DEFINE_string(
"dataset_split", "train",
"The dataset split (train, validation etc.) to use for calibration.")
flags.DEFINE_string("output_tflite", None, "Path to output tflite file.")
flags.DEFINE_boolean(
"use_model_specific_preprocessing", False,
"When true, uses the preprocessing corresponding to the model as specified "
"in preprocessing factory.")
flags.DEFINE_boolean("enable_ema", True,
"Load exponential moving average version of variables.")
flags.DEFINE_integer(
"num_steps", 1000,
"Number of post-training quantization calibration steps to run.")
flags.DEFINE_integer("image_size", 224, "Size of the input image.")
flags.DEFINE_integer("num_classes", 1001,
"Number of output classes for the model.")
FLAGS = flags.FLAGS
# Mean and standard deviation used for normalizing the image tensor.
_MEAN_RGB = 127.5
_STD_RGB = 127.5
def _preprocess_for_quantization(image_data, image_size, crop_padding=32):
"""Crops to center of image with padding then scales, normalizes image_size.
Args:
image_data: A 3D Tensor representing the RGB image data. Image can be of
arbitrary height and width.
image_size: image height/width dimension.
crop_padding: the padding size to use when centering the crop.
Returns:
A decoded and cropped image Tensor. Image is normalized to [-1,1].
"""
shape = tf.shape(image_data)
image_height = shape[0]
image_width = shape[1]
padded_center_crop_size = tf.cast(
(image_size * 1.0 / (image_size + crop_padding)) *
tf.cast(tf.minimum(image_height, image_width), tf.float32), tf.int32)
offset_height = ((image_height - padded_center_crop_size) + 1) // 2
offset_width = ((image_width - padded_center_crop_size) + 1) // 2
image = tf.image.crop_to_bounding_box(
image_data,
offset_height=offset_height,
offset_width=offset_width,
target_height=padded_center_crop_size,
target_width=padded_center_crop_size)
image = tf.image.resize([image], [image_size, image_size],
method=tf.image.ResizeMethod.BICUBIC)[0]
image = tf.cast(image, tf.float32)
image -= tf.constant(_MEAN_RGB)
image /= tf.constant(_STD_RGB)
return image
def restore_model(sess, checkpoint_path, enable_ema=True):
"""Restore variables from the checkpoint into the provided session.
Args:
sess: A tensorflow session where the checkpoint will be loaded.
checkpoint_path: Path to the trained checkpoint.
enable_ema: (optional) Whether to load the exponential moving average (ema)
version of the tensorflow variables. Defaults to True.
"""
if enable_ema:
ema = tf.train.ExponentialMovingAverage(decay=0.0)
ema_vars = tf.trainable_variables() + tf.get_collection("moving_vars")
for v in tf.global_variables():
if "moving_mean" in v.name or "moving_variance" in v.name:
ema_vars.append(v)
ema_vars = list(set(ema_vars))
var_dict = ema.variables_to_restore(ema_vars)
else:
var_dict = None
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(var_dict, max_to_keep=1)
saver.restore(sess, checkpoint_path)
def _representative_dataset_gen():
"""Gets a python generator of numpy arrays for the given dataset."""
image_size = FLAGS.image_size
dataset = tfds.builder(FLAGS.dataset_name, data_dir=FLAGS.dataset_dir)
dataset.download_and_prepare()
data = dataset.as_dataset()[FLAGS.dataset_split]
iterator = tf.data.make_one_shot_iterator(data)
if FLAGS.use_model_specific_preprocessing:
preprocess_fn = functools.partial(
preprocessing_factory.get_preprocessing(name=FLAGS.model_name),
output_height=image_size,
output_width=image_size)
else:
preprocess_fn = functools.partial(
_preprocess_for_quantization, image_size=image_size)
features = iterator.get_next()
image = features["image"]
image = preprocess_fn(image)
image = tf.reshape(image, [1, image_size, image_size, 3])
for _ in range(FLAGS.num_steps):
yield [image.eval()]
def main(_):
with tf.Graph().as_default(), tf.Session() as sess:
network_fn = nets_factory.get_network_fn(
FLAGS.model_name, num_classes=FLAGS.num_classes, is_training=False)
image_size = FLAGS.image_size
images = tf.placeholder(
tf.float32, shape=(1, image_size, image_size, 3), name="images")
logits, _ = network_fn(images)
output_tensor = tf.nn.softmax(logits)
restore_model(sess, FLAGS.checkpoint_path, enable_ema=FLAGS.enable_ema)
converter = tf.lite.TFLiteConverter.from_session(sess, [images],
[output_tensor])
converter.representative_dataset = tf.lite.RepresentativeDataset(
_representative_dataset_gen)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_buffer = converter.convert()
with tf.gfile.GFile(FLAGS.output_tflite, "wb") as output_tflite:
output_tflite.write(tflite_buffer)
print("tflite model written to %s" % FLAGS.output_tflite)
if __name__ == "__main__":
flags.mark_flag_as_required("model_name")
flags.mark_flag_as_required("checkpoint_path")
flags.mark_flag_as_required("dataset_dir")
flags.mark_flag_as_required("output_tflite")
app.run(main)
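# Example invocation (a hypothetical sketch; the paths and model name below
# are placeholders, not values taken from this repository):
#
#   python post_training_quantization.py \
#     --model_name=mobilenet_v1 \
#     --checkpoint_path=/tmp/mobilenet_v1.ckpt \
#     --dataset_dir=/tmp/imagenet \
#     --output_tflite=/tmp/mobilenet_v1_int8.tflite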
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/post_training_quantization.py | post_training_quantization.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the model definition for the OverFeat network.
The definition for the network was obtained from:
OverFeat: Integrated Recognition, Localization and Detection using
Convolutional Networks
Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus and
Yann LeCun, 2014
http://arxiv.org/abs/1312.6229
Usage:
with slim.arg_scope(overfeat.overfeat_arg_scope()):
outputs, end_points = overfeat.overfeat(inputs)
@@overfeat
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def overfeat_arg_scope(weight_decay=0.0005):
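  """Defines the default OverFeat arg scope.
  Args:
    weight_decay: The L2 weight decay applied to conv and fully-connected
      weights.
  Returns:
    An `arg_scope` to use for the OverFeat model.
  """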
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(weight_decay),
biases_initializer=tf.zeros_initializer()):
with slim.arg_scope([slim.conv2d], padding='SAME'):
with slim.arg_scope([slim.max_pool2d], padding='VALID') as arg_sc:
return arg_sc
def overfeat(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
scope='overfeat',
global_pool=False):
"""Contains the model definition for the OverFeat network.
The definition for the network was obtained from:
OverFeat: Integrated Recognition, Localization and Detection using
Convolutional Networks
Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus and
Yann LeCun, 2014
http://arxiv.org/abs/1312.6229
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 231x231. To use in fully
convolutional mode, set spatial_squeeze to false.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not to squeeze the spatial dimensions of the
      outputs. Useful to remove unnecessary dimensions for classification.
scope: Optional scope for the variables.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original OverFeat.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0 or
None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'overfeat', [inputs]) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID',
scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.conv2d(net, 256, [5, 5], padding='VALID', scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.conv2d(net, 512, [3, 3], scope='conv3')
net = slim.conv2d(net, 1024, [3, 3], scope='conv4')
net = slim.conv2d(net, 1024, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
with slim.arg_scope(
[slim.conv2d],
weights_initializer=trunc_normal(0.005),
biases_initializer=tf.constant_initializer(0.1)):
net = slim.conv2d(net, 3072, [6, 6], padding='VALID', scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(
end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(
net,
num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
biases_initializer=tf.zeros_initializer(),
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
overfeat.default_image_size = 231
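# Fully convolutional usage (a minimal sketch; `images` is assumed to be a
# float tensor with spatial size larger than 231x231 supplied by the caller):
#
#   with slim.arg_scope(overfeat_arg_scope()):
#     logits, end_points = overfeat(images, num_classes=1000,
#                                   spatial_squeeze=False,
#                                   is_training=False)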
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/overfeat.py | overfeat.py |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition for Inflated 3D Inception V1 (I3D).
The network architecture is proposed by:
Joao Carreira and Andrew Zisserman,
Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset.
https://arxiv.org/abs/1705.07750
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import i3d_utils
from nets import s3dg
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
conv3d_spatiotemporal = i3d_utils.conv3d_spatiotemporal
def i3d_arg_scope(weight_decay=1e-7,
batch_norm_decay=0.999,
batch_norm_epsilon=0.001,
use_renorm=False,
separable_conv3d=False):
"""Defines default arg_scope for I3D.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
use_renorm: Whether to use batch renormalization or not.
separable_conv3d: Whether to use separable 3d Convs.
Returns:
sc: An arg_scope to use for the models.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
# Turns off fused batch norm.
'fused': False,
'renorm': use_renorm,
# collection containing the moving mean and moving variance.
'variables_collections': {
'beta': None,
'gamma': None,
'moving_mean': ['moving_vars'],
'moving_variance': ['moving_vars'],
}
}
with slim.arg_scope(
[slim.conv3d, conv3d_spatiotemporal],
weights_regularizer=slim.l2_regularizer(weight_decay),
activation_fn=tf.nn.relu,
normalizer_fn=slim.batch_norm,
normalizer_params=batch_norm_params):
with slim.arg_scope(
[conv3d_spatiotemporal], separable=separable_conv3d) as sc:
return sc
def i3d_base(inputs, final_endpoint='Mixed_5c',
scope='InceptionV1'):
"""Defines the I3D base architecture.
Note that we use the names as defined in Inception V1 to facilitate checkpoint
conversion from an image-trained Inception V1 checkpoint to I3D checkpoint.
Args:
inputs: A 5-D float tensor of size [batch_size, num_frames, height, width,
channels].
final_endpoint: Specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b', 'Mixed_5c']
scope: Optional variable_scope.
Returns:
A dictionary from components of the network to the corresponding activation.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values.
"""
return s3dg.s3dg_base(
inputs,
first_temporal_kernel_size=7,
temporal_conv_startat='Conv2d_2c_3x3',
gating_startat=None,
final_endpoint=final_endpoint,
min_depth=16,
depth_multiplier=1.0,
data_format='NDHWC',
scope=scope)
def i3d(inputs,
num_classes=1000,
dropout_keep_prob=0.8,
is_training=True,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='InceptionV1'):
"""Defines the I3D architecture.
The default image size used to train this network is 224x224.
Args:
inputs: A 5-D float tensor of size [batch_size, num_frames, height, width,
channels].
num_classes: number of predicted classes.
dropout_keep_prob: the percentage of activation values that are retained.
is_training: whether is training or not.
prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape [B, C]; if False, logits is
      of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
Returns:
logits: the pre-softmax activations, a tensor of size
[batch_size, num_classes]
end_points: a dictionary from components of the network to the corresponding
activation.
"""
# Final pooling and prediction
with tf.variable_scope(
scope, 'InceptionV1', [inputs, num_classes], reuse=reuse) as scope:
with slim.arg_scope(
[slim.batch_norm, slim.dropout], is_training=is_training):
net, end_points = i3d_base(inputs, scope=scope)
with tf.variable_scope('Logits'):
kernel_size = i3d_utils.reduced_kernel_size_3d(net, [2, 7, 7])
net = slim.avg_pool3d(
net, kernel_size, stride=1, scope='AvgPool_0a_7x7')
net = slim.dropout(net, dropout_keep_prob, scope='Dropout_0b')
logits = slim.conv3d(
net,
num_classes, [1, 1, 1],
activation_fn=None,
normalizer_fn=None,
scope='Conv2d_0c_1x1')
# Temporal average pooling.
logits = tf.reduce_mean(input_tensor=logits, axis=1)
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
i3d.default_image_size = 224
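# Example usage (a minimal sketch; `video` is assumed to be a
# [batch, num_frames, 224, 224, 3] float tensor supplied by the caller):
#
#   with slim.arg_scope(i3d_arg_scope()):
#     logits, end_points = i3d(video, num_classes=400, is_training=False)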
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/i3d.py | i3d.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains a variant of the CIFAR-10 model definition."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
stddev=stddev)
def cifarnet(images, num_classes=10, is_training=False,
dropout_keep_prob=0.5,
prediction_fn=slim.softmax,
scope='CifarNet'):
"""Creates a variant of the CifarNet model.
Note that since the output is a set of 'logits', the values fall in the
interval of (-infinity, infinity). Consequently, to convert the outputs to a
  probability distribution over the classes, one will need to convert them
using the softmax function:
        logits, end_points = cifarnet.cifarnet(images, is_training=False)
probabilities = tf.nn.softmax(logits)
predictions = tf.argmax(logits, 1)
Args:
images: A batch of `Tensors` of size [batch_size, height, width, channels].
num_classes: the number of classes in the dataset. If 0 or None, the logits
layer is omitted and the input features to the logits layer are returned
instead.
is_training: specifies whether or not we're currently training the model.
This variable will determine the behaviour of the dropout layer.
dropout_keep_prob: the percentage of activation values that are retained.
prediction_fn: a function to get predictions out of logits.
scope: Optional variable_scope.
Returns:
net: a 2D Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the input to the logits layer if num_classes
is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
"""
end_points = {}
with tf.variable_scope(scope, 'CifarNet', [images]):
net = slim.conv2d(images, 64, [5, 5], scope='conv1')
end_points['conv1'] = net
net = slim.max_pool2d(net, [2, 2], 2, scope='pool1')
end_points['pool1'] = net
net = tf.nn.lrn(net, 4, bias=1.0, alpha=0.001/9.0, beta=0.75, name='norm1')
net = slim.conv2d(net, 64, [5, 5], scope='conv2')
end_points['conv2'] = net
net = tf.nn.lrn(net, 4, bias=1.0, alpha=0.001/9.0, beta=0.75, name='norm2')
net = slim.max_pool2d(net, [2, 2], 2, scope='pool2')
end_points['pool2'] = net
net = slim.flatten(net)
end_points['Flatten'] = net
net = slim.fully_connected(net, 384, scope='fc3')
end_points['fc3'] = net
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout3')
net = slim.fully_connected(net, 192, scope='fc4')
end_points['fc4'] = net
if not num_classes:
return net, end_points
logits = slim.fully_connected(
net,
num_classes,
biases_initializer=tf.zeros_initializer(),
weights_initializer=trunc_normal(1 / 192.0),
weights_regularizer=None,
activation_fn=None,
scope='logits')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
cifarnet.default_image_size = 32
def cifarnet_arg_scope(weight_decay=0.004):
"""Defines the default cifarnet argument scope.
Args:
weight_decay: The weight decay to use for regularizing the model.
Returns:
    An `arg_scope` to use for the CifarNet model.
"""
with slim.arg_scope(
[slim.conv2d],
weights_initializer=tf.truncated_normal_initializer(
stddev=5e-2),
activation_fn=tf.nn.relu):
with slim.arg_scope(
[slim.fully_connected],
biases_initializer=tf.constant_initializer(0.1),
weights_initializer=trunc_normal(0.04),
weights_regularizer=slim.l2_regularizer(weight_decay),
activation_fn=tf.nn.relu) as sc:
return sc
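# Example usage (a minimal sketch; `images` is assumed to be a
# [batch, 32, 32, 3] float tensor supplied by the caller):
#
#   with slim.arg_scope(cifarnet_arg_scope()):
#     logits, end_points = cifarnet(images, num_classes=10, is_training=True)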
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/cifarnet.py | cifarnet.py |
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""Tests for MobileNet v1."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import mobilenet_v1
class MobilenetV1Test(tf.test.TestCase):
def testBuildClassificationNetwork(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = mobilenet_v1.mobilenet_v1(inputs, num_classes)
self.assertTrue(logits.op.name.startswith(
'MobilenetV1/Logits/SpatialSqueeze'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Predictions' in end_points)
self.assertListEqual(end_points['Predictions'].get_shape().as_list(),
[batch_size, num_classes])
def testBuildPreLogitsNetwork(self):
batch_size = 5
height, width = 224, 224
num_classes = None
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = mobilenet_v1.mobilenet_v1(inputs, num_classes)
self.assertTrue(net.op.name.startswith('MobilenetV1/Logits/AvgPool'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 1, 1, 1024])
self.assertFalse('Logits' in end_points)
self.assertFalse('Predictions' in end_points)
def testBuildBaseNetwork(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = mobilenet_v1.mobilenet_v1_base(inputs)
self.assertTrue(net.op.name.startswith('MobilenetV1/Conv2d_13'))
self.assertListEqual(net.get_shape().as_list(),
[batch_size, 7, 7, 1024])
expected_endpoints = ['Conv2d_0',
'Conv2d_1_depthwise', 'Conv2d_1_pointwise',
'Conv2d_2_depthwise', 'Conv2d_2_pointwise',
'Conv2d_3_depthwise', 'Conv2d_3_pointwise',
'Conv2d_4_depthwise', 'Conv2d_4_pointwise',
'Conv2d_5_depthwise', 'Conv2d_5_pointwise',
'Conv2d_6_depthwise', 'Conv2d_6_pointwise',
'Conv2d_7_depthwise', 'Conv2d_7_pointwise',
'Conv2d_8_depthwise', 'Conv2d_8_pointwise',
'Conv2d_9_depthwise', 'Conv2d_9_pointwise',
'Conv2d_10_depthwise', 'Conv2d_10_pointwise',
'Conv2d_11_depthwise', 'Conv2d_11_pointwise',
'Conv2d_12_depthwise', 'Conv2d_12_pointwise',
'Conv2d_13_depthwise', 'Conv2d_13_pointwise']
self.assertItemsEqual(end_points.keys(), expected_endpoints)
def testBuildOnlyUptoFinalEndpoint(self):
batch_size = 5
height, width = 224, 224
endpoints = ['Conv2d_0',
'Conv2d_1_depthwise', 'Conv2d_1_pointwise',
'Conv2d_2_depthwise', 'Conv2d_2_pointwise',
'Conv2d_3_depthwise', 'Conv2d_3_pointwise',
'Conv2d_4_depthwise', 'Conv2d_4_pointwise',
'Conv2d_5_depthwise', 'Conv2d_5_pointwise',
'Conv2d_6_depthwise', 'Conv2d_6_pointwise',
'Conv2d_7_depthwise', 'Conv2d_7_pointwise',
'Conv2d_8_depthwise', 'Conv2d_8_pointwise',
'Conv2d_9_depthwise', 'Conv2d_9_pointwise',
'Conv2d_10_depthwise', 'Conv2d_10_pointwise',
'Conv2d_11_depthwise', 'Conv2d_11_pointwise',
'Conv2d_12_depthwise', 'Conv2d_12_pointwise',
'Conv2d_13_depthwise', 'Conv2d_13_pointwise']
for index, endpoint in enumerate(endpoints):
with tf.Graph().as_default():
inputs = tf.random.uniform((batch_size, height, width, 3))
out_tensor, end_points = mobilenet_v1.mobilenet_v1_base(
inputs, final_endpoint=endpoint)
self.assertTrue(out_tensor.op.name.startswith(
'MobilenetV1/' + endpoint))
self.assertItemsEqual(endpoints[:index + 1], end_points.keys())
def testBuildCustomNetworkUsingConvDefs(self):
batch_size = 5
height, width = 224, 224
conv_defs = [
mobilenet_v1.Conv(kernel=[3, 3], stride=2, depth=32),
mobilenet_v1.DepthSepConv(kernel=[3, 3], stride=1, depth=64),
mobilenet_v1.DepthSepConv(kernel=[3, 3], stride=2, depth=128),
mobilenet_v1.DepthSepConv(kernel=[3, 3], stride=1, depth=512)
]
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = mobilenet_v1.mobilenet_v1_base(
inputs, final_endpoint='Conv2d_3_pointwise', conv_defs=conv_defs)
self.assertTrue(net.op.name.startswith('MobilenetV1/Conv2d_3'))
self.assertListEqual(net.get_shape().as_list(),
[batch_size, 56, 56, 512])
expected_endpoints = ['Conv2d_0',
'Conv2d_1_depthwise', 'Conv2d_1_pointwise',
'Conv2d_2_depthwise', 'Conv2d_2_pointwise',
'Conv2d_3_depthwise', 'Conv2d_3_pointwise']
self.assertItemsEqual(end_points.keys(), expected_endpoints)
def testBuildAndCheckAllEndPointsUptoConv2d_13(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
with slim.arg_scope([slim.conv2d, slim.separable_conv2d],
normalizer_fn=slim.batch_norm):
_, end_points = mobilenet_v1.mobilenet_v1_base(
inputs, final_endpoint='Conv2d_13_pointwise')
_, explicit_padding_end_points = mobilenet_v1.mobilenet_v1_base(
inputs, final_endpoint='Conv2d_13_pointwise',
use_explicit_padding=True)
endpoints_shapes = {'Conv2d_0': [batch_size, 112, 112, 32],
'Conv2d_1_depthwise': [batch_size, 112, 112, 32],
'Conv2d_1_pointwise': [batch_size, 112, 112, 64],
'Conv2d_2_depthwise': [batch_size, 56, 56, 64],
'Conv2d_2_pointwise': [batch_size, 56, 56, 128],
'Conv2d_3_depthwise': [batch_size, 56, 56, 128],
'Conv2d_3_pointwise': [batch_size, 56, 56, 128],
'Conv2d_4_depthwise': [batch_size, 28, 28, 128],
'Conv2d_4_pointwise': [batch_size, 28, 28, 256],
'Conv2d_5_depthwise': [batch_size, 28, 28, 256],
'Conv2d_5_pointwise': [batch_size, 28, 28, 256],
'Conv2d_6_depthwise': [batch_size, 14, 14, 256],
'Conv2d_6_pointwise': [batch_size, 14, 14, 512],
'Conv2d_7_depthwise': [batch_size, 14, 14, 512],
'Conv2d_7_pointwise': [batch_size, 14, 14, 512],
'Conv2d_8_depthwise': [batch_size, 14, 14, 512],
'Conv2d_8_pointwise': [batch_size, 14, 14, 512],
'Conv2d_9_depthwise': [batch_size, 14, 14, 512],
'Conv2d_9_pointwise': [batch_size, 14, 14, 512],
'Conv2d_10_depthwise': [batch_size, 14, 14, 512],
'Conv2d_10_pointwise': [batch_size, 14, 14, 512],
'Conv2d_11_depthwise': [batch_size, 14, 14, 512],
'Conv2d_11_pointwise': [batch_size, 14, 14, 512],
'Conv2d_12_depthwise': [batch_size, 7, 7, 512],
'Conv2d_12_pointwise': [batch_size, 7, 7, 1024],
'Conv2d_13_depthwise': [batch_size, 7, 7, 1024],
'Conv2d_13_pointwise': [batch_size, 7, 7, 1024]}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name, expected_shape in endpoints_shapes.items():
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
self.assertItemsEqual(endpoints_shapes.keys(),
explicit_padding_end_points.keys())
for endpoint_name, expected_shape in endpoints_shapes.items():
self.assertTrue(endpoint_name in explicit_padding_end_points)
self.assertListEqual(
explicit_padding_end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testOutputStride16BuildAndCheckAllEndPointsUptoConv2d_13(self):
batch_size = 5
height, width = 224, 224
output_stride = 16
inputs = tf.random.uniform((batch_size, height, width, 3))
with slim.arg_scope([slim.conv2d, slim.separable_conv2d],
normalizer_fn=slim.batch_norm):
_, end_points = mobilenet_v1.mobilenet_v1_base(
inputs, output_stride=output_stride,
final_endpoint='Conv2d_13_pointwise')
_, explicit_padding_end_points = mobilenet_v1.mobilenet_v1_base(
inputs, output_stride=output_stride,
final_endpoint='Conv2d_13_pointwise', use_explicit_padding=True)
endpoints_shapes = {'Conv2d_0': [batch_size, 112, 112, 32],
'Conv2d_1_depthwise': [batch_size, 112, 112, 32],
'Conv2d_1_pointwise': [batch_size, 112, 112, 64],
'Conv2d_2_depthwise': [batch_size, 56, 56, 64],
'Conv2d_2_pointwise': [batch_size, 56, 56, 128],
'Conv2d_3_depthwise': [batch_size, 56, 56, 128],
'Conv2d_3_pointwise': [batch_size, 56, 56, 128],
'Conv2d_4_depthwise': [batch_size, 28, 28, 128],
'Conv2d_4_pointwise': [batch_size, 28, 28, 256],
'Conv2d_5_depthwise': [batch_size, 28, 28, 256],
'Conv2d_5_pointwise': [batch_size, 28, 28, 256],
'Conv2d_6_depthwise': [batch_size, 14, 14, 256],
'Conv2d_6_pointwise': [batch_size, 14, 14, 512],
'Conv2d_7_depthwise': [batch_size, 14, 14, 512],
'Conv2d_7_pointwise': [batch_size, 14, 14, 512],
'Conv2d_8_depthwise': [batch_size, 14, 14, 512],
'Conv2d_8_pointwise': [batch_size, 14, 14, 512],
'Conv2d_9_depthwise': [batch_size, 14, 14, 512],
'Conv2d_9_pointwise': [batch_size, 14, 14, 512],
'Conv2d_10_depthwise': [batch_size, 14, 14, 512],
'Conv2d_10_pointwise': [batch_size, 14, 14, 512],
'Conv2d_11_depthwise': [batch_size, 14, 14, 512],
'Conv2d_11_pointwise': [batch_size, 14, 14, 512],
'Conv2d_12_depthwise': [batch_size, 14, 14, 512],
'Conv2d_12_pointwise': [batch_size, 14, 14, 1024],
'Conv2d_13_depthwise': [batch_size, 14, 14, 1024],
'Conv2d_13_pointwise': [batch_size, 14, 14, 1024]}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name, expected_shape in endpoints_shapes.items():
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
self.assertItemsEqual(endpoints_shapes.keys(),
explicit_padding_end_points.keys())
for endpoint_name, expected_shape in endpoints_shapes.items():
self.assertTrue(endpoint_name in explicit_padding_end_points)
self.assertListEqual(
explicit_padding_end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testOutputStride8BuildAndCheckAllEndPointsUptoConv2d_13(self):
batch_size = 5
height, width = 224, 224
output_stride = 8
inputs = tf.random.uniform((batch_size, height, width, 3))
with slim.arg_scope([slim.conv2d, slim.separable_conv2d],
normalizer_fn=slim.batch_norm):
_, end_points = mobilenet_v1.mobilenet_v1_base(
inputs, output_stride=output_stride,
final_endpoint='Conv2d_13_pointwise')
_, explicit_padding_end_points = mobilenet_v1.mobilenet_v1_base(
inputs, output_stride=output_stride,
final_endpoint='Conv2d_13_pointwise', use_explicit_padding=True)
endpoints_shapes = {'Conv2d_0': [batch_size, 112, 112, 32],
'Conv2d_1_depthwise': [batch_size, 112, 112, 32],
'Conv2d_1_pointwise': [batch_size, 112, 112, 64],
'Conv2d_2_depthwise': [batch_size, 56, 56, 64],
'Conv2d_2_pointwise': [batch_size, 56, 56, 128],
'Conv2d_3_depthwise': [batch_size, 56, 56, 128],
'Conv2d_3_pointwise': [batch_size, 56, 56, 128],
'Conv2d_4_depthwise': [batch_size, 28, 28, 128],
'Conv2d_4_pointwise': [batch_size, 28, 28, 256],
'Conv2d_5_depthwise': [batch_size, 28, 28, 256],
'Conv2d_5_pointwise': [batch_size, 28, 28, 256],
'Conv2d_6_depthwise': [batch_size, 28, 28, 256],
'Conv2d_6_pointwise': [batch_size, 28, 28, 512],
'Conv2d_7_depthwise': [batch_size, 28, 28, 512],
'Conv2d_7_pointwise': [batch_size, 28, 28, 512],
'Conv2d_8_depthwise': [batch_size, 28, 28, 512],
'Conv2d_8_pointwise': [batch_size, 28, 28, 512],
'Conv2d_9_depthwise': [batch_size, 28, 28, 512],
'Conv2d_9_pointwise': [batch_size, 28, 28, 512],
'Conv2d_10_depthwise': [batch_size, 28, 28, 512],
'Conv2d_10_pointwise': [batch_size, 28, 28, 512],
'Conv2d_11_depthwise': [batch_size, 28, 28, 512],
'Conv2d_11_pointwise': [batch_size, 28, 28, 512],
'Conv2d_12_depthwise': [batch_size, 28, 28, 512],
'Conv2d_12_pointwise': [batch_size, 28, 28, 1024],
'Conv2d_13_depthwise': [batch_size, 28, 28, 1024],
'Conv2d_13_pointwise': [batch_size, 28, 28, 1024]}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name, expected_shape in endpoints_shapes.items():
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
self.assertItemsEqual(endpoints_shapes.keys(),
explicit_padding_end_points.keys())
for endpoint_name, expected_shape in endpoints_shapes.items():
self.assertTrue(endpoint_name in explicit_padding_end_points)
self.assertListEqual(
explicit_padding_end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testBuildAndCheckAllEndPointsApproximateFaceNet(self):
batch_size = 5
height, width = 128, 128
inputs = tf.random.uniform((batch_size, height, width, 3))
with slim.arg_scope([slim.conv2d, slim.separable_conv2d],
normalizer_fn=slim.batch_norm):
_, end_points = mobilenet_v1.mobilenet_v1_base(
inputs, final_endpoint='Conv2d_13_pointwise', depth_multiplier=0.75)
_, explicit_padding_end_points = mobilenet_v1.mobilenet_v1_base(
inputs, final_endpoint='Conv2d_13_pointwise', depth_multiplier=0.75,
use_explicit_padding=True)
# For the Conv2d_0 layer FaceNet has depth=16
endpoints_shapes = {'Conv2d_0': [batch_size, 64, 64, 24],
'Conv2d_1_depthwise': [batch_size, 64, 64, 24],
'Conv2d_1_pointwise': [batch_size, 64, 64, 48],
'Conv2d_2_depthwise': [batch_size, 32, 32, 48],
'Conv2d_2_pointwise': [batch_size, 32, 32, 96],
'Conv2d_3_depthwise': [batch_size, 32, 32, 96],
'Conv2d_3_pointwise': [batch_size, 32, 32, 96],
'Conv2d_4_depthwise': [batch_size, 16, 16, 96],
'Conv2d_4_pointwise': [batch_size, 16, 16, 192],
'Conv2d_5_depthwise': [batch_size, 16, 16, 192],
'Conv2d_5_pointwise': [batch_size, 16, 16, 192],
'Conv2d_6_depthwise': [batch_size, 8, 8, 192],
'Conv2d_6_pointwise': [batch_size, 8, 8, 384],
'Conv2d_7_depthwise': [batch_size, 8, 8, 384],
'Conv2d_7_pointwise': [batch_size, 8, 8, 384],
'Conv2d_8_depthwise': [batch_size, 8, 8, 384],
'Conv2d_8_pointwise': [batch_size, 8, 8, 384],
'Conv2d_9_depthwise': [batch_size, 8, 8, 384],
'Conv2d_9_pointwise': [batch_size, 8, 8, 384],
'Conv2d_10_depthwise': [batch_size, 8, 8, 384],
'Conv2d_10_pointwise': [batch_size, 8, 8, 384],
'Conv2d_11_depthwise': [batch_size, 8, 8, 384],
'Conv2d_11_pointwise': [batch_size, 8, 8, 384],
'Conv2d_12_depthwise': [batch_size, 4, 4, 384],
'Conv2d_12_pointwise': [batch_size, 4, 4, 768],
'Conv2d_13_depthwise': [batch_size, 4, 4, 768],
'Conv2d_13_pointwise': [batch_size, 4, 4, 768]}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name, expected_shape in endpoints_shapes.items():
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
self.assertItemsEqual(endpoints_shapes.keys(),
explicit_padding_end_points.keys())
for endpoint_name, expected_shape in endpoints_shapes.items():
self.assertTrue(endpoint_name in explicit_padding_end_points)
self.assertListEqual(
explicit_padding_end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testModelHasExpectedNumberOfParameters(self):
batch_size = 5
height, width = 224, 224
inputs = tf.random.uniform((batch_size, height, width, 3))
with slim.arg_scope([slim.conv2d, slim.separable_conv2d],
normalizer_fn=slim.batch_norm):
mobilenet_v1.mobilenet_v1_base(inputs)
total_params, _ = slim.model_analyzer.analyze_vars(
slim.get_model_variables())
self.assertAlmostEqual(3217920, total_params)
def testBuildEndPointsWithDepthMultiplierLessThanOne(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = mobilenet_v1.mobilenet_v1(inputs, num_classes)
endpoint_keys = [key for key in end_points.keys() if key.startswith('Conv')]
_, end_points_with_multiplier = mobilenet_v1.mobilenet_v1(
inputs, num_classes, scope='depth_multiplied_net',
depth_multiplier=0.5)
for key in endpoint_keys:
original_depth = end_points[key].get_shape().as_list()[3]
new_depth = end_points_with_multiplier[key].get_shape().as_list()[3]
self.assertEqual(0.5 * original_depth, new_depth)
def testBuildEndPointsWithDepthMultiplierGreaterThanOne(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = mobilenet_v1.mobilenet_v1(inputs, num_classes)
endpoint_keys = [key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')]
_, end_points_with_multiplier = mobilenet_v1.mobilenet_v1(
inputs, num_classes, scope='depth_multiplied_net',
depth_multiplier=2.0)
for key in endpoint_keys:
original_depth = end_points[key].get_shape().as_list()[3]
new_depth = end_points_with_multiplier[key].get_shape().as_list()[3]
self.assertEqual(2.0 * original_depth, new_depth)
def testRaiseValueErrorWithInvalidDepthMultiplier(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
with self.assertRaises(ValueError):
_ = mobilenet_v1.mobilenet_v1(
inputs, num_classes, depth_multiplier=-0.1)
with self.assertRaises(ValueError):
_ = mobilenet_v1.mobilenet_v1(
inputs, num_classes, depth_multiplier=0.0)
def testHalfSizeImages(self):
batch_size = 5
height, width = 112, 112
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = mobilenet_v1.mobilenet_v1(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('MobilenetV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Conv2d_13_pointwise']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 4, 4, 1024])
def testUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 2
height, width = 224, 224
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(
tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = mobilenet_v1.mobilenet_v1(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('MobilenetV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Conv2d_13_pointwise']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 7, 7, 1024])
def testGlobalPoolUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 1
height, width = 250, 300
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(
tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = mobilenet_v1.mobilenet_v1(inputs, num_classes,
global_pool=True)
self.assertTrue(logits.op.name.startswith('MobilenetV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Conv2d_13_pointwise']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 8, 10, 1024])
  def testUnknownBatchSize(self):
batch_size = 1
height, width = 224, 224
num_classes = 1000
inputs = tf.placeholder(tf.float32, (None, height, width, 3))
logits, _ = mobilenet_v1.mobilenet_v1(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('MobilenetV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, num_classes])
images = tf.random.uniform((batch_size, height, width, 3))
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
self.assertEquals(output.shape, (batch_size, num_classes))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = mobilenet_v1.mobilenet_v1(eval_inputs, num_classes,
is_training=False)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (batch_size,))
def testTrainEvalWithReuse(self):
train_batch_size = 5
eval_batch_size = 2
height, width = 150, 150
num_classes = 1000
train_inputs = tf.random.uniform((train_batch_size, height, width, 3))
mobilenet_v1.mobilenet_v1(train_inputs, num_classes)
eval_inputs = tf.random.uniform((eval_batch_size, height, width, 3))
logits, _ = mobilenet_v1.mobilenet_v1(eval_inputs, num_classes,
reuse=True)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (eval_batch_size,))
def testLogitsNotSqueezed(self):
num_classes = 25
images = tf.random.uniform([1, 224, 224, 3])
logits, _ = mobilenet_v1.mobilenet_v1(images,
num_classes=num_classes,
spatial_squeeze=False)
with self.test_session() as sess:
tf.global_variables_initializer().run()
logits_out = sess.run(logits)
self.assertListEqual(list(logits_out.shape), [1, 1, 1, num_classes])
def testBatchNormScopeDoesNotHaveIsTrainingWhenItsSetToNone(self):
sc = mobilenet_v1.mobilenet_v1_arg_scope(is_training=None)
self.assertNotIn('is_training', sc[slim.arg_scope_func_key(
slim.batch_norm)])
def testBatchNormScopeDoesHasIsTrainingWhenItsNotNone(self):
sc = mobilenet_v1.mobilenet_v1_arg_scope(is_training=True)
self.assertIn('is_training', sc[slim.arg_scope_func_key(slim.batch_norm)])
sc = mobilenet_v1.mobilenet_v1_arg_scope(is_training=False)
self.assertIn('is_training', sc[slim.arg_scope_func_key(slim.batch_norm)])
sc = mobilenet_v1.mobilenet_v1_arg_scope()
self.assertIn('is_training', sc[slim.arg_scope_func_key(slim.batch_norm)])
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/mobilenet_v1_test.py | mobilenet_v1_test.py |
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Defines the CycleGAN generator and discriminator networks."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.python.framework import tensor_util
def cyclegan_arg_scope(instance_norm_center=True,
instance_norm_scale=True,
instance_norm_epsilon=0.001,
weights_init_stddev=0.02,
weight_decay=0.0):
"""Returns a default argument scope for all generators and discriminators.
Args:
instance_norm_center: Whether instance normalization applies centering.
instance_norm_scale: Whether instance normalization applies scaling.
instance_norm_epsilon: Small float added to the variance in the instance
normalization to avoid dividing by zero.
weights_init_stddev: Standard deviation of the random values to initialize
the convolution kernels with.
weight_decay: Magnitude of weight decay applied to all convolution kernel
variables of the generator.
Returns:
An arg-scope.
"""
instance_norm_params = {
'center': instance_norm_center,
'scale': instance_norm_scale,
'epsilon': instance_norm_epsilon,
}
weights_regularizer = None
if weight_decay and weight_decay > 0.0:
weights_regularizer = slim.l2_regularizer(weight_decay)
with slim.arg_scope(
[slim.conv2d],
normalizer_fn=slim.instance_norm,
normalizer_params=instance_norm_params,
weights_initializer=tf.random_normal_initializer(
0, weights_init_stddev),
weights_regularizer=weights_regularizer) as sc:
return sc
def cyclegan_upsample(net, num_outputs, stride, method='conv2d_transpose',
pad_mode='REFLECT', align_corners=False):
"""Upsamples the given inputs.
Args:
net: A Tensor of size [batch_size, height, width, filters].
num_outputs: The number of output filters.
stride: A list of 2 scalars or a 1x2 Tensor indicating the scale,
      relative to the inputs, of the output dimensions. For example, if stride
      is [2, 3], then the output height and width will be twice and three
      times the input size.
method: The upsampling method: 'nn_upsample_conv', 'bilinear_upsample_conv',
or 'conv2d_transpose'.
pad_mode: mode for tf.pad, one of "CONSTANT", "REFLECT", or "SYMMETRIC".
align_corners: option for method, 'bilinear_upsample_conv'. If true, the
centers of the 4 corner pixels of the input and output tensors are
aligned, preserving the values at the corner pixels.
Returns:
A Tensor which was upsampled using the specified method.
Raises:
ValueError: if `method` is not recognized.
"""
with tf.variable_scope('upconv'):
net_shape = tf.shape(input=net)
height = net_shape[1]
width = net_shape[2]
# Reflection pad by 1 in spatial dimensions (axes 1, 2 = h, w) to make a 3x3
# 'valid' convolution produce an output with the same dimension as the
# input.
spatial_pad_1 = np.array([[0, 0], [1, 1], [1, 1], [0, 0]])
if method == 'nn_upsample_conv':
net = tf.image.resize(
net, [stride[0] * height, stride[1] * width],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
net = tf.pad(tensor=net, paddings=spatial_pad_1, mode=pad_mode)
net = slim.conv2d(net, num_outputs, kernel_size=[3, 3], padding='valid')
elif method == 'bilinear_upsample_conv':
net = tf.image.resize_bilinear(
net, [stride[0] * height, stride[1] * width],
align_corners=align_corners)
net = tf.pad(tensor=net, paddings=spatial_pad_1, mode=pad_mode)
net = slim.conv2d(net, num_outputs, kernel_size=[3, 3], padding='valid')
elif method == 'conv2d_transpose':
# This corrects 1 pixel offset for images with even width and height.
# conv2d is left aligned and conv2d_transpose is right aligned for even
# sized images (while doing 'SAME' padding).
# Note: This doesn't reflect actual model in paper.
net = slim.conv2d_transpose(
net, num_outputs, kernel_size=[3, 3], stride=stride, padding='valid')
net = net[:, 1:, 1:, :]
else:
raise ValueError('Unknown method: [%s]' % method)
return net
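# Illustrative sketch (not part of the original file; the helper name is
# hypothetical): with stride [2, 2] each method above doubles the spatial
# dimensions, and the reflection pad of 1 keeps the 3x3 'valid' convolution
# from shrinking the upsampled map.
def _example_cyclegan_upsample():
  images = tf.placeholder(tf.float32, [1, 32, 32, 64])
  with slim.arg_scope(cyclegan_arg_scope()):
    # Evaluates to a [1, 64, 64, 32] tensor at run time; the static spatial
    # dims may be unknown because the target size is computed from tf.shape.
    return cyclegan_upsample(images, num_outputs=32, stride=[2, 2],
                             method='nn_upsample_conv')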
def _dynamic_or_static_shape(tensor):
shape = tf.shape(input=tensor)
static_shape = tensor_util.constant_value(shape)
return static_shape if static_shape is not None else shape
def cyclegan_generator_resnet(images,
arg_scope_fn=cyclegan_arg_scope,
num_resnet_blocks=6,
num_filters=64,
upsample_fn=cyclegan_upsample,
kernel_size=3,
tanh_linear_slope=0.0,
is_training=False):
"""Defines the cyclegan resnet network architecture.
As closely as possible following
https://github.com/junyanz/CycleGAN/blob/master/models/architectures.lua#L232
  FYI: This network requires input height and width to be divisible by 4 in
  order to generate an output with shape equal to input shape. Assertions will
  catch this if the input dimensions are known at graph construction time; if
  they are unknown at graph construction time there is no protection and you
  will see a runtime error.
Args:
images: Input image tensor of shape [batch_size, h, w, 3].
arg_scope_fn: Function to create the global arg_scope for the network.
num_resnet_blocks: Number of ResNet blocks in the middle of the generator.
num_filters: Number of filters of the first hidden layer.
upsample_fn: Upsampling function for the decoder part of the generator.
kernel_size: Size w or list/tuple [h, w] of the filter kernels for all inner
layers.
tanh_linear_slope: Slope of the linear function to add to the tanh over the
logits.
is_training: Whether the network is created in training mode or inference
only mode. Not actually needed, just for compliance with other generator
network functions.
Returns:
A `Tensor` representing the model output and a dictionary of model end
points.
Raises:
ValueError: If the input height or width is known at graph construction time
and not a multiple of 4.
"""
  # Neither dropout nor batch norm -> don't need is_training
del is_training
end_points = {}
input_size = images.shape.as_list()
height, width = input_size[1], input_size[2]
if height and height % 4 != 0:
raise ValueError('The input height must be a multiple of 4.')
if width and width % 4 != 0:
raise ValueError('The input width must be a multiple of 4.')
num_outputs = input_size[3]
if not isinstance(kernel_size, (list, tuple)):
kernel_size = [kernel_size, kernel_size]
kernel_height = kernel_size[0]
kernel_width = kernel_size[1]
pad_top = (kernel_height - 1) // 2
pad_bottom = kernel_height // 2
pad_left = (kernel_width - 1) // 2
pad_right = kernel_width // 2
paddings = np.array(
[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]],
dtype=np.int32)
spatial_pad_3 = np.array([[0, 0], [3, 3], [3, 3], [0, 0]])
with slim.arg_scope(arg_scope_fn()):
###########
# Encoder #
###########
with tf.variable_scope('input'):
# 7x7 input stage
net = tf.pad(tensor=images, paddings=spatial_pad_3, mode='REFLECT')
net = slim.conv2d(net, num_filters, kernel_size=[7, 7], padding='VALID')
end_points['encoder_0'] = net
with tf.variable_scope('encoder'):
with slim.arg_scope([slim.conv2d],
kernel_size=kernel_size,
stride=2,
activation_fn=tf.nn.relu,
padding='VALID'):
net = tf.pad(tensor=net, paddings=paddings, mode='REFLECT')
net = slim.conv2d(net, num_filters * 2)
end_points['encoder_1'] = net
net = tf.pad(tensor=net, paddings=paddings, mode='REFLECT')
net = slim.conv2d(net, num_filters * 4)
end_points['encoder_2'] = net
###################
# Residual Blocks #
###################
with tf.variable_scope('residual_blocks'):
with slim.arg_scope([slim.conv2d],
kernel_size=kernel_size,
stride=1,
activation_fn=tf.nn.relu,
padding='VALID'):
for block_id in xrange(num_resnet_blocks):
with tf.variable_scope('block_{}'.format(block_id)):
res_net = tf.pad(tensor=net, paddings=paddings, mode='REFLECT')
res_net = slim.conv2d(res_net, num_filters * 4)
res_net = tf.pad(tensor=res_net, paddings=paddings, mode='REFLECT')
res_net = slim.conv2d(res_net, num_filters * 4, activation_fn=None)
net += res_net
end_points['resnet_block_%d' % block_id] = net
###########
# Decoder #
###########
with tf.variable_scope('decoder'):
with slim.arg_scope([slim.conv2d],
kernel_size=kernel_size,
stride=1,
activation_fn=tf.nn.relu):
with tf.variable_scope('decoder1'):
net = upsample_fn(net, num_outputs=num_filters * 2, stride=[2, 2])
end_points['decoder1'] = net
with tf.variable_scope('decoder2'):
net = upsample_fn(net, num_outputs=num_filters, stride=[2, 2])
end_points['decoder2'] = net
with tf.variable_scope('output'):
net = tf.pad(tensor=net, paddings=spatial_pad_3, mode='REFLECT')
logits = slim.conv2d(
net,
num_outputs, [7, 7],
activation_fn=None,
normalizer_fn=None,
padding='valid')
logits = tf.reshape(logits, _dynamic_or_static_shape(images))
end_points['logits'] = logits
end_points['predictions'] = tf.tanh(logits) + logits * tanh_linear_slope
return end_points['predictions'], end_points
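# Illustrative sketch (not part of the original file; the helper name is
# hypothetical): building the generator on a dummy 64x64 RGB batch. The input
# sides must be multiples of 4, as documented above, and the returned tensor
# has the same shape as `images`.
def _example_cyclegan_generator_resnet():
  images = tf.placeholder(tf.float32, [1, 64, 64, 3])
  output, end_points = cyclegan_generator_resnet(images, num_resnet_blocks=2)
  return output, end_points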
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/cyclegan.py | cyclegan.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for nets.inception_v1."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception
class InceptionV3Test(tf.test.TestCase):
def testBuildClassificationNetwork(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v3(inputs, num_classes)
self.assertTrue(logits.op.name.startswith(
'InceptionV3/Logits/SpatialSqueeze'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Predictions' in end_points)
self.assertListEqual(end_points['Predictions'].get_shape().as_list(),
[batch_size, num_classes])
def testBuildPreLogitsNetwork(self):
batch_size = 5
height, width = 299, 299
num_classes = None
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = inception.inception_v3(inputs, num_classes)
self.assertTrue(net.op.name.startswith('InceptionV3/Logits/AvgPool'))
self.assertListEqual(net.get_shape().as_list(), [batch_size, 1, 1, 2048])
self.assertFalse('Logits' in end_points)
self.assertFalse('Predictions' in end_points)
def testBuildBaseNetwork(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random.uniform((batch_size, height, width, 3))
final_endpoint, end_points = inception.inception_v3_base(inputs)
self.assertTrue(final_endpoint.op.name.startswith(
'InceptionV3/Mixed_7c'))
self.assertListEqual(final_endpoint.get_shape().as_list(),
[batch_size, 8, 8, 2048])
expected_endpoints = ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3',
'MaxPool_5a_3x3', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d',
'Mixed_6e', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c']
self.assertItemsEqual(end_points.keys(), expected_endpoints)
def testBuildOnlyUptoFinalEndpoint(self):
batch_size = 5
height, width = 299, 299
endpoints = ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3',
'MaxPool_5a_3x3', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d',
'Mixed_6e', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c']
for index, endpoint in enumerate(endpoints):
with tf.Graph().as_default():
inputs = tf.random.uniform((batch_size, height, width, 3))
out_tensor, end_points = inception.inception_v3_base(
inputs, final_endpoint=endpoint)
self.assertTrue(out_tensor.op.name.startswith(
'InceptionV3/' + endpoint))
self.assertItemsEqual(endpoints[:index + 1], end_points.keys())
def testBuildAndCheckAllEndPointsUptoMixed7c(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v3_base(
inputs, final_endpoint='Mixed_7c')
endpoints_shapes = {'Conv2d_1a_3x3': [batch_size, 149, 149, 32],
'Conv2d_2a_3x3': [batch_size, 147, 147, 32],
'Conv2d_2b_3x3': [batch_size, 147, 147, 64],
'MaxPool_3a_3x3': [batch_size, 73, 73, 64],
'Conv2d_3b_1x1': [batch_size, 73, 73, 80],
'Conv2d_4a_3x3': [batch_size, 71, 71, 192],
'MaxPool_5a_3x3': [batch_size, 35, 35, 192],
'Mixed_5b': [batch_size, 35, 35, 256],
'Mixed_5c': [batch_size, 35, 35, 288],
'Mixed_5d': [batch_size, 35, 35, 288],
'Mixed_6a': [batch_size, 17, 17, 768],
'Mixed_6b': [batch_size, 17, 17, 768],
'Mixed_6c': [batch_size, 17, 17, 768],
'Mixed_6d': [batch_size, 17, 17, 768],
'Mixed_6e': [batch_size, 17, 17, 768],
'Mixed_7a': [batch_size, 8, 8, 1280],
'Mixed_7b': [batch_size, 8, 8, 2048],
'Mixed_7c': [batch_size, 8, 8, 2048]}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testModelHasExpectedNumberOfParameters(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random.uniform((batch_size, height, width, 3))
with slim.arg_scope(inception.inception_v3_arg_scope()):
inception.inception_v3_base(inputs)
total_params, _ = slim.model_analyzer.analyze_vars(
slim.get_model_variables())
self.assertAlmostEqual(21802784, total_params)
def testBuildEndPoints(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v3(inputs, num_classes)
self.assertTrue('Logits' in end_points)
logits = end_points['Logits']
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('AuxLogits' in end_points)
aux_logits = end_points['AuxLogits']
self.assertListEqual(aux_logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Mixed_7c' in end_points)
pre_pool = end_points['Mixed_7c']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 8, 8, 2048])
self.assertTrue('PreLogits' in end_points)
pre_logits = end_points['PreLogits']
self.assertListEqual(pre_logits.get_shape().as_list(),
[batch_size, 1, 1, 2048])
def testBuildEndPointsWithDepthMultiplierLessThanOne(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v3(inputs, num_classes)
endpoint_keys = [key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')]
_, end_points_with_multiplier = inception.inception_v3(
inputs, num_classes, scope='depth_multiplied_net',
depth_multiplier=0.5)
for key in endpoint_keys:
original_depth = end_points[key].get_shape().as_list()[3]
new_depth = end_points_with_multiplier[key].get_shape().as_list()[3]
self.assertEqual(0.5 * original_depth, new_depth)
def testBuildEndPointsWithDepthMultiplierGreaterThanOne(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v3(inputs, num_classes)
endpoint_keys = [key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')]
_, end_points_with_multiplier = inception.inception_v3(
inputs, num_classes, scope='depth_multiplied_net',
depth_multiplier=2.0)
for key in endpoint_keys:
original_depth = end_points[key].get_shape().as_list()[3]
new_depth = end_points_with_multiplier[key].get_shape().as_list()[3]
self.assertEqual(2.0 * original_depth, new_depth)
def testRaiseValueErrorWithInvalidDepthMultiplier(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
with self.assertRaises(ValueError):
_ = inception.inception_v3(inputs, num_classes, depth_multiplier=-0.1)
with self.assertRaises(ValueError):
_ = inception.inception_v3(inputs, num_classes, depth_multiplier=0.0)
def testHalfSizeImages(self):
batch_size = 5
height, width = 150, 150
num_classes = 1000
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v3(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV3/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7c']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 3, 3, 2048])
def testUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 2
height, width = 299, 299
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(
tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = inception.inception_v3(inputs, num_classes)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7c']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 8, 8, 2048])
def testGlobalPoolUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 1
height, width = 330, 400
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(
tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = inception.inception_v3(inputs, num_classes,
global_pool=True)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7c']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 8, 11, 2048])
  def testUnknownBatchSize(self):
batch_size = 1
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (None, height, width, 3))
logits, _ = inception.inception_v3(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV3/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, num_classes])
images = tf.random.uniform((batch_size, height, width, 3))
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
self.assertEquals(output.shape, (batch_size, num_classes))
def testEvaluation(self):
batch_size = 2
height, width = 299, 299
num_classes = 1000
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = inception.inception_v3(eval_inputs, num_classes,
is_training=False)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (batch_size,))
def testTrainEvalWithReuse(self):
train_batch_size = 5
eval_batch_size = 2
height, width = 150, 150
num_classes = 1000
train_inputs = tf.random.uniform((train_batch_size, height, width, 3))
inception.inception_v3(train_inputs, num_classes)
eval_inputs = tf.random.uniform((eval_batch_size, height, width, 3))
logits, _ = inception.inception_v3(eval_inputs, num_classes,
is_training=False, reuse=True)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (eval_batch_size,))
def testLogitsNotSqueezed(self):
num_classes = 25
images = tf.random.uniform([1, 299, 299, 3])
logits, _ = inception.inception_v3(images,
num_classes=num_classes,
spatial_squeeze=False)
with self.test_session() as sess:
tf.global_variables_initializer().run()
logits_out = sess.run(logits)
self.assertListEqual(list(logits_out.shape), [1, 1, 1, num_classes])
def testNoBatchNormScaleByDefault(self):
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(inception.inception_v3_arg_scope()):
inception.inception_v3(inputs, num_classes, is_training=False)
self.assertEqual(tf.global_variables('.*/BatchNorm/gamma:0$'), [])
def testBatchNormScale(self):
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (1, height, width, 3))
with slim.arg_scope(
inception.inception_v3_arg_scope(batch_norm_scale=True)):
inception.inception_v3(inputs, num_classes, is_training=False)
gamma_names = set(
v.op.name
for v in tf.global_variables('.*/BatchNorm/gamma:0$'))
self.assertGreater(len(gamma_names), 0)
for v in tf.global_variables('.*/BatchNorm/moving_mean:0$'):
self.assertIn(v.op.name[:-len('moving_mean')] + 'gamma', gamma_names)
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_v3_test.py | inception_v3_test.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition for inception v3 classification network."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception_utils
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def inception_v3_base(inputs,
final_endpoint='Mixed_7c',
min_depth=16,
depth_multiplier=1.0,
scope=None):
"""Inception model from http://arxiv.org/abs/1512.00567.
Constructs an Inception v3 network from inputs to the given final endpoint.
This method can construct the network up to the final inception block
Mixed_7c.
Note that the names of the layers in the paper do not correspond to the names
of the endpoints registered by this function although they build the same
network.
Here is a mapping from the old_names to the new names:
Old name | New name
=======================================
conv0 | Conv2d_1a_3x3
conv1 | Conv2d_2a_3x3
conv2 | Conv2d_2b_3x3
pool1 | MaxPool_3a_3x3
conv3 | Conv2d_3b_1x1
conv4 | Conv2d_4a_3x3
pool2 | MaxPool_5a_3x3
mixed_35x35x256a | Mixed_5b
mixed_35x35x288a | Mixed_5c
mixed_35x35x288b | Mixed_5d
mixed_17x17x768a | Mixed_6a
mixed_17x17x768b | Mixed_6b
mixed_17x17x768c | Mixed_6c
mixed_17x17x768d | Mixed_6d
mixed_17x17x768e | Mixed_6e
mixed_8x8x1280a | Mixed_7a
mixed_8x8x2048a | Mixed_7b
mixed_8x8x2048b | Mixed_7c
Args:
inputs: a tensor of size [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3', 'MaxPool_5a_3x3',
'Mixed_5b', 'Mixed_5c', 'Mixed_5d', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c',
'Mixed_6d', 'Mixed_6e', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c'].
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
scope: Optional variable_scope.
Returns:
tensor_out: output tensor corresponding to the final_endpoint.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or depth_multiplier <= 0
"""
# end_points will collect relevant activations for external use, for example
# summaries or losses.
end_points = {}
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
depth = lambda d: max(int(d * depth_multiplier), min_depth)
with tf.variable_scope(scope, 'InceptionV3', [inputs]):
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='VALID'):
# 299 x 299 x 3
end_point = 'Conv2d_1a_3x3'
net = slim.conv2d(inputs, depth(32), [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 149 x 149 x 32
end_point = 'Conv2d_2a_3x3'
net = slim.conv2d(net, depth(32), [3, 3], scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 147 x 147 x 32
end_point = 'Conv2d_2b_3x3'
net = slim.conv2d(net, depth(64), [3, 3], padding='SAME', scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 147 x 147 x 64
end_point = 'MaxPool_3a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 73 x 73 x 64
end_point = 'Conv2d_3b_1x1'
net = slim.conv2d(net, depth(80), [1, 1], scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 73 x 73 x 80.
end_point = 'Conv2d_4a_3x3'
net = slim.conv2d(net, depth(192), [3, 3], scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 71 x 71 x 192.
end_point = 'MaxPool_5a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 35 x 35 x 192.
# Inception blocks
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# mixed: 35 x 35 x 256.
end_point = 'Mixed_5b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(64), [5, 5],
scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(32), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_1: 35 x 35 x 288.
end_point = 'Mixed_5c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0b_1x1')
branch_1 = slim.conv2d(branch_1, depth(64), [5, 5],
scope='Conv_1_0c_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(64), [1, 1],
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(64), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_2: 35 x 35 x 288.
end_point = 'Mixed_5d'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(64), [5, 5],
scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(64), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_3: 17 x 17 x 768.
end_point = 'Mixed_6a'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(384), [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_1 = slim.conv2d(branch_1, depth(96), [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_1x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed4: 17 x 17 x 768.
end_point = 'Mixed_6b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(128), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(128), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(128), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(128), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(128), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(128), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_5: 17 x 17 x 768.
end_point = 'Mixed_6c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(160), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(160), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_6: 17 x 17 x 768.
end_point = 'Mixed_6d'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(160), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(160), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_7: 17 x 17 x 768.
end_point = 'Mixed_6e'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(192), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(192), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(192), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_8: 8 x 8 x 1280.
end_point = 'Mixed_7a'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, depth(320), [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(192), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
branch_1 = slim.conv2d(branch_1, depth(192), [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_9: 8 x 8 x 2048.
end_point = 'Mixed_7b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(320), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(384), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = tf.concat(axis=3, values=[
slim.conv2d(branch_1, depth(384), [1, 3], scope='Conv2d_0b_1x3'),
slim.conv2d(branch_1, depth(384), [3, 1], scope='Conv2d_0b_3x1')])
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(448), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(
branch_2, depth(384), [3, 3], scope='Conv2d_0b_3x3')
branch_2 = tf.concat(axis=3, values=[
slim.conv2d(branch_2, depth(384), [1, 3], scope='Conv2d_0c_1x3'),
slim.conv2d(branch_2, depth(384), [3, 1], scope='Conv2d_0d_3x1')])
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(192), [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_10: 8 x 8 x 2048.
end_point = 'Mixed_7c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(320), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(384), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = tf.concat(axis=3, values=[
slim.conv2d(branch_1, depth(384), [1, 3], scope='Conv2d_0b_1x3'),
slim.conv2d(branch_1, depth(384), [3, 1], scope='Conv2d_0c_3x1')])
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(448), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(
branch_2, depth(384), [3, 3], scope='Conv2d_0b_3x3')
branch_2 = tf.concat(axis=3, values=[
slim.conv2d(branch_2, depth(384), [1, 3], scope='Conv2d_0c_1x3'),
slim.conv2d(branch_2, depth(384), [3, 1], scope='Conv2d_0d_3x1')])
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(192), [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
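# Illustrative sketch (not part of the original file; the helper name is
# hypothetical): building only the convolutional base up to 'Mixed_6e', e.g.
# to reuse it as a feature extractor. With a 299x299 input the returned tensor
# has shape [1, 17, 17, 768].
def _example_inception_v3_base():
  images = tf.placeholder(tf.float32, [1, 299, 299, 3])
  with slim.arg_scope(inception_utils.inception_arg_scope()):
    net, end_points = inception_v3_base(images, final_endpoint='Mixed_6e')
  return net, end_points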
def inception_v3(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.8,
min_depth=16,
depth_multiplier=1.0,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
create_aux_logits=True,
scope='InceptionV3',
global_pool=False):
"""Inception model from http://arxiv.org/abs/1512.00567.
"Rethinking the Inception Architecture for Computer Vision"
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens,
Zbigniew Wojna.
With the default arguments this method constructs the exact model defined in
the paper. However, one can experiment with variations of the inception_v3
network by changing arguments dropout_keep_prob, min_depth and
depth_multiplier.
The default image size used to train this network is 299x299.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
is_training: whether is training or not.
dropout_keep_prob: the percentage of activation values that are retained.
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
prediction_fn: a function to get predictions out of logits.
spatial_squeeze: if True, logits is of shape [B, C], if false logits is of
shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
create_aux_logits: Whether to create the auxiliary logits.
scope: Optional variable_scope.
global_pool: Optional boolean flag to control the avgpooling before the
logits layer. If false or unset, pooling is done with a fixed window
that reduces default-sized inputs to 1x1, while larger inputs lead to
larger outputs. If true, any input size is pooled down to 1x1.
Returns:
net: a Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: if 'depth_multiplier' is less than or equal to zero.
"""
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
depth = lambda d: max(int(d * depth_multiplier), min_depth)
with tf.variable_scope(
scope, 'InceptionV3', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v3_base(
inputs, scope=scope, min_depth=min_depth,
depth_multiplier=depth_multiplier)
# Auxiliary Head logits
if create_aux_logits and num_classes:
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
aux_logits = end_points['Mixed_6e']
with tf.variable_scope('AuxLogits'):
aux_logits = slim.avg_pool2d(
aux_logits, [5, 5], stride=3, padding='VALID',
scope='AvgPool_1a_5x5')
aux_logits = slim.conv2d(aux_logits, depth(128), [1, 1],
scope='Conv2d_1b_1x1')
# Shape of feature map before the final layer.
kernel_size = _reduced_kernel_size_for_small_input(
aux_logits, [5, 5])
aux_logits = slim.conv2d(
aux_logits, depth(768), kernel_size,
weights_initializer=trunc_normal(0.01),
padding='VALID', scope='Conv2d_2a_{}x{}'.format(*kernel_size))
aux_logits = slim.conv2d(
aux_logits, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, weights_initializer=trunc_normal(0.001),
scope='Conv2d_2b_1x1')
if spatial_squeeze:
aux_logits = tf.squeeze(aux_logits, [1, 2], name='SpatialSqueeze')
end_points['AuxLogits'] = aux_logits
# Final pooling and prediction
with tf.variable_scope('Logits'):
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='GlobalPool')
end_points['global_pool'] = net
else:
# Pooling with a fixed kernel size.
kernel_size = _reduced_kernel_size_for_small_input(net, [8, 8])
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a_{}x{}'.format(*kernel_size))
end_points['AvgPool_1a'] = net
if not num_classes:
return net, end_points
# 1 x 1 x 2048
net = slim.dropout(net, keep_prob=dropout_keep_prob, scope='Dropout_1b')
end_points['PreLogits'] = net
# 2048
logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='Conv2d_1c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
# 1000
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
inception_v3.default_image_size = 299
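# Illustrative sketch (not part of the original file; the helper name is
# hypothetical): a typical inference graph at the default 299x299 input size.
def _example_inception_v3():
  images = tf.placeholder(tf.float32, [8, 299, 299, 3])
  with slim.arg_scope(inception_utils.inception_arg_scope()):
    logits, end_points = inception_v3(images, num_classes=1000,
                                      is_training=False)
  return logits, end_points['Predictions']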
def _reduced_kernel_size_for_small_input(input_tensor, kernel_size):
"""Define kernel size which is automatically reduced for small input.
If the shape of the input images is unknown at graph construction time this
  function assumes that the input images are large enough.
Args:
input_tensor: input tensor of size [batch_size, height, width, channels].
kernel_size: desired kernel size of length 2: [kernel_height, kernel_width]
Returns:
a tensor with the kernel size.
TODO(jrru): Make this function work with unknown shapes. Theoretically, this
can be done with the code below. Problems are two-fold: (1) If the shape was
known, it will be lost. (2) inception.slim.ops._two_element_tuple cannot
handle tensors that define the kernel size.
shape = tf.shape(input_tensor)
      return tf.stack([tf.minimum(shape[1], kernel_size[0]),
                       tf.minimum(shape[2], kernel_size[1])])
"""
shape = input_tensor.get_shape().as_list()
if shape[1] is None or shape[2] is None:
kernel_size_out = kernel_size
else:
kernel_size_out = [min(shape[1], kernel_size[0]),
min(shape[2], kernel_size[1])]
return kernel_size_out
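# Worked example (sketch, not part of the original file; the helper name is
# hypothetical): a 5x5 spatial input with a desired [8, 8] kernel is reduced
# to [5, 5], so the following pooling layer still collapses the map to 1x1.
def _example_reduced_kernel_size():
  small = tf.placeholder(tf.float32, [2, 5, 5, 768])
  return _reduced_kernel_size_for_small_input(small, [8, 8])  # == [5, 5]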
inception_v3_arg_scope = inception_utils.inception_arg_scope
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_v3.py | inception_v3.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains a variant of the LeNet model definition."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
def lenet(images, num_classes=10, is_training=False,
dropout_keep_prob=0.5,
prediction_fn=slim.softmax,
scope='LeNet'):
"""Creates a variant of the LeNet model.
Note that since the output is a set of 'logits', the values fall in the
interval of (-infinity, infinity). Consequently, to convert the outputs to a
probability distribution over the characters, one will need to convert them
using the softmax function:
logits = lenet.lenet(images, is_training=False)
probabilities = tf.nn.softmax(logits)
predictions = tf.argmax(logits, 1)
Args:
images: A batch of `Tensors` of size [batch_size, height, width, channels].
num_classes: the number of classes in the dataset. If 0 or None, the logits
layer is omitted and the input features to the logits layer are returned
instead.
is_training: specifies whether or not we're currently training the model.
This variable will determine the behaviour of the dropout layer.
dropout_keep_prob: the percentage of activation values that are retained.
prediction_fn: a function to get predictions out of logits.
scope: Optional variable_scope.
Returns:
net: a 2D Tensor with the logits (pre-softmax activations) if num_classes
      is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
"""
end_points = {}
with tf.variable_scope(scope, 'LeNet', [images]):
net = end_points['conv1'] = slim.conv2d(images, 32, [5, 5], scope='conv1')
net = end_points['pool1'] = slim.max_pool2d(net, [2, 2], 2, scope='pool1')
net = end_points['conv2'] = slim.conv2d(net, 64, [5, 5], scope='conv2')
net = end_points['pool2'] = slim.max_pool2d(net, [2, 2], 2, scope='pool2')
net = slim.flatten(net)
end_points['Flatten'] = net
net = end_points['fc3'] = slim.fully_connected(net, 1024, scope='fc3')
if not num_classes:
return net, end_points
net = end_points['dropout3'] = slim.dropout(
net, dropout_keep_prob, is_training=is_training, scope='dropout3')
logits = end_points['Logits'] = slim.fully_connected(
net, num_classes, activation_fn=None, scope='fc4')
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
lenet.default_image_size = 28
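# Illustrative sketch (not part of the original file; the helper name is
# hypothetical): LeNet on a batch of 28x28 grayscale images, converting the
# logits to class probabilities as described in the docstring above.
def _example_lenet():
  images = tf.placeholder(tf.float32, [32, 28, 28, 1])
  logits, _ = lenet(images, num_classes=10, is_training=False)
  return tf.nn.softmax(logits)  # probabilities, shape [32, 10]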
def lenet_arg_scope(weight_decay=0.0):
"""Defines the default lenet argument scope.
Args:
weight_decay: The weight decay to use for regularizing the model.
Returns:
    An `arg_scope` to use for the lenet model.
"""
with slim.arg_scope(
[slim.conv2d, slim.fully_connected],
weights_regularizer=slim.l2_regularizer(weight_decay),
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
activation_fn=tf.nn.relu) as sc:
return sc
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/lenet.py | lenet.py |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for networks.s3dg."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import six
import tensorflow.compat.v1 as tf
from nets import s3dg
class S3DGTest(tf.test.TestCase):
def testBuildClassificationNetwork(self):
batch_size = 5
num_frames = 64
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
logits, end_points = s3dg.s3dg(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Predictions' in end_points)
self.assertListEqual(end_points['Predictions'].get_shape().as_list(),
[batch_size, num_classes])
def testBuildBaseNetwork(self):
batch_size = 5
num_frames = 64
height, width = 224, 224
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
    mixed_5c, end_points = s3dg.s3dg_base(inputs)
    self.assertTrue(mixed_5c.op.name.startswith('InceptionV1/Mixed_5c'))
    self.assertListEqual(mixed_5c.get_shape().as_list(),
                         [batch_size, 8, 7, 7, 1024])
expected_endpoints = ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b',
'Mixed_3c', 'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c',
'Mixed_4d', 'Mixed_4e', 'Mixed_4f', 'MaxPool_5a_2x2',
'Mixed_5b', 'Mixed_5c']
self.assertItemsEqual(list(end_points.keys()), expected_endpoints)
def testBuildOnlyUptoFinalEndpointNoGating(self):
batch_size = 5
num_frames = 64
height, width = 224, 224
endpoints = ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d',
'Mixed_4e', 'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b',
'Mixed_5c']
for index, endpoint in enumerate(endpoints):
with tf.Graph().as_default():
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
out_tensor, end_points = s3dg.s3dg_base(
inputs, final_endpoint=endpoint, gating_startat=None)
print(endpoint, out_tensor.op.name)
self.assertTrue(out_tensor.op.name.startswith(
'InceptionV1/' + endpoint))
self.assertItemsEqual(endpoints[:index+1], end_points)
def testBuildAndCheckAllEndPointsUptoMixed5c(self):
batch_size = 5
num_frames = 64
height, width = 224, 224
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
_, end_points = s3dg.s3dg_base(inputs,
final_endpoint='Mixed_5c')
endpoints_shapes = {'Conv2d_1a_7x7': [5, 32, 112, 112, 64],
'MaxPool_2a_3x3': [5, 32, 56, 56, 64],
'Conv2d_2b_1x1': [5, 32, 56, 56, 64],
'Conv2d_2c_3x3': [5, 32, 56, 56, 192],
'MaxPool_3a_3x3': [5, 32, 28, 28, 192],
'Mixed_3b': [5, 32, 28, 28, 256],
'Mixed_3c': [5, 32, 28, 28, 480],
'MaxPool_4a_3x3': [5, 16, 14, 14, 480],
'Mixed_4b': [5, 16, 14, 14, 512],
'Mixed_4c': [5, 16, 14, 14, 512],
'Mixed_4d': [5, 16, 14, 14, 512],
'Mixed_4e': [5, 16, 14, 14, 528],
'Mixed_4f': [5, 16, 14, 14, 832],
'MaxPool_5a_2x2': [5, 8, 7, 7, 832],
'Mixed_5b': [5, 8, 7, 7, 832],
'Mixed_5c': [5, 8, 7, 7, 1024]}
self.assertItemsEqual(
list(endpoints_shapes.keys()), list(end_points.keys()))
for endpoint_name, expected_shape in six.iteritems(endpoints_shapes):
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testHalfSizeImages(self):
batch_size = 5
num_frames = 64
height, width = 112, 112
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
mixed_5c, _ = s3dg.s3dg_base(inputs)
self.assertTrue(mixed_5c.op.name.startswith('InceptionV1/Mixed_5c'))
self.assertListEqual(mixed_5c.get_shape().as_list(),
[batch_size, 8, 4, 4, 1024])
def testTenFrames(self):
batch_size = 5
num_frames = 10
height, width = 224, 224
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
mixed_5c, _ = s3dg.s3dg_base(inputs)
self.assertTrue(mixed_5c.op.name.startswith('InceptionV1/Mixed_5c'))
self.assertListEqual(mixed_5c.get_shape().as_list(),
[batch_size, 2, 7, 7, 1024])
def testEvaluation(self):
batch_size = 2
num_frames = 64
height, width = 224, 224
num_classes = 1000
eval_inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
logits, _ = s3dg.s3dg(eval_inputs, num_classes,
is_training=False)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (batch_size,))
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/s3dg_test.py | s3dg_test.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.nets.vgg."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import vgg
class VGGATest(tf.test.TestCase):
def testBuild(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(inputs, num_classes)
self.assertEquals(logits.op.name, 'vgg_a/fc8/squeezed')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testFullyConvolutional(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(inputs, num_classes, spatial_squeeze=False)
self.assertEquals(logits.op.name, 'vgg_a/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 2, 2, num_classes])
def testGlobalPool(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(inputs, num_classes, spatial_squeeze=False,
global_pool=True)
self.assertEquals(logits.op.name, 'vgg_a/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 1, 1, num_classes])
def testEndPoints(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = vgg.vgg_a(inputs, num_classes)
expected_names = ['vgg_a/conv1/conv1_1',
'vgg_a/pool1',
'vgg_a/conv2/conv2_1',
'vgg_a/pool2',
'vgg_a/conv3/conv3_1',
'vgg_a/conv3/conv3_2',
'vgg_a/pool3',
'vgg_a/conv4/conv4_1',
'vgg_a/conv4/conv4_2',
'vgg_a/pool4',
'vgg_a/conv5/conv5_1',
'vgg_a/conv5/conv5_2',
'vgg_a/pool5',
'vgg_a/fc6',
'vgg_a/fc7',
'vgg_a/fc8'
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
def testNoClasses(self):
batch_size = 5
height, width = 224, 224
num_classes = None
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = vgg.vgg_a(inputs, num_classes)
expected_names = ['vgg_a/conv1/conv1_1',
'vgg_a/pool1',
'vgg_a/conv2/conv2_1',
'vgg_a/pool2',
'vgg_a/conv3/conv3_1',
'vgg_a/conv3/conv3_2',
'vgg_a/pool3',
'vgg_a/conv4/conv4_1',
'vgg_a/conv4/conv4_2',
'vgg_a/pool4',
'vgg_a/conv5/conv5_1',
'vgg_a/conv5/conv5_2',
'vgg_a/pool5',
'vgg_a/fc6',
'vgg_a/fc7',
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
self.assertTrue(net.op.name.startswith('vgg_a/fc7'))
def testModelVariables(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
vgg.vgg_a(inputs, num_classes)
expected_names = ['vgg_a/conv1/conv1_1/weights',
'vgg_a/conv1/conv1_1/biases',
'vgg_a/conv2/conv2_1/weights',
'vgg_a/conv2/conv2_1/biases',
'vgg_a/conv3/conv3_1/weights',
'vgg_a/conv3/conv3_1/biases',
'vgg_a/conv3/conv3_2/weights',
'vgg_a/conv3/conv3_2/biases',
'vgg_a/conv4/conv4_1/weights',
'vgg_a/conv4/conv4_1/biases',
'vgg_a/conv4/conv4_2/weights',
'vgg_a/conv4/conv4_2/biases',
'vgg_a/conv5/conv5_1/weights',
'vgg_a/conv5/conv5_1/biases',
'vgg_a/conv5/conv5_2/weights',
'vgg_a/conv5/conv5_2/biases',
'vgg_a/fc6/weights',
'vgg_a/fc6/biases',
'vgg_a/fc7/weights',
'vgg_a/fc7/biases',
'vgg_a/fc8/weights',
'vgg_a/fc8/biases',
]
model_variables = [v.op.name for v in slim.get_model_variables()]
self.assertSetEqual(set(model_variables), set(expected_names))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
with self.test_session():
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(eval_inputs, is_training=False)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
predictions = tf.argmax(input=logits, axis=1)
self.assertListEqual(predictions.get_shape().as_list(), [batch_size])
def testTrainEvalWithReuse(self):
train_batch_size = 2
eval_batch_size = 1
train_height, train_width = 224, 224
eval_height, eval_width = 256, 256
num_classes = 1000
with self.test_session():
train_inputs = tf.random.uniform(
(train_batch_size, train_height, train_width, 3))
logits, _ = vgg.vgg_a(train_inputs)
self.assertListEqual(logits.get_shape().as_list(),
[train_batch_size, num_classes])
tf.get_variable_scope().reuse_variables()
eval_inputs = tf.random.uniform(
(eval_batch_size, eval_height, eval_width, 3))
logits, _ = vgg.vgg_a(eval_inputs, is_training=False,
spatial_squeeze=False)
self.assertListEqual(logits.get_shape().as_list(),
[eval_batch_size, 2, 2, num_classes])
logits = tf.reduce_mean(input_tensor=logits, axis=[1, 2])
predictions = tf.argmax(input=logits, axis=1)
self.assertEquals(predictions.get_shape().as_list(), [eval_batch_size])
def testForward(self):
batch_size = 1
height, width = 224, 224
with self.test_session() as sess:
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_a(inputs)
sess.run(tf.global_variables_initializer())
output = sess.run(logits)
self.assertTrue(output.any())
class VGG16Test(tf.test.TestCase):
def testBuild(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(inputs, num_classes)
self.assertEquals(logits.op.name, 'vgg_16/fc8/squeezed')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testFullyConvolutional(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(inputs, num_classes, spatial_squeeze=False)
self.assertEquals(logits.op.name, 'vgg_16/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 2, 2, num_classes])
def testGlobalPool(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(inputs, num_classes, spatial_squeeze=False,
global_pool=True)
self.assertEquals(logits.op.name, 'vgg_16/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 1, 1, num_classes])
def testEndPoints(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = vgg.vgg_16(inputs, num_classes)
expected_names = ['vgg_16/conv1/conv1_1',
'vgg_16/conv1/conv1_2',
'vgg_16/pool1',
'vgg_16/conv2/conv2_1',
'vgg_16/conv2/conv2_2',
'vgg_16/pool2',
'vgg_16/conv3/conv3_1',
'vgg_16/conv3/conv3_2',
'vgg_16/conv3/conv3_3',
'vgg_16/pool3',
'vgg_16/conv4/conv4_1',
'vgg_16/conv4/conv4_2',
'vgg_16/conv4/conv4_3',
'vgg_16/pool4',
'vgg_16/conv5/conv5_1',
'vgg_16/conv5/conv5_2',
'vgg_16/conv5/conv5_3',
'vgg_16/pool5',
'vgg_16/fc6',
'vgg_16/fc7',
'vgg_16/fc8'
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
def testNoClasses(self):
batch_size = 5
height, width = 224, 224
num_classes = None
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = vgg.vgg_16(inputs, num_classes)
expected_names = ['vgg_16/conv1/conv1_1',
'vgg_16/conv1/conv1_2',
'vgg_16/pool1',
'vgg_16/conv2/conv2_1',
'vgg_16/conv2/conv2_2',
'vgg_16/pool2',
'vgg_16/conv3/conv3_1',
'vgg_16/conv3/conv3_2',
'vgg_16/conv3/conv3_3',
'vgg_16/pool3',
'vgg_16/conv4/conv4_1',
'vgg_16/conv4/conv4_2',
'vgg_16/conv4/conv4_3',
'vgg_16/pool4',
'vgg_16/conv5/conv5_1',
'vgg_16/conv5/conv5_2',
'vgg_16/conv5/conv5_3',
'vgg_16/pool5',
'vgg_16/fc6',
'vgg_16/fc7',
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
self.assertTrue(net.op.name.startswith('vgg_16/fc7'))
def testModelVariables(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
vgg.vgg_16(inputs, num_classes)
expected_names = ['vgg_16/conv1/conv1_1/weights',
'vgg_16/conv1/conv1_1/biases',
'vgg_16/conv1/conv1_2/weights',
'vgg_16/conv1/conv1_2/biases',
'vgg_16/conv2/conv2_1/weights',
'vgg_16/conv2/conv2_1/biases',
'vgg_16/conv2/conv2_2/weights',
'vgg_16/conv2/conv2_2/biases',
'vgg_16/conv3/conv3_1/weights',
'vgg_16/conv3/conv3_1/biases',
'vgg_16/conv3/conv3_2/weights',
'vgg_16/conv3/conv3_2/biases',
'vgg_16/conv3/conv3_3/weights',
'vgg_16/conv3/conv3_3/biases',
'vgg_16/conv4/conv4_1/weights',
'vgg_16/conv4/conv4_1/biases',
'vgg_16/conv4/conv4_2/weights',
'vgg_16/conv4/conv4_2/biases',
'vgg_16/conv4/conv4_3/weights',
'vgg_16/conv4/conv4_3/biases',
'vgg_16/conv5/conv5_1/weights',
'vgg_16/conv5/conv5_1/biases',
'vgg_16/conv5/conv5_2/weights',
'vgg_16/conv5/conv5_2/biases',
'vgg_16/conv5/conv5_3/weights',
'vgg_16/conv5/conv5_3/biases',
'vgg_16/fc6/weights',
'vgg_16/fc6/biases',
'vgg_16/fc7/weights',
'vgg_16/fc7/biases',
'vgg_16/fc8/weights',
'vgg_16/fc8/biases',
]
model_variables = [v.op.name for v in slim.get_model_variables()]
self.assertSetEqual(set(model_variables), set(expected_names))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
with self.test_session():
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(eval_inputs, is_training=False)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
predictions = tf.argmax(input=logits, axis=1)
self.assertListEqual(predictions.get_shape().as_list(), [batch_size])
def testTrainEvalWithReuse(self):
train_batch_size = 2
eval_batch_size = 1
train_height, train_width = 224, 224
eval_height, eval_width = 256, 256
num_classes = 1000
with self.test_session():
train_inputs = tf.random.uniform(
(train_batch_size, train_height, train_width, 3))
logits, _ = vgg.vgg_16(train_inputs)
self.assertListEqual(logits.get_shape().as_list(),
[train_batch_size, num_classes])
tf.get_variable_scope().reuse_variables()
eval_inputs = tf.random.uniform(
(eval_batch_size, eval_height, eval_width, 3))
logits, _ = vgg.vgg_16(eval_inputs, is_training=False,
spatial_squeeze=False)
self.assertListEqual(logits.get_shape().as_list(),
[eval_batch_size, 2, 2, num_classes])
logits = tf.reduce_mean(input_tensor=logits, axis=[1, 2])
predictions = tf.argmax(input=logits, axis=1)
self.assertEquals(predictions.get_shape().as_list(), [eval_batch_size])
def testForward(self):
batch_size = 1
height, width = 224, 224
with self.test_session() as sess:
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_16(inputs)
sess.run(tf.global_variables_initializer())
output = sess.run(logits)
self.assertTrue(output.any())
class VGG19Test(tf.test.TestCase):
def testBuild(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(inputs, num_classes)
self.assertEquals(logits.op.name, 'vgg_19/fc8/squeezed')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
def testFullyConvolutional(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(inputs, num_classes, spatial_squeeze=False)
self.assertEquals(logits.op.name, 'vgg_19/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 2, 2, num_classes])
def testGlobalPool(self):
batch_size = 1
height, width = 256, 256
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(inputs, num_classes, spatial_squeeze=False,
global_pool=True)
self.assertEquals(logits.op.name, 'vgg_19/fc8/BiasAdd')
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, 1, 1, num_classes])
def testEndPoints(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
_, end_points = vgg.vgg_19(inputs, num_classes)
expected_names = [
'vgg_19/conv1/conv1_1',
'vgg_19/conv1/conv1_2',
'vgg_19/pool1',
'vgg_19/conv2/conv2_1',
'vgg_19/conv2/conv2_2',
'vgg_19/pool2',
'vgg_19/conv3/conv3_1',
'vgg_19/conv3/conv3_2',
'vgg_19/conv3/conv3_3',
'vgg_19/conv3/conv3_4',
'vgg_19/pool3',
'vgg_19/conv4/conv4_1',
'vgg_19/conv4/conv4_2',
'vgg_19/conv4/conv4_3',
'vgg_19/conv4/conv4_4',
'vgg_19/pool4',
'vgg_19/conv5/conv5_1',
'vgg_19/conv5/conv5_2',
'vgg_19/conv5/conv5_3',
'vgg_19/conv5/conv5_4',
'vgg_19/pool5',
'vgg_19/fc6',
'vgg_19/fc7',
'vgg_19/fc8'
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
def testNoClasses(self):
batch_size = 5
height, width = 224, 224
num_classes = None
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
net, end_points = vgg.vgg_19(inputs, num_classes)
expected_names = [
'vgg_19/conv1/conv1_1',
'vgg_19/conv1/conv1_2',
'vgg_19/pool1',
'vgg_19/conv2/conv2_1',
'vgg_19/conv2/conv2_2',
'vgg_19/pool2',
'vgg_19/conv3/conv3_1',
'vgg_19/conv3/conv3_2',
'vgg_19/conv3/conv3_3',
'vgg_19/conv3/conv3_4',
'vgg_19/pool3',
'vgg_19/conv4/conv4_1',
'vgg_19/conv4/conv4_2',
'vgg_19/conv4/conv4_3',
'vgg_19/conv4/conv4_4',
'vgg_19/pool4',
'vgg_19/conv5/conv5_1',
'vgg_19/conv5/conv5_2',
'vgg_19/conv5/conv5_3',
'vgg_19/conv5/conv5_4',
'vgg_19/pool5',
'vgg_19/fc6',
'vgg_19/fc7',
]
self.assertSetEqual(set(end_points.keys()), set(expected_names))
self.assertTrue(net.op.name.startswith('vgg_19/fc7'))
def testModelVariables(self):
batch_size = 5
height, width = 224, 224
num_classes = 1000
with self.test_session():
inputs = tf.random.uniform((batch_size, height, width, 3))
vgg.vgg_19(inputs, num_classes)
expected_names = [
'vgg_19/conv1/conv1_1/weights',
'vgg_19/conv1/conv1_1/biases',
'vgg_19/conv1/conv1_2/weights',
'vgg_19/conv1/conv1_2/biases',
'vgg_19/conv2/conv2_1/weights',
'vgg_19/conv2/conv2_1/biases',
'vgg_19/conv2/conv2_2/weights',
'vgg_19/conv2/conv2_2/biases',
'vgg_19/conv3/conv3_1/weights',
'vgg_19/conv3/conv3_1/biases',
'vgg_19/conv3/conv3_2/weights',
'vgg_19/conv3/conv3_2/biases',
'vgg_19/conv3/conv3_3/weights',
'vgg_19/conv3/conv3_3/biases',
'vgg_19/conv3/conv3_4/weights',
'vgg_19/conv3/conv3_4/biases',
'vgg_19/conv4/conv4_1/weights',
'vgg_19/conv4/conv4_1/biases',
'vgg_19/conv4/conv4_2/weights',
'vgg_19/conv4/conv4_2/biases',
'vgg_19/conv4/conv4_3/weights',
'vgg_19/conv4/conv4_3/biases',
'vgg_19/conv4/conv4_4/weights',
'vgg_19/conv4/conv4_4/biases',
'vgg_19/conv5/conv5_1/weights',
'vgg_19/conv5/conv5_1/biases',
'vgg_19/conv5/conv5_2/weights',
'vgg_19/conv5/conv5_2/biases',
'vgg_19/conv5/conv5_3/weights',
'vgg_19/conv5/conv5_3/biases',
'vgg_19/conv5/conv5_4/weights',
'vgg_19/conv5/conv5_4/biases',
'vgg_19/fc6/weights',
'vgg_19/fc6/biases',
'vgg_19/fc7/weights',
'vgg_19/fc7/biases',
'vgg_19/fc8/weights',
'vgg_19/fc8/biases',
]
model_variables = [v.op.name for v in slim.get_model_variables()]
self.assertSetEqual(set(model_variables), set(expected_names))
def testEvaluation(self):
batch_size = 2
height, width = 224, 224
num_classes = 1000
with self.test_session():
eval_inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(eval_inputs, is_training=False)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
predictions = tf.argmax(input=logits, axis=1)
self.assertListEqual(predictions.get_shape().as_list(), [batch_size])
def testTrainEvalWithReuse(self):
train_batch_size = 2
eval_batch_size = 1
train_height, train_width = 224, 224
eval_height, eval_width = 256, 256
num_classes = 1000
with self.test_session():
train_inputs = tf.random.uniform(
(train_batch_size, train_height, train_width, 3))
logits, _ = vgg.vgg_19(train_inputs)
self.assertListEqual(logits.get_shape().as_list(),
[train_batch_size, num_classes])
tf.get_variable_scope().reuse_variables()
eval_inputs = tf.random.uniform(
(eval_batch_size, eval_height, eval_width, 3))
logits, _ = vgg.vgg_19(eval_inputs, is_training=False,
spatial_squeeze=False)
self.assertListEqual(logits.get_shape().as_list(),
[eval_batch_size, 2, 2, num_classes])
logits = tf.reduce_mean(input_tensor=logits, axis=[1, 2])
predictions = tf.argmax(input=logits, axis=1)
self.assertEquals(predictions.get_shape().as_list(), [eval_batch_size])
def testForward(self):
batch_size = 1
height, width = 224, 224
with self.test_session() as sess:
inputs = tf.random.uniform((batch_size, height, width, 3))
logits, _ = vgg.vgg_19(inputs)
sess.run(tf.global_variables_initializer())
output = sess.run(logits)
self.assertTrue(output.any())
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/vgg_test.py | vgg_test.py |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for networks.i3d."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import six
import tensorflow.compat.v1 as tf
from nets import i3d
class I3DTest(tf.test.TestCase):
def testBuildClassificationNetwork(self):
batch_size = 5
num_frames = 64
height, width = 224, 224
num_classes = 1000
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
logits, end_points = i3d.i3d(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV1/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Predictions' in end_points)
self.assertListEqual(end_points['Predictions'].get_shape().as_list(),
[batch_size, num_classes])
def testBuildBaseNetwork(self):
batch_size = 5
num_frames = 64
height, width = 224, 224
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
    mixed_5c, end_points = i3d.i3d_base(inputs)
    self.assertTrue(mixed_5c.op.name.startswith('InceptionV1/Mixed_5c'))
    self.assertListEqual(mixed_5c.get_shape().as_list(),
                         [batch_size, 8, 7, 7, 1024])
expected_endpoints = ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b',
'Mixed_3c', 'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c',
'Mixed_4d', 'Mixed_4e', 'Mixed_4f', 'MaxPool_5a_2x2',
'Mixed_5b', 'Mixed_5c']
self.assertItemsEqual(list(end_points.keys()), expected_endpoints)
def testBuildOnlyUptoFinalEndpoint(self):
batch_size = 5
num_frames = 64
height, width = 224, 224
endpoints = ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d',
'Mixed_4e', 'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b',
'Mixed_5c']
for index, endpoint in enumerate(endpoints):
with tf.Graph().as_default():
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
out_tensor, end_points = i3d.i3d_base(
inputs, final_endpoint=endpoint)
self.assertTrue(out_tensor.op.name.startswith(
'InceptionV1/' + endpoint))
self.assertItemsEqual(endpoints[:index+1], end_points)
def testBuildAndCheckAllEndPointsUptoMixed5c(self):
batch_size = 5
num_frames = 64
height, width = 224, 224
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
_, end_points = i3d.i3d_base(inputs,
final_endpoint='Mixed_5c')
endpoints_shapes = {'Conv2d_1a_7x7': [5, 32, 112, 112, 64],
'MaxPool_2a_3x3': [5, 32, 56, 56, 64],
'Conv2d_2b_1x1': [5, 32, 56, 56, 64],
'Conv2d_2c_3x3': [5, 32, 56, 56, 192],
'MaxPool_3a_3x3': [5, 32, 28, 28, 192],
'Mixed_3b': [5, 32, 28, 28, 256],
'Mixed_3c': [5, 32, 28, 28, 480],
'MaxPool_4a_3x3': [5, 16, 14, 14, 480],
'Mixed_4b': [5, 16, 14, 14, 512],
'Mixed_4c': [5, 16, 14, 14, 512],
'Mixed_4d': [5, 16, 14, 14, 512],
'Mixed_4e': [5, 16, 14, 14, 528],
'Mixed_4f': [5, 16, 14, 14, 832],
'MaxPool_5a_2x2': [5, 8, 7, 7, 832],
'Mixed_5b': [5, 8, 7, 7, 832],
'Mixed_5c': [5, 8, 7, 7, 1024]}
self.assertItemsEqual(
list(endpoints_shapes.keys()), list(end_points.keys()))
for endpoint_name, expected_shape in six.iteritems(endpoints_shapes):
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testHalfSizeImages(self):
batch_size = 5
num_frames = 64
height, width = 112, 112
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
mixed_5c, _ = i3d.i3d_base(inputs)
self.assertTrue(mixed_5c.op.name.startswith('InceptionV1/Mixed_5c'))
self.assertListEqual(mixed_5c.get_shape().as_list(),
[batch_size, 8, 4, 4, 1024])
def testTenFrames(self):
batch_size = 5
num_frames = 10
height, width = 224, 224
inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
mixed_5c, _ = i3d.i3d_base(inputs)
self.assertTrue(mixed_5c.op.name.startswith('InceptionV1/Mixed_5c'))
self.assertListEqual(mixed_5c.get_shape().as_list(),
[batch_size, 2, 7, 7, 1024])
def testEvaluation(self):
batch_size = 2
num_frames = 64
height, width = 224, 224
num_classes = 1000
eval_inputs = tf.random.uniform((batch_size, num_frames, height, width, 3))
logits, _ = i3d.i3d(eval_inputs, num_classes,
is_training=False)
predictions = tf.argmax(input=logits, axis=1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEquals(output.shape, (batch_size,))
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/i3d_test.py | i3d_test.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains definitions for the original form of Residual Networks.
The 'v1' residual networks (ResNets) implemented in this module were proposed
by:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385
Other variants were introduced in:
[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Identity Mappings in Deep Residual Networks. arXiv: 1603.05027
The networks defined in this module utilize the bottleneck building block of
[1] with projection shortcuts only for increasing depths. They employ batch
normalization *after* every weight layer. This is the architecture used by
MSRA in the Imagenet and MSCOCO 2016 competition models ResNet-101 and
ResNet-152. See [2; Fig. 1a] for a comparison between the current 'v1'
architecture and the alternative 'v2' architecture of [2] which uses batch
normalization *before* every weight layer in the so-called full pre-activation
units.
Typical use:
from tf_slim.nets import resnet_v1
ResNet-101 for image classification into 1000 classes:
# inputs has shape [batch, 224, 224, 3]
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
net, end_points = resnet_v1.resnet_v1_101(inputs, 1000, is_training=False)
ResNet-101 for semantic segmentation into 21 classes:
# inputs has shape [batch, 513, 513, 3]
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
net, end_points = resnet_v1.resnet_v1_101(inputs,
21,
is_training=False,
global_pool=False,
output_stride=16)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import resnet_utils
resnet_arg_scope = resnet_utils.resnet_arg_scope
class NoOpScope(object):
"""No-op context manager."""
def __enter__(self):
return None
def __exit__(self, exc_type, exc_value, traceback):
return False
@slim.add_arg_scope
def bottleneck(inputs,
depth,
depth_bottleneck,
stride,
rate=1,
outputs_collections=None,
scope=None,
use_bounded_activations=False):
"""Bottleneck residual unit variant with BN after convolutions.
This is the original residual unit proposed in [1]. See Fig. 1(a) of [2] for
its definition. Note that we use here the bottleneck variant which has an
extra bottleneck layer.
When putting together two consecutive ResNet blocks that use this unit, one
should use stride = 2 in the last unit of the first block.
Args:
inputs: A tensor of size [batch, height, width, channels].
depth: The depth of the ResNet unit output.
depth_bottleneck: The depth of the bottleneck layers.
stride: The ResNet unit's stride. Determines the amount of downsampling of
the units output compared to its input.
rate: An integer, rate for atrous convolution.
outputs_collections: Collection to add the ResNet unit output.
scope: Optional variable_scope.
use_bounded_activations: Whether or not to use bounded activations. Bounded
activations better lend themselves to quantized inference.
Returns:
The ResNet unit's output.
"""
with tf.variable_scope(scope, 'bottleneck_v1', [inputs]) as sc:
depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
if depth == depth_in:
shortcut = resnet_utils.subsample(inputs, stride, 'shortcut')
else:
shortcut = slim.conv2d(
inputs,
depth, [1, 1],
stride=stride,
activation_fn=tf.nn.relu6 if use_bounded_activations else None,
scope='shortcut')
residual = slim.conv2d(inputs, depth_bottleneck, [1, 1], stride=1,
scope='conv1')
residual = resnet_utils.conv2d_same(residual, depth_bottleneck, 3, stride,
rate=rate, scope='conv2')
residual = slim.conv2d(residual, depth, [1, 1], stride=1,
activation_fn=None, scope='conv3')
if use_bounded_activations:
# Use clip_by_value to simulate bandpass activation.
residual = tf.clip_by_value(residual, -6.0, 6.0)
output = tf.nn.relu6(shortcut + residual)
else:
output = tf.nn.relu(shortcut + residual)
return slim.utils.collect_named_outputs(outputs_collections,
sc.name,
output)
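# Illustrative sketch (added; not part of the original file): how a single
# bottleneck unit behaves. The input size and depths below are assumptions
# chosen only for the example; it relies on the tf/slim imports at the top of
# this module.
def _bottleneck_shape_example():
  """Builds one bottleneck unit on a dummy input (sketch only)."""
  dummy = tf.random.uniform((1, 56, 56, 256))
  with slim.arg_scope(resnet_arg_scope()):
    out = bottleneck(dummy, depth=256, depth_bottleneck=64, stride=1)
  # depth == depth_in and stride == 1, so the shortcut is a plain identity and
  # `out` keeps the input shape [1, 56, 56, 256].
  return out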
def resnet_v1(inputs,
blocks,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
include_root_block=True,
spatial_squeeze=True,
store_non_strided_activations=False,
reuse=None,
scope=None):
"""Generator for v1 ResNet models.
This function generates a family of ResNet v1 models. See the resnet_v1_*()
methods for specific model instantiations, obtained by selecting different
block instantiations that produce ResNets of various depths.
Training for image classification on Imagenet is usually done with [224, 224]
inputs, resulting in [7, 7] feature maps at the output of the last ResNet
block for the ResNets defined in [1] that have nominal stride equal to 32.
However, for dense prediction tasks we advise that one uses inputs with
spatial dimensions that are multiples of 32 plus 1, e.g., [321, 321]. In
this case the feature maps at the ResNet output will have spatial shape
[(height - 1) / output_stride + 1, (width - 1) / output_stride + 1]
and corners exactly aligned with the input image corners, which greatly
facilitates alignment of the features to the image. Using as input [225, 225]
images results in [8, 8] feature maps at the output of the last ResNet block.
For dense prediction tasks, the ResNet needs to run in fully-convolutional
(FCN) mode and global_pool needs to be set to False. The ResNets in [1, 2] all
have nominal stride equal to 32 and a good choice in FCN mode is to use
output_stride=16 in order to increase the density of the computed features at
small computational and memory overhead, cf. http://arxiv.org/abs/1606.00915.
Args:
inputs: A tensor of size [batch, height_in, width_in, channels].
blocks: A list of length equal to the number of ResNet blocks. Each element
is a resnet_utils.Block object describing the units in the block.
num_classes: Number of predicted classes for classification tasks.
If 0 or None, we return the features before the logit layer.
is_training: whether batch_norm layers are in training mode. If this is set
to None, the callers can specify slim.batch_norm's is_training parameter
from an outer slim.arg_scope.
global_pool: If True, we perform global average pooling before computing the
logits. Set to True for image classification, False for dense prediction.
output_stride: If None, then the output will be computed at the nominal
network stride. If output_stride is not None, it specifies the requested
ratio of input to output spatial resolution.
include_root_block: If True, include the initial convolution followed by
max-pooling, if False excludes it.
spatial_squeeze: if True, logits is of shape [B, C], if false logits is
of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
To use this parameter, the input images must be smaller than 300x300
pixels, in which case the output logit layer does not contain spatial
information and can be removed.
store_non_strided_activations: If True, we compute non-strided (undecimated)
activations at the last unit of each block and store them in the
`outputs_collections` before subsampling them. This gives us access to
higher resolution intermediate activations which are useful in some
dense prediction problems but increases 4x the computation and memory cost
at the last unit of each block.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
Returns:
net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
If global_pool is False, then height_out and width_out are reduced by a
factor of output_stride compared to the respective height_in and width_in,
else both height_out and width_out equal one. If num_classes is 0 or None,
then net is the output of the last ResNet block, potentially after global
      average pooling. If num_classes is a non-zero integer, net contains the
pre-softmax activations.
end_points: A dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: If the target output_stride is not valid.
"""
with tf.variable_scope(
scope, 'resnet_v1', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
with slim.arg_scope([slim.conv2d, bottleneck,
resnet_utils.stack_blocks_dense],
outputs_collections=end_points_collection):
with (slim.arg_scope([slim.batch_norm], is_training=is_training)
if is_training is not None else NoOpScope()):
net = inputs
if include_root_block:
if output_stride is not None:
if output_stride % 4 != 0:
raise ValueError('The output_stride needs to be a multiple of 4.')
output_stride /= 4
net = resnet_utils.conv2d_same(net, 64, 7, stride=2, scope='conv1')
net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool1')
net = resnet_utils.stack_blocks_dense(net, blocks, output_stride,
store_non_strided_activations)
# Convert end_points_collection into a dictionary of end_points.
end_points = slim.utils.convert_collection_to_dict(
end_points_collection)
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], name='pool5', keepdims=True)
end_points['global_pool'] = net
if num_classes:
net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='logits')
end_points[sc.name + '/logits'] = net
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='SpatialSqueeze')
end_points[sc.name + '/spatial_squeeze'] = net
end_points['predictions'] = slim.softmax(net, scope='predictions')
return net, end_points
resnet_v1.default_image_size = 224
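# Worked example of the shape arithmetic in the resnet_v1() docstring above
# (added note; the numbers are only an illustration): with [513, 513] inputs,
# global_pool=False and output_stride=16, the docstring's formula gives
#   (513 - 1) / 16 + 1 = 33
# cells along each spatial dimension of the dense feature map, which is why
# "multiple of 32 plus 1" input sizes keep the features aligned with the image
# corners.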
def resnet_v1_block(scope, base_depth, num_units, stride):
"""Helper function for creating a resnet_v1 bottleneck block.
Args:
scope: The scope of the block.
base_depth: The depth of the bottleneck layer for each unit.
num_units: The number of units in the block.
stride: The stride of the block, implemented as a stride in the last unit.
All other units have stride=1.
Returns:
A resnet_v1 bottleneck block.
"""
return resnet_utils.Block(scope, bottleneck, [{
'depth': base_depth * 4,
'depth_bottleneck': base_depth,
'stride': 1
}] * (num_units - 1) + [{
'depth': base_depth * 4,
'depth_bottleneck': base_depth,
'stride': stride
}])
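# Illustrative sketch (added; not part of the original file): what
# resnet_v1_block() produces. The scope name and sizes are arbitrary example
# values.
def _resnet_block_example():
  """Builds a single block description and inspects it (sketch only)."""
  block = resnet_v1_block('block_demo', base_depth=64, num_units=3, stride=2)
  # block.args holds three unit dicts: two with stride 1 followed by one with
  # stride 2, each with depth 256 (= 64 * 4) and depth_bottleneck 64.
  return block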
def resnet_v1_50(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
store_non_strided_activations=False,
min_base_depth=8,
depth_multiplier=1,
reuse=None,
scope='resnet_v1_50'):
"""ResNet-50 model of [1]. See resnet_v1() for arg and return description."""
depth_func = lambda d: max(int(d * depth_multiplier), min_base_depth)
blocks = [
resnet_v1_block('block1', base_depth=depth_func(64), num_units=3,
stride=2),
resnet_v1_block('block2', base_depth=depth_func(128), num_units=4,
stride=2),
resnet_v1_block('block3', base_depth=depth_func(256), num_units=6,
stride=2),
resnet_v1_block('block4', base_depth=depth_func(512), num_units=3,
stride=1),
]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
store_non_strided_activations=store_non_strided_activations,
reuse=reuse, scope=scope)
resnet_v1_50.default_image_size = resnet_v1.default_image_size
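# Illustrative sketch (added; not part of the original file): ResNet-50 as a
# feature extractor, echoing the dense-prediction notes in the resnet_v1()
# docstring. The input size is an assumption chosen only for the example.
def _resnet_v1_50_features_example():
  """Builds ResNet-50 features without the logits layer (sketch only)."""
  images = tf.random.uniform((1, 224, 224, 3))
  with slim.arg_scope(resnet_arg_scope()):
    net, end_points = resnet_v1_50(images, num_classes=None, is_training=False,
                                   global_pool=False)
  # With num_classes=None the logits layer is omitted and, with
  # global_pool=False, `net` keeps its spatial dimensions; the channel depth of
  # the final block is 2048 (= 512 * 4) at the default depth_multiplier.
  return net, end_points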
def resnet_v1_101(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
store_non_strided_activations=False,
min_base_depth=8,
depth_multiplier=1,
reuse=None,
scope='resnet_v1_101'):
"""ResNet-101 model of [1]. See resnet_v1() for arg and return description."""
depth_func = lambda d: max(int(d * depth_multiplier), min_base_depth)
blocks = [
resnet_v1_block('block1', base_depth=depth_func(64), num_units=3,
stride=2),
resnet_v1_block('block2', base_depth=depth_func(128), num_units=4,
stride=2),
resnet_v1_block('block3', base_depth=depth_func(256), num_units=23,
stride=2),
resnet_v1_block('block4', base_depth=depth_func(512), num_units=3,
stride=1),
]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
store_non_strided_activations=store_non_strided_activations,
reuse=reuse, scope=scope)
resnet_v1_101.default_image_size = resnet_v1.default_image_size
def resnet_v1_152(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
store_non_strided_activations=False,
spatial_squeeze=True,
min_base_depth=8,
depth_multiplier=1,
reuse=None,
scope='resnet_v1_152'):
"""ResNet-152 model of [1]. See resnet_v1() for arg and return description."""
depth_func = lambda d: max(int(d * depth_multiplier), min_base_depth)
blocks = [
resnet_v1_block('block1', base_depth=depth_func(64), num_units=3,
stride=2),
resnet_v1_block('block2', base_depth=depth_func(128), num_units=8,
stride=2),
resnet_v1_block('block3', base_depth=depth_func(256), num_units=36,
stride=2),
resnet_v1_block('block4', base_depth=depth_func(512), num_units=3,
stride=1),
]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
store_non_strided_activations=store_non_strided_activations,
reuse=reuse, scope=scope)
resnet_v1_152.default_image_size = resnet_v1.default_image_size
def resnet_v1_200(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
store_non_strided_activations=False,
spatial_squeeze=True,
min_base_depth=8,
depth_multiplier=1,
reuse=None,
scope='resnet_v1_200'):
"""ResNet-200 model of [2]. See resnet_v1() for arg and return description."""
depth_func = lambda d: max(int(d * depth_multiplier), min_base_depth)
blocks = [
resnet_v1_block('block1', base_depth=depth_func(64), num_units=3,
stride=2),
resnet_v1_block('block2', base_depth=depth_func(128), num_units=24,
stride=2),
resnet_v1_block('block3', base_depth=depth_func(256), num_units=36,
stride=2),
resnet_v1_block('block4', base_depth=depth_func(512), num_units=3,
stride=1),
]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
store_non_strided_activations=store_non_strided_activations,
reuse=reuse, scope=scope)
resnet_v1_200.default_image_size = resnet_v1.default_image_size
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/resnet_v1.py | resnet_v1.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains a model definition for AlexNet.
This work was first described in:
ImageNet Classification with Deep Convolutional Neural Networks
Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton
and later refined in:
One weird trick for parallelizing convolutional neural networks
Alex Krizhevsky, 2014
Here we provide the implementation proposed in "One weird trick" and not
"ImageNet Classification", as per the paper, the LRN layers have been removed.
Usage:
with slim.arg_scope(alexnet.alexnet_v2_arg_scope()):
outputs, end_points = alexnet.alexnet_v2(inputs)
@@alexnet_v2
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def alexnet_v2_arg_scope(weight_decay=0.0005):
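  """Defines the default AlexNet v2 arg scope (docstring added for clarity).
  Args:
    weight_decay: The l2 regularization coefficient.
  Returns:
    An arg_scope to use for the AlexNet v2 model.
  """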
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
biases_initializer=tf.constant_initializer(0.1),
weights_regularizer=slim.l2_regularizer(weight_decay)):
with slim.arg_scope([slim.conv2d], padding='SAME'):
with slim.arg_scope([slim.max_pool2d], padding='VALID') as arg_sc:
return arg_sc
def alexnet_v2(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
scope='alexnet_v2',
global_pool=False):
"""AlexNet version 2.
Described in: http://arxiv.org/pdf/1404.5997v2.pdf
Parameters from:
github.com/akrizhevsky/cuda-convnet2/blob/master/layers/
layers-imagenet-1gpu.cfg
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224 or set
global_pool=True. To use in fully convolutional mode, set
spatial_squeeze to false.
        The LRN layers have been removed and the initializers have been changed
        from random_normal_initializer to xavier_initializer.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: the number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not the spatial dimensions of the logits should
      be squeezed. Useful to remove unnecessary dimensions for classification.
scope: Optional scope for the variables.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original AlexNet.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0
or None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'alexnet_v2', [inputs]) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=[end_points_collection]):
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID',
scope='conv1')
net = slim.max_pool2d(net, [3, 3], 2, scope='pool1')
net = slim.conv2d(net, 192, [5, 5], scope='conv2')
net = slim.max_pool2d(net, [3, 3], 2, scope='pool2')
net = slim.conv2d(net, 384, [3, 3], scope='conv3')
net = slim.conv2d(net, 384, [3, 3], scope='conv4')
net = slim.conv2d(net, 256, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [3, 3], 2, scope='pool5')
# Use conv2d instead of fully_connected layers.
with slim.arg_scope(
[slim.conv2d],
weights_initializer=trunc_normal(0.005),
biases_initializer=tf.constant_initializer(0.1)):
net = slim.conv2d(net, 4096, [5, 5], padding='VALID',
scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(
end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(
net,
num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
biases_initializer=tf.zeros_initializer(),
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
alexnet_v2.default_image_size = 224
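# Illustrative sketch (added; not part of the original file): alexnet_v2 in
# fully convolutional mode, as described in the docstring above. The input
# size is an assumption chosen only for the example.
def _alexnet_v2_fcn_example():
  """Builds AlexNet score maps on a larger-than-default image (sketch only)."""
  images = tf.random.uniform((1, 300, 400, 3))
  with slim.arg_scope(alexnet_v2_arg_scope()):
    logits, _ = alexnet_v2(images, num_classes=1000, is_training=False,
                           spatial_squeeze=False)
  # With spatial_squeeze=False the logits keep their spatial dimensions and
  # form a coarse score map (here [1, 4, 7, 1000]) rather than a single
  # [batch, 1000] vector.
  return logits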
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/alexnet.py | alexnet.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""DCGAN generator and discriminator from https://arxiv.org/abs/1511.06434."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from math import log
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow.compat.v1 as tf
import tf_slim as slim
def _validate_image_inputs(inputs):
inputs.get_shape().assert_has_rank(4)
inputs.get_shape()[1:3].assert_is_fully_defined()
if inputs.get_shape()[1] != inputs.get_shape()[2]:
raise ValueError('Input tensor does not have equal width and height: ',
inputs.get_shape()[1:3])
width = inputs.get_shape().as_list()[1]
if log(width, 2) != int(log(width, 2)):
raise ValueError('Input tensor `width` is not a power of 2: ', width)
# TODO(joelshor): Use fused batch norm by default. Investigate why some GAN
# setups need the gradient of gradient FusedBatchNormGrad.
def discriminator(inputs,
depth=64,
is_training=True,
reuse=None,
scope='Discriminator',
fused_batch_norm=False):
"""Discriminator network for DCGAN.
Construct discriminator network from inputs to the final endpoint.
Args:
inputs: A tensor of size [batch_size, height, width, channels]. Must be
floating point.
depth: Number of channels in first convolution layer.
is_training: Whether the network is for training or not.
reuse: Whether or not the network variables should be reused. `scope`
must be given to be reused.
scope: Optional variable_scope.
fused_batch_norm: If `True`, use a faster, fused implementation of
batch norm.
Returns:
logits: The pre-softmax activations, a tensor of size [batch_size, 1]
end_points: a dictionary from components of the network to their activation.
Raises:
ValueError: If the input image shape is not 4-dimensional, if the spatial
dimensions aren't defined at graph construction time, if the spatial
dimensions aren't square, or if the spatial dimensions aren't a power of
two.
"""
normalizer_fn = slim.batch_norm
normalizer_fn_args = {
'is_training': is_training,
'zero_debias_moving_mean': True,
'fused': fused_batch_norm,
}
_validate_image_inputs(inputs)
inp_shape = inputs.get_shape().as_list()[1]
end_points = {}
with tf.variable_scope(
scope, values=[inputs], reuse=reuse) as scope:
with slim.arg_scope([normalizer_fn], **normalizer_fn_args):
with slim.arg_scope([slim.conv2d],
stride=2,
kernel_size=4,
activation_fn=tf.nn.leaky_relu):
net = inputs
for i in xrange(int(log(inp_shape, 2))):
scope = 'conv%i' % (i + 1)
current_depth = depth * 2**i
normalizer_fn_ = None if i == 0 else normalizer_fn
net = slim.conv2d(
net, current_depth, normalizer_fn=normalizer_fn_, scope=scope)
end_points[scope] = net
logits = slim.conv2d(net, 1, kernel_size=1, stride=1, padding='VALID',
normalizer_fn=None, activation_fn=None)
logits = tf.reshape(logits, [-1, 1])
end_points['logits'] = logits
return logits, end_points
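# Illustrative sketch (added; not part of the original file): the discriminator
# applied to a batch of 32x32 images. Sizes are assumptions for the example.
def _dcgan_discriminator_example():
  """Runs the DCGAN discriminator on dummy 32x32 inputs (sketch only)."""
  images = tf.random.uniform((4, 32, 32, 3))
  logits, end_points = discriminator(images, depth=64, is_training=False)
  # log(32, 2) = 5, so five stride-2 convolutions are applied before the final
  # 1x1 convolution; `logits` has shape [4, 1] and `end_points` exposes
  # 'conv1' ... 'conv5' plus 'logits'.
  return logits, end_points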
# TODO(joelshor): Use fused batch norm by default. Investigate why some GAN
# setups need the gradient of gradient FusedBatchNormGrad.
def generator(inputs,
depth=64,
final_size=32,
num_outputs=3,
is_training=True,
reuse=None,
scope='Generator',
fused_batch_norm=False):
"""Generator network for DCGAN.
Construct generator network from inputs to the final endpoint.
Args:
    inputs: A 2-D tensor of shape [batch_size, N], where N is the length of the
      noise vector.
    depth: Number of channels in the last deconvolution layer.
    final_size: The spatial size (height and width) of the final output.
num_outputs: Number of output features. For images, this is the number of
channels.
is_training: whether is training or not.
    reuse: Whether or not the network and its variables should be reused. To be
      reused, `scope` must be given.
scope: Optional variable_scope.
fused_batch_norm: If `True`, use a faster, fused implementation of
batch norm.
Returns:
    logits: the pre-softmax activations, a tensor of size
      [batch_size, final_size, final_size, num_outputs]
end_points: a dictionary from components of the network to their activation.
Raises:
ValueError: If `inputs` is not 2-dimensional.
ValueError: If `final_size` isn't a power of 2 or is less than 8.
"""
normalizer_fn = slim.batch_norm
normalizer_fn_args = {
'is_training': is_training,
'zero_debias_moving_mean': True,
'fused': fused_batch_norm,
}
inputs.get_shape().assert_has_rank(2)
if log(final_size, 2) != int(log(final_size, 2)):
raise ValueError('`final_size` (%i) must be a power of 2.' % final_size)
if final_size < 8:
    raise ValueError('`final_size` (%i) must be at least 8.' % final_size)
end_points = {}
num_layers = int(log(final_size, 2)) - 1
with tf.variable_scope(
scope, values=[inputs], reuse=reuse) as scope:
with slim.arg_scope([normalizer_fn], **normalizer_fn_args):
with slim.arg_scope([slim.conv2d_transpose],
normalizer_fn=normalizer_fn,
stride=2,
kernel_size=4):
net = tf.expand_dims(tf.expand_dims(inputs, 1), 1)
# First upscaling is different because it takes the input vector.
current_depth = depth * 2 ** (num_layers - 1)
scope = 'deconv1'
net = slim.conv2d_transpose(
net, current_depth, stride=1, padding='VALID', scope=scope)
end_points[scope] = net
for i in xrange(2, num_layers):
scope = 'deconv%i' % (i)
current_depth = depth * 2 ** (num_layers - i)
net = slim.conv2d_transpose(net, current_depth, scope=scope)
end_points[scope] = net
# Last layer has different normalizer and activation.
scope = 'deconv%i' % (num_layers)
net = slim.conv2d_transpose(
net, depth, normalizer_fn=None, activation_fn=None, scope=scope)
end_points[scope] = net
# Convert to proper channels.
scope = 'logits'
logits = slim.conv2d(
net,
num_outputs,
normalizer_fn=None,
activation_fn=None,
kernel_size=1,
stride=1,
padding='VALID',
scope=scope)
end_points[scope] = logits
logits.get_shape().assert_has_rank(4)
logits.get_shape().assert_is_compatible_with(
[None, final_size, final_size, num_outputs])
return logits, end_points
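# Illustrative sketch (added; not part of the original file): the generator
# mapping random noise to 32x32 images. The noise size and image size are
# assumptions for the example.
def _dcgan_generator_example():
  """Generates dummy 32x32 images from random noise (sketch only)."""
  noise = tf.random.uniform((4, 100))
  fake_images, _ = generator(noise, depth=64, final_size=32, num_outputs=3)
  # final_size=32 gives int(log(32, 2)) - 1 = 4 transposed convolutions
  # followed by a 1x1 convolution, so `fake_images` has shape [4, 32, 32, 3].
  return fake_images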
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/dcgan.py | dcgan.py |
 | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/__init__.py | __init__.py
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""MobileNet v1.
MobileNet is a general architecture and can be used for multiple use cases.
Depending on the use case, it can use different input layer size and different
head (for example: embeddings, localization and classification).
As described in https://arxiv.org/abs/1704.04861.
MobileNets: Efficient Convolutional Neural Networks for
Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang,
Tobias Weyand, Marco Andreetto, Hartwig Adam
100% Mobilenet V1 (base) with input size 224x224:
See mobilenet_v1()
Layer params macs
--------------------------------------------------------------------------------
MobilenetV1/Conv2d_0/Conv2D: 864 10,838,016
MobilenetV1/Conv2d_1_depthwise/depthwise: 288 3,612,672
MobilenetV1/Conv2d_1_pointwise/Conv2D: 2,048 25,690,112
MobilenetV1/Conv2d_2_depthwise/depthwise: 576 1,806,336
MobilenetV1/Conv2d_2_pointwise/Conv2D: 8,192 25,690,112
MobilenetV1/Conv2d_3_depthwise/depthwise: 1,152 3,612,672
MobilenetV1/Conv2d_3_pointwise/Conv2D: 16,384 51,380,224
MobilenetV1/Conv2d_4_depthwise/depthwise: 1,152 903,168
MobilenetV1/Conv2d_4_pointwise/Conv2D: 32,768 25,690,112
MobilenetV1/Conv2d_5_depthwise/depthwise: 2,304 1,806,336
MobilenetV1/Conv2d_5_pointwise/Conv2D: 65,536 51,380,224
MobilenetV1/Conv2d_6_depthwise/depthwise: 2,304 451,584
MobilenetV1/Conv2d_6_pointwise/Conv2D: 131,072 25,690,112
MobilenetV1/Conv2d_7_depthwise/depthwise: 4,608 903,168
MobilenetV1/Conv2d_7_pointwise/Conv2D: 262,144 51,380,224
MobilenetV1/Conv2d_8_depthwise/depthwise: 4,608 903,168
MobilenetV1/Conv2d_8_pointwise/Conv2D: 262,144 51,380,224
MobilenetV1/Conv2d_9_depthwise/depthwise: 4,608 903,168
MobilenetV1/Conv2d_9_pointwise/Conv2D: 262,144 51,380,224
MobilenetV1/Conv2d_10_depthwise/depthwise: 4,608 903,168
MobilenetV1/Conv2d_10_pointwise/Conv2D: 262,144 51,380,224
MobilenetV1/Conv2d_11_depthwise/depthwise: 4,608 903,168
MobilenetV1/Conv2d_11_pointwise/Conv2D: 262,144 51,380,224
MobilenetV1/Conv2d_12_depthwise/depthwise: 4,608 225,792
MobilenetV1/Conv2d_12_pointwise/Conv2D: 524,288 25,690,112
MobilenetV1/Conv2d_13_depthwise/depthwise: 9,216 451,584
MobilenetV1/Conv2d_13_pointwise/Conv2D: 1,048,576 51,380,224
--------------------------------------------------------------------------------
Total: 3,185,088 567,716,352
75% Mobilenet V1 (base) with input size 128x128:
See mobilenet_v1_075()
Layer params macs
--------------------------------------------------------------------------------
MobilenetV1/Conv2d_0/Conv2D: 648 2,654,208
MobilenetV1/Conv2d_1_depthwise/depthwise: 216 884,736
MobilenetV1/Conv2d_1_pointwise/Conv2D: 1,152 4,718,592
MobilenetV1/Conv2d_2_depthwise/depthwise: 432 442,368
MobilenetV1/Conv2d_2_pointwise/Conv2D: 4,608 4,718,592
MobilenetV1/Conv2d_3_depthwise/depthwise: 864 884,736
MobilenetV1/Conv2d_3_pointwise/Conv2D: 9,216 9,437,184
MobilenetV1/Conv2d_4_depthwise/depthwise: 864 221,184
MobilenetV1/Conv2d_4_pointwise/Conv2D: 18,432 4,718,592
MobilenetV1/Conv2d_5_depthwise/depthwise: 1,728 442,368
MobilenetV1/Conv2d_5_pointwise/Conv2D: 36,864 9,437,184
MobilenetV1/Conv2d_6_depthwise/depthwise: 1,728 110,592
MobilenetV1/Conv2d_6_pointwise/Conv2D: 73,728 4,718,592
MobilenetV1/Conv2d_7_depthwise/depthwise: 3,456 221,184
MobilenetV1/Conv2d_7_pointwise/Conv2D: 147,456 9,437,184
MobilenetV1/Conv2d_8_depthwise/depthwise: 3,456 221,184
MobilenetV1/Conv2d_8_pointwise/Conv2D: 147,456 9,437,184
MobilenetV1/Conv2d_9_depthwise/depthwise: 3,456 221,184
MobilenetV1/Conv2d_9_pointwise/Conv2D: 147,456 9,437,184
MobilenetV1/Conv2d_10_depthwise/depthwise: 3,456 221,184
MobilenetV1/Conv2d_10_pointwise/Conv2D: 147,456 9,437,184
MobilenetV1/Conv2d_11_depthwise/depthwise: 3,456 221,184
MobilenetV1/Conv2d_11_pointwise/Conv2D: 147,456 9,437,184
MobilenetV1/Conv2d_12_depthwise/depthwise: 3,456 55,296
MobilenetV1/Conv2d_12_pointwise/Conv2D: 294,912 4,718,592
MobilenetV1/Conv2d_13_depthwise/depthwise: 6,912 110,592
MobilenetV1/Conv2d_13_pointwise/Conv2D: 589,824 9,437,184
--------------------------------------------------------------------------------
Total: 1,800,144 106,002,432
"""
# Tensorflow mandates these.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import namedtuple
import functools
import tensorflow.compat.v1 as tf
import tf_slim as slim
# Conv and DepthSepConv namedtuple define layers of the MobileNet architecture
# Conv defines 3x3 convolution layers
# DepthSepConv defines 3x3 depthwise convolution followed by 1x1 convolution.
# stride is the stride of the convolution
# depth is the number of channels or filters in a layer
Conv = namedtuple('Conv', ['kernel', 'stride', 'depth'])
DepthSepConv = namedtuple('DepthSepConv', ['kernel', 'stride', 'depth'])
# MOBILENETV1_CONV_DEFS specifies the MobileNet body
MOBILENETV1_CONV_DEFS = [
Conv(kernel=[3, 3], stride=2, depth=32),
DepthSepConv(kernel=[3, 3], stride=1, depth=64),
DepthSepConv(kernel=[3, 3], stride=2, depth=128),
DepthSepConv(kernel=[3, 3], stride=1, depth=128),
DepthSepConv(kernel=[3, 3], stride=2, depth=256),
DepthSepConv(kernel=[3, 3], stride=1, depth=256),
DepthSepConv(kernel=[3, 3], stride=2, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=2, depth=1024),
DepthSepConv(kernel=[3, 3], stride=1, depth=1024)
]
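# Reading the table above (added note; not part of the original file): each
# DepthSepConv entry expands into two layers in mobilenet_v1_base() below, a
# 3x3 depthwise convolution named 'Conv2d_<i>_depthwise' followed by a 1x1
# pointwise convolution named 'Conv2d_<i>_pointwise' with `depth` output
# channels, while a plain Conv entry maps to a single 'Conv2d_<i>' layer.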
def _fixed_padding(inputs, kernel_size, rate=1):
"""Pads the input along the spatial dimensions independently of input size.
Pads the input such that if it was used in a convolution with 'VALID' padding,
the output would have the same dimensions as if the unpadded input was used
in a convolution with 'SAME' padding.
Args:
inputs: A tensor of size [batch, height_in, width_in, channels].
kernel_size: The kernel to be used in the conv2d or max_pool2d operation.
rate: An integer, rate for atrous convolution.
Returns:
output: A tensor of size [batch, height_out, width_out, channels] with the
input, either intact (if kernel_size == 1) or padded (if kernel_size > 1).
"""
kernel_size_effective = [kernel_size[0] + (kernel_size[0] - 1) * (rate - 1),
kernel_size[1] + (kernel_size[1] - 1) * (rate - 1)]
pad_total = [kernel_size_effective[0] - 1, kernel_size_effective[1] - 1]
pad_beg = [pad_total[0] // 2, pad_total[1] // 2]
pad_end = [pad_total[0] - pad_beg[0], pad_total[1] - pad_beg[1]]
padded_inputs = tf.pad(
tensor=inputs,
paddings=[[0, 0], [pad_beg[0], pad_end[0]], [pad_beg[1], pad_end[1]],
[0, 0]])
return padded_inputs
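# Worked example for _fixed_padding (added note; the sizes are assumptions):
# for a [3, 3] kernel with rate=1 the effective kernel size is still [3, 3],
# so pad_total = [2, 2], pad_beg = [1, 1] and pad_end = [1, 1]. A [1, 7, 7, C]
# input therefore becomes [1, 9, 9, C], and a subsequent stride-2 'VALID'
# convolution produces the same [1, 4, 4, C] output that 'SAME' padding would.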
def mobilenet_v1_base(inputs,
final_endpoint='Conv2d_13_pointwise',
min_depth=8,
depth_multiplier=1.0,
conv_defs=None,
output_stride=None,
use_explicit_padding=False,
scope=None):
"""Mobilenet v1.
Constructs a Mobilenet v1 network from inputs to the given final endpoint.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_0', 'Conv2d_1_pointwise', 'Conv2d_2_pointwise',
      'Conv2d_3_pointwise', 'Conv2d_4_pointwise', 'Conv2d_5_pointwise',
'Conv2d_6_pointwise', 'Conv2d_7_pointwise', 'Conv2d_8_pointwise',
'Conv2d_9_pointwise', 'Conv2d_10_pointwise', 'Conv2d_11_pointwise',
'Conv2d_12_pointwise', 'Conv2d_13_pointwise'].
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
conv_defs: A list of ConvDef namedtuples specifying the net architecture.
output_stride: An integer that specifies the requested ratio of input to
output spatial resolution. If not None, then we invoke atrous convolution
if necessary to prevent the network from reducing the spatial resolution
of the activation maps. Allowed values are 8 (accurate fully convolutional
mode), 16 (fast fully convolutional mode), 32 (classification mode).
use_explicit_padding: Use 'VALID' padding for convolutions, but prepad
inputs so that the output dimensions are the same as if 'SAME' padding
were used.
scope: Optional variable_scope.
Returns:
tensor_out: output tensor corresponding to the final_endpoint.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or depth_multiplier <= 0, or the target output_stride is not
allowed.
"""
depth = lambda d: max(int(d * depth_multiplier), min_depth)
end_points = {}
# Used to find thinned depths for each layer.
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
if conv_defs is None:
conv_defs = MOBILENETV1_CONV_DEFS
if output_stride is not None and output_stride not in [8, 16, 32]:
raise ValueError('Only allowed output_stride values are 8, 16, 32.')
padding = 'SAME'
if use_explicit_padding:
padding = 'VALID'
with tf.variable_scope(scope, 'MobilenetV1', [inputs]):
with slim.arg_scope([slim.conv2d, slim.separable_conv2d], padding=padding):
# The current_stride variable keeps track of the output stride of the
# activations, i.e., the running product of convolution strides up to the
# current network layer. This allows us to invoke atrous convolution
# whenever applying the next convolution would result in the activations
# having output stride larger than the target output_stride.
current_stride = 1
# The atrous convolution rate parameter.
rate = 1
net = inputs
for i, conv_def in enumerate(conv_defs):
end_point_base = 'Conv2d_%d' % i
if output_stride is not None and current_stride == output_stride:
# If we have reached the target output_stride, then we need to employ
# atrous convolution with stride=1 and multiply the atrous rate by the
# current unit's stride for use in subsequent layers.
layer_stride = 1
layer_rate = rate
rate *= conv_def.stride
else:
layer_stride = conv_def.stride
layer_rate = 1
current_stride *= conv_def.stride
if isinstance(conv_def, Conv):
end_point = end_point_base
if use_explicit_padding:
net = _fixed_padding(net, conv_def.kernel)
net = slim.conv2d(net, depth(conv_def.depth), conv_def.kernel,
stride=conv_def.stride,
scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
elif isinstance(conv_def, DepthSepConv):
end_point = end_point_base + '_depthwise'
# By passing filters=None
# separable_conv2d produces only a depthwise convolution layer
if use_explicit_padding:
net = _fixed_padding(net, conv_def.kernel, layer_rate)
net = slim.separable_conv2d(net, None, conv_def.kernel,
depth_multiplier=1,
stride=layer_stride,
rate=layer_rate,
scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
end_point = end_point_base + '_pointwise'
net = slim.conv2d(net, depth(conv_def.depth), [1, 1],
stride=1,
scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
else:
          raise ValueError('Unknown convolution type %s for layer %d'
                           % (type(conv_def).__name__, i))
raise ValueError('Unknown final endpoint %s' % final_endpoint)
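# Illustrative sketch (added; not part of the original file): extracting a
# mid-level feature map from the base network. The input size and endpoint are
# assumptions chosen only for the example.
def _mobilenet_v1_base_example():
  """Builds MobileNet features up to Conv2d_11_pointwise (sketch only)."""
  images = tf.random.uniform((1, 224, 224, 3))
  net, end_points = mobilenet_v1_base(
      images, final_endpoint='Conv2d_11_pointwise')
  # Conv2d_11_pointwise sits after four stride-2 layers (total stride 16), so
  # `net` has shape [1, 14, 14, 512] with the default depth_multiplier of 1.0.
  return net, end_points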
def mobilenet_v1(inputs,
num_classes=1000,
dropout_keep_prob=0.999,
is_training=True,
min_depth=8,
depth_multiplier=1.0,
conv_defs=None,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='MobilenetV1',
global_pool=False):
"""Mobilenet v1 model for classification.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
dropout_keep_prob: the percentage of activation values that are retained.
is_training: whether is training or not.
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
conv_defs: A list of ConvDef namedtuples specifying the net architecture.
prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape [B, C], if false logits is
of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
global_pool: Optional boolean flag to control the avgpooling before the
logits layer. If false or unset, pooling is done with a fixed window
that reduces default-sized inputs to 1x1, while larger inputs lead to
larger outputs. If true, any input size is pooled down to 1x1.
Returns:
net: a 2D Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: Input rank is invalid.
"""
input_shape = inputs.get_shape().as_list()
if len(input_shape) != 4:
raise ValueError('Invalid input tensor rank, expected 4, was: %d' %
len(input_shape))
with tf.variable_scope(
scope, 'MobilenetV1', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = mobilenet_v1_base(inputs, scope=scope,
min_depth=min_depth,
depth_multiplier=depth_multiplier,
conv_defs=conv_defs)
with tf.variable_scope('Logits'):
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
else:
# Pooling with a fixed kernel size.
kernel_size = _reduced_kernel_size_for_small_input(net, [7, 7])
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a')
end_points['AvgPool_1a'] = net
if not num_classes:
return net, end_points
# 1 x 1 x 1024
net = slim.dropout(net, keep_prob=dropout_keep_prob, scope='Dropout_1b')
logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='Conv2d_1c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
if prediction_fn:
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
mobilenet_v1.default_image_size = 224
def wrapped_partial(func, *args, **kwargs):
partial_func = functools.partial(func, *args, **kwargs)
functools.update_wrapper(partial_func, func)
return partial_func
mobilenet_v1_075 = wrapped_partial(mobilenet_v1, depth_multiplier=0.75)
mobilenet_v1_050 = wrapped_partial(mobilenet_v1, depth_multiplier=0.50)
mobilenet_v1_025 = wrapped_partial(mobilenet_v1, depth_multiplier=0.25)
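# Editor's note (illustrative): wrapped_partial preserves the wrapped
# function's __name__ and docstring via functools.update_wrapper, so the
# reduced-depth variants above are drop-in replacements for mobilenet_v1.
# For example,
#
#   logits, end_points = mobilenet_v1_050(images, num_classes=1001)
#
# is equivalent to mobilenet_v1(images, num_classes=1001,
# depth_multiplier=0.50); num_classes=1001 is just an example value.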
def _reduced_kernel_size_for_small_input(input_tensor, kernel_size):
"""Define kernel size which is automatically reduced for small input.
  If the shape of the input images is unknown at graph construction time, this
  function assumes that the input images are large enough.
Args:
input_tensor: input tensor of size [batch_size, height, width, channels].
kernel_size: desired kernel size of length 2: [kernel_height, kernel_width]
Returns:
a tensor with the kernel size.
"""
shape = input_tensor.get_shape().as_list()
if shape[1] is None or shape[2] is None:
kernel_size_out = kernel_size
else:
kernel_size_out = [min(shape[1], kernel_size[0]),
min(shape[2], kernel_size[1])]
return kernel_size_out
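# Worked example (editor's illustration): MobilenetV1 has an overall stride of
# 32, so a 160x160 input leaves a 5x5 feature map before pooling and the
# default [7, 7] pooling kernel above is reduced to [5, 5]. If the spatial
# dimensions are unknown at graph construction time, the kernel stays [7, 7].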
def mobilenet_v1_arg_scope(
is_training=True,
weight_decay=0.00004,
stddev=0.09,
regularize_depthwise=False,
batch_norm_decay=0.9997,
batch_norm_epsilon=0.001,
batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS,
normalizer_fn=slim.batch_norm):
"""Defines the default MobilenetV1 arg scope.
Args:
is_training: Whether or not we're training the model. If this is set to
None, the parameter is not added to the batch_norm arg_scope.
weight_decay: The weight decay to use for regularizing the model.
    stddev: The standard deviation of the truncated normal weight initializer.
    regularize_depthwise: Whether or not to apply regularization on the
      depthwise weights.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
batch_norm_updates_collections: Collection for the update ops for
batch norm.
normalizer_fn: Normalization function to apply after convolution.
Returns:
An `arg_scope` to use for the mobilenet v1 model.
"""
batch_norm_params = {
'center': True,
'scale': True,
'decay': batch_norm_decay,
'epsilon': batch_norm_epsilon,
'updates_collections': batch_norm_updates_collections,
}
if is_training is not None:
batch_norm_params['is_training'] = is_training
# Set weight_decay for weights in Conv and DepthSepConv layers.
weights_init = tf.truncated_normal_initializer(stddev=stddev)
regularizer = slim.l2_regularizer(weight_decay)
if regularize_depthwise:
depthwise_regularizer = regularizer
else:
depthwise_regularizer = None
with slim.arg_scope([slim.conv2d, slim.separable_conv2d],
weights_initializer=weights_init,
activation_fn=tf.nn.relu6, normalizer_fn=normalizer_fn):
with slim.arg_scope([slim.batch_norm], **batch_norm_params):
with slim.arg_scope([slim.conv2d], weights_regularizer=regularizer):
with slim.arg_scope([slim.separable_conv2d],
weights_regularizer=depthwise_regularizer) as sc:
return sc
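# Usage sketch (editor's illustrative example, not part of the original
# module): the arg_scope above provides the batch-norm, initializer and
# regularizer defaults that mobilenet_v1 expects. The input size and
# num_classes below are example values only.
#
#   images = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   with slim.arg_scope(mobilenet_v1_arg_scope(is_training=False)):
#     logits, end_points = mobilenet_v1(images, num_classes=1001,
#                                       is_training=False)
#   probabilities = end_points['Predictions']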
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/mobilenet_v1.py | mobilenet_v1.py |
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tensorflow.contrib.slim.nets.cyclegan."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
from nets import cyclegan
# TODO(joelshor): Add a test to check generator endpoints.
class CycleganTest(tf.test.TestCase):
def test_generator_inference(self):
"""Check one inference step."""
img_batch = tf.zeros([2, 32, 32, 3])
model_output, _ = cyclegan.cyclegan_generator_resnet(img_batch)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
sess.run(model_output)
def _test_generator_graph_helper(self, shape):
"""Check that generator can take small and non-square inputs."""
output_imgs, _ = cyclegan.cyclegan_generator_resnet(tf.ones(shape))
self.assertAllEqual(shape, output_imgs.shape.as_list())
def test_generator_graph_small(self):
self._test_generator_graph_helper([4, 32, 32, 3])
def test_generator_graph_medium(self):
self._test_generator_graph_helper([3, 128, 128, 3])
def test_generator_graph_nonsquare(self):
self._test_generator_graph_helper([2, 80, 400, 3])
def test_generator_unknown_batch_dim(self):
"""Check that generator can take unknown batch dimension inputs."""
img = tf.placeholder(tf.float32, shape=[None, 32, None, 3])
output_imgs, _ = cyclegan.cyclegan_generator_resnet(img)
self.assertAllEqual([None, 32, None, 3], output_imgs.shape.as_list())
def _input_and_output_same_shape_helper(self, kernel_size):
img_batch = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
output_img_batch, _ = cyclegan.cyclegan_generator_resnet(
img_batch, kernel_size=kernel_size)
self.assertAllEqual(img_batch.shape.as_list(),
output_img_batch.shape.as_list())
  def test_input_and_output_same_shape_kernel3(self):
    self._input_and_output_same_shape_helper(3)
  def test_input_and_output_same_shape_kernel4(self):
    self._input_and_output_same_shape_helper(4)
  def test_input_and_output_same_shape_kernel5(self):
    self._input_and_output_same_shape_helper(5)
  def test_input_and_output_same_shape_kernel6(self):
    self._input_and_output_same_shape_helper(6)
def _error_if_height_not_multiple_of_four_helper(self, height):
self.assertRaisesRegexp(
ValueError, 'The input height must be a multiple of 4.',
cyclegan.cyclegan_generator_resnet,
tf.placeholder(tf.float32, shape=[None, height, 32, 3]))
def test_error_if_height_not_multiple_of_four_height29(self):
self._error_if_height_not_multiple_of_four_helper(29)
def test_error_if_height_not_multiple_of_four_height30(self):
self._error_if_height_not_multiple_of_four_helper(30)
def test_error_if_height_not_multiple_of_four_height31(self):
self._error_if_height_not_multiple_of_four_helper(31)
def _error_if_width_not_multiple_of_four_helper(self, width):
self.assertRaisesRegexp(
ValueError, 'The input width must be a multiple of 4.',
cyclegan.cyclegan_generator_resnet,
tf.placeholder(tf.float32, shape=[None, 32, width, 3]))
def test_error_if_width_not_multiple_of_four_width29(self):
self._error_if_width_not_multiple_of_four_helper(29)
def test_error_if_width_not_multiple_of_four_width30(self):
self._error_if_width_not_multiple_of_four_helper(30)
def test_error_if_width_not_multiple_of_four_width31(self):
self._error_if_width_not_multiple_of_four_helper(31)
if __name__ == '__main__':
tf.test.main()
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/cyclegan_test.py | cyclegan_test.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains definitions for the preactivation form of Residual Networks.
Residual networks (ResNets) were originally proposed in:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385
The full preactivation 'v2' ResNet variant implemented in this module was
introduced by:
[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Identity Mappings in Deep Residual Networks. arXiv: 1603.05027
The key difference of the full preactivation 'v2' variant compared to the
'v1' variant in [1] is the use of batch normalization before every weight layer.
Typical use:
from tf_slim.nets import resnet_v2
ResNet-101 for image classification into 1000 classes:
# inputs has shape [batch, 224, 224, 3]
with slim.arg_scope(resnet_v2.resnet_arg_scope()):
net, end_points = resnet_v2.resnet_v2_101(inputs, 1000, is_training=False)
ResNet-101 for semantic segmentation into 21 classes:
# inputs has shape [batch, 513, 513, 3]
with slim.arg_scope(resnet_v2.resnet_arg_scope()):
net, end_points = resnet_v2.resnet_v2_101(inputs,
21,
is_training=False,
global_pool=False,
output_stride=16)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import resnet_utils
resnet_arg_scope = resnet_utils.resnet_arg_scope
@slim.add_arg_scope
def bottleneck(inputs, depth, depth_bottleneck, stride, rate=1,
outputs_collections=None, scope=None):
"""Bottleneck residual unit variant with BN before convolutions.
This is the full preactivation residual unit variant proposed in [2]. See
  Fig. 1(b) of [2] for its definition. Note that we use the bottleneck variant
  here, which has an extra bottleneck layer.
When putting together two consecutive ResNet blocks that use this unit, one
should use stride = 2 in the last unit of the first block.
Args:
inputs: A tensor of size [batch, height, width, channels].
depth: The depth of the ResNet unit output.
depth_bottleneck: The depth of the bottleneck layers.
stride: The ResNet unit's stride. Determines the amount of downsampling of
the units output compared to its input.
rate: An integer, rate for atrous convolution.
outputs_collections: Collection to add the ResNet unit output.
scope: Optional variable_scope.
Returns:
The ResNet unit's output.
"""
with tf.variable_scope(scope, 'bottleneck_v2', [inputs]) as sc:
depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
preact = slim.batch_norm(inputs, activation_fn=tf.nn.relu, scope='preact')
if depth == depth_in:
shortcut = resnet_utils.subsample(inputs, stride, 'shortcut')
else:
shortcut = slim.conv2d(preact, depth, [1, 1], stride=stride,
normalizer_fn=None, activation_fn=None,
scope='shortcut')
residual = slim.conv2d(preact, depth_bottleneck, [1, 1], stride=1,
scope='conv1')
residual = resnet_utils.conv2d_same(residual, depth_bottleneck, 3, stride,
rate=rate, scope='conv2')
residual = slim.conv2d(residual, depth, [1, 1], stride=1,
normalizer_fn=None, activation_fn=None,
scope='conv3')
output = shortcut + residual
return slim.utils.collect_named_outputs(outputs_collections,
sc.name,
output)
def resnet_v2(inputs,
blocks,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
include_root_block=True,
spatial_squeeze=True,
reuse=None,
scope=None):
"""Generator for v2 (preactivation) ResNet models.
This function generates a family of ResNet v2 models. See the resnet_v2_*()
methods for specific model instantiations, obtained by selecting different
block instantiations that produce ResNets of various depths.
Training for image classification on Imagenet is usually done with [224, 224]
inputs, resulting in [7, 7] feature maps at the output of the last ResNet
block for the ResNets defined in [1] that have nominal stride equal to 32.
However, for dense prediction tasks we advise that one uses inputs with
spatial dimensions that are multiples of 32 plus 1, e.g., [321, 321]. In
this case the feature maps at the ResNet output will have spatial shape
[(height - 1) / output_stride + 1, (width - 1) / output_stride + 1]
and corners exactly aligned with the input image corners, which greatly
facilitates alignment of the features to the image. Using as input [225, 225]
images results in [8, 8] feature maps at the output of the last ResNet block.
For dense prediction tasks, the ResNet needs to run in fully-convolutional
(FCN) mode and global_pool needs to be set to False. The ResNets in [1, 2] all
have nominal stride equal to 32 and a good choice in FCN mode is to use
output_stride=16 in order to increase the density of the computed features at
small computational and memory overhead, cf. http://arxiv.org/abs/1606.00915.
Args:
inputs: A tensor of size [batch, height_in, width_in, channels].
blocks: A list of length equal to the number of ResNet blocks. Each element
is a resnet_utils.Block object describing the units in the block.
num_classes: Number of predicted classes for classification tasks.
If 0 or None, we return the features before the logit layer.
is_training: whether batch_norm layers are in training mode.
global_pool: If True, we perform global average pooling before computing the
logits. Set to True for image classification, False for dense prediction.
output_stride: If None, then the output will be computed at the nominal
network stride. If output_stride is not None, it specifies the requested
ratio of input to output spatial resolution.
include_root_block: If True, include the initial convolution followed by
max-pooling, if False excludes it. If excluded, `inputs` should be the
results of an activation-less convolution.
spatial_squeeze: if True, logits is of shape [B, C], if false logits is
of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
To use this parameter, the input images must be smaller than 300x300
pixels, in which case the output logit layer does not contain spatial
information and can be removed.
    reuse: whether or not the network and its variables should be reused. To be
      able to reuse, 'scope' must be given.
scope: Optional variable_scope.
Returns:
net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
If global_pool is False, then height_out and width_out are reduced by a
factor of output_stride compared to the respective height_in and width_in,
else both height_out and width_out equal one. If num_classes is 0 or None,
then net is the output of the last ResNet block, potentially after global
average pooling. If num_classes is a non-zero integer, net contains the
pre-softmax activations.
end_points: A dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: If the target output_stride is not valid.
"""
with tf.variable_scope(
scope, 'resnet_v2', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
with slim.arg_scope([slim.conv2d, bottleneck,
resnet_utils.stack_blocks_dense],
outputs_collections=end_points_collection):
with slim.arg_scope([slim.batch_norm], is_training=is_training):
net = inputs
if include_root_block:
if output_stride is not None:
if output_stride % 4 != 0:
raise ValueError('The output_stride needs to be a multiple of 4.')
output_stride /= 4
# We do not include batch normalization or activation functions in
# conv1 because the first ResNet unit will perform these. Cf.
# Appendix of [2].
with slim.arg_scope([slim.conv2d],
activation_fn=None, normalizer_fn=None):
net = resnet_utils.conv2d_same(net, 64, 7, stride=2, scope='conv1')
net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool1')
net = resnet_utils.stack_blocks_dense(net, blocks, output_stride)
# This is needed because the pre-activation variant does not have batch
# normalization or activation functions in the residual unit output. See
# Appendix of [2].
net = slim.batch_norm(net, activation_fn=tf.nn.relu, scope='postnorm')
# Convert end_points_collection into a dictionary of end_points.
end_points = slim.utils.convert_collection_to_dict(
end_points_collection)
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], name='pool5', keepdims=True)
end_points['global_pool'] = net
if num_classes:
net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='logits')
end_points[sc.name + '/logits'] = net
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='SpatialSqueeze')
end_points[sc.name + '/spatial_squeeze'] = net
end_points['predictions'] = slim.softmax(net, scope='predictions')
return net, end_points
resnet_v2.default_image_size = 224
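# Worked example (editor's illustration of the size formula in the docstring
# above): a [321, 321] input with output_stride=16 gives pre-pooling feature
# maps of spatial shape [(321 - 1) / 16 + 1, (321 - 1) / 16 + 1] = [21, 21],
# while a [225, 225] input at the nominal stride of 32 gives [8, 8] maps.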
def resnet_v2_block(scope, base_depth, num_units, stride):
"""Helper function for creating a resnet_v2 bottleneck block.
Args:
scope: The scope of the block.
base_depth: The depth of the bottleneck layer for each unit.
num_units: The number of units in the block.
stride: The stride of the block, implemented as a stride in the last unit.
All other units have stride=1.
Returns:
A resnet_v2 bottleneck block.
"""
return resnet_utils.Block(scope, bottleneck, [{
'depth': base_depth * 4,
'depth_bottleneck': base_depth,
'stride': 1
}] * (num_units - 1) + [{
'depth': base_depth * 4,
'depth_bottleneck': base_depth,
'stride': stride
}])
resnet_v2.default_image_size = 224
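# Editor's illustration: resnet_v2_block('block1', base_depth=64, num_units=3,
# stride=2) expands to three bottleneck units with depth 256 and
# depth_bottleneck 64; the first two units use stride 1 and the last uses
# stride 2, which is where the block performs its downsampling.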
def resnet_v2_50(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_50'):
"""ResNet-50 model of [1]. See resnet_v2() for arg and return description."""
blocks = [
resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v2_block('block2', base_depth=128, num_units=4, stride=2),
resnet_v2_block('block3', base_depth=256, num_units=6, stride=2),
resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
return resnet_v2(inputs, blocks, num_classes, is_training=is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
reuse=reuse, scope=scope)
resnet_v2_50.default_image_size = resnet_v2.default_image_size
def resnet_v2_101(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_101'):
"""ResNet-101 model of [1]. See resnet_v2() for arg and return description."""
blocks = [
resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v2_block('block2', base_depth=128, num_units=4, stride=2),
resnet_v2_block('block3', base_depth=256, num_units=23, stride=2),
resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
return resnet_v2(inputs, blocks, num_classes, is_training=is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
reuse=reuse, scope=scope)
resnet_v2_101.default_image_size = resnet_v2.default_image_size
def resnet_v2_152(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_152'):
"""ResNet-152 model of [1]. See resnet_v2() for arg and return description."""
blocks = [
resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v2_block('block2', base_depth=128, num_units=8, stride=2),
resnet_v2_block('block3', base_depth=256, num_units=36, stride=2),
resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
return resnet_v2(inputs, blocks, num_classes, is_training=is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
reuse=reuse, scope=scope)
resnet_v2_152.default_image_size = resnet_v2.default_image_size
def resnet_v2_200(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_200'):
"""ResNet-200 model of [2]. See resnet_v2() for arg and return description."""
blocks = [
resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v2_block('block2', base_depth=128, num_units=24, stride=2),
resnet_v2_block('block3', base_depth=256, num_units=36, stride=2),
resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
return resnet_v2(inputs, blocks, num_classes, is_training=is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
reuse=reuse, scope=scope)
resnet_v2_200.default_image_size = resnet_v2.default_image_size
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/resnet_v2.py | resnet_v2.py |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains model definitions for versions of the Oxford VGG network.
These model definitions were introduced in the following technical report:
Very Deep Convolutional Networks For Large-Scale Image Recognition
Karen Simonyan and Andrew Zisserman
arXiv technical report, 2015
PDF: http://arxiv.org/pdf/1409.1556.pdf
ILSVRC 2014 Slides: http://www.robots.ox.ac.uk/~karen/pdf/ILSVRC_2014.pdf
CC-BY-4.0
More information can be obtained from the VGG website:
www.robots.ox.ac.uk/~vgg/research/very_deep/
Usage:
with slim.arg_scope(vgg.vgg_arg_scope()):
outputs, end_points = vgg.vgg_a(inputs)
with slim.arg_scope(vgg.vgg_arg_scope()):
outputs, end_points = vgg.vgg_16(inputs)
@@vgg_a
@@vgg_16
@@vgg_19
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
def vgg_arg_scope(weight_decay=0.0005):
"""Defines the VGG arg scope.
Args:
weight_decay: The l2 regularization coefficient.
Returns:
An arg_scope.
"""
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(weight_decay),
biases_initializer=tf.zeros_initializer()):
with slim.arg_scope([slim.conv2d], padding='SAME') as arg_sc:
return arg_sc
def vgg_a(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
reuse=None,
scope='vgg_a',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 11-Layers version A Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not the spatial dimensions of the outputs
      should be squeezed. Useful to remove unnecessary dimensions for
      classification.
    reuse: whether or not the network and its variables should be reused. To be
      able to reuse, 'scope' must be given.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the input to the logits layer (if num_classes is 0 or None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'vgg_a', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 1, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 1, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 2, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 2, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 2, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
      # Convert end_points_collection into an end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_a.default_image_size = 224
def vgg_16(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
reuse=None,
scope='vgg_16',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 16-Layers version D Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not the spatial dimensions of the outputs
      should be squeezed. Useful to remove unnecessary dimensions for
      classification.
    reuse: whether or not the network and its variables should be reused. To be
      able to reuse, 'scope' must be given.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the input to the logits layer (if num_classes is 0 or None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(
scope, 'vgg_16', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
      # Convert end_points_collection into an end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_16.default_image_size = 224
def vgg_19(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
reuse=None,
scope='vgg_19',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 19-Layers version E Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not the spatial dimensions of the outputs
      should be squeezed. Useful to remove unnecessary dimensions for
      classification.
    reuse: whether or not the network and its variables should be reused. To be
      able to reuse, 'scope' must be given.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0 or
None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(
scope, 'vgg_19', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 4, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 4, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 4, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
      # Convert end_points_collection into an end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_19.default_image_size = 224
# Alias
vgg_d = vgg_16
vgg_e = vgg_19
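# Usage sketch (editor's illustrative example, not part of the original
# module): besides the logits, the returned end_points dict exposes every
# conv activation under its scope name, which is handy for feature extraction.
# The exact key below follows the scoping used in this file
# ('<scope>/<repeat block>/<unit>') but should be verified against
# end_points.keys() in your own graph.
#
#   images = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   with slim.arg_scope(vgg_arg_scope()):
#     _, end_points = vgg_16(images, num_classes=0, global_pool=True)
#   conv5_features = end_points['vgg_16/conv5/conv5_3']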
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/vgg.py | vgg.py |