115 Wangpan
===========
|Build| |PyPI version|
115 Wangpan (115网盘 or 115云) is an unofficial Python API and SDK for 115.com. Supported Python versions are 2.6, 2.7, 3.3, and 3.4.
* Documentation: http://115wangpan.readthedocs.org
* GitHub: https://github.com/shichao-an/115wangpan
* PyPI: https://pypi.python.org/pypi/115wangpan/
Features
--------
* Authentication
* Persistent session
* Tasks management: BitTorrent and links
* Files management: uploading, downloading, searching, and editing
Installation
------------
`libcurl <http://curl.haxx.se/libcurl/>`_ is required. Install the system dependencies before installing the Python package:
Ubuntu:
.. code-block:: bash
$ sudo apt-get install build-essential libcurl4-openssl-dev python-dev
Fedora:
.. code-block:: bash
$ sudo yum groupinstall "Development Tools"
$ sudo yum install libcurl libcurl-devel python-devel
Then, you can install with pip:
.. code-block:: bash
$ pip install 115wangpan
Or, if you want to install the latest from GitHub:
.. code-block:: bash
$ pip install git+https://github.com/shichao-an/115wangpan
Usage
-----
.. code-block:: python
>>> import u115
>>> api = u115.API()
>>> api.login('username@example.com', 'password')
True
>>> tasks = api.get_tasks()
>>> task = tasks[0]
>>> print task.name
咲-Saki- 阿知賀編 episode of side-A
>>> print task.status_human
TRANSFERRED
>>> print task.size_human
1.6 GiB
>>> files = task.list()
>>> files
[<File: 第8局 修行.mkv>]
>>> f = files[0]
>>> f.url
u'http://cdnuni.115.com/some-very-long-url.mkv'
>>> f.directory
<Directory: 咲-Saki- 阿知賀編 episode of side-A>
>>> f.directory.parent
<Directory: 离线下载>
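The session can also be made persistent so that cookies are stored on disk and reused across runs. This is a minimal sketch based on the constructor signature (``persistent=True`` loads and saves cookies at ``~/.115cookies`` by default):
.. code-block:: python
>>> import u115
>>> api = u115.API(persistent=True)
>>> api.login('username@example.com', 'password')
True
>>> api.save_cookies()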
CLI commands
------------
* ``115 down``: for downloading files (see the sketch below)
* ``115 up``: for creating tasks from torrents and links
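The exact flags vary between releases, so treat the following as a sketch and consult ``115 down --help`` and ``115 up --help`` for the real options (the torrent path argument here is hypothetical):
.. code-block:: bash
$ 115 down
$ 115 up example.torrent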
.. |Build| image:: https://api.travis-ci.org/shichao-an/115wangpan.png?branch=master
:target: http://travis-ci.org/shichao-an/115wangpan
.. |PyPI version| image:: https://img.shields.io/pypi/v/115wangpan.png
:target: https://pypi.python.org/pypi/115wangpan/
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/README.rst | README.rst |
Changelog
=========
0.7.6 (2015-08-01)
------------------
- Fixed DRY_RUN message printing by using print_msg, which handles PY2 and PY3 strings
- Added -F/--files-only option to 115 down
- Fixed files_only parse error
- Fixed unexpected kwargs for get_tasks
- Fixed Task to handle the newly added 'url' attr
0.7.5 (2015-07-02)
------------------
- Added environment variables as a workaround for issue #27
- Fixed Task.is_directory to include 'BEING TRANSFERRED' exception
0.7.4 (2015-06-20)
------------------
- Fixed getting download URL error due to another API change (#23)
0.7.3 (2015-06-16)
------------------
- Fixed previous broken release, which did not contain the CLI command 115
0.7.2 (2015-06-16)
------------------
- Fixed getting download URL error due to API change (#23)
0.7.1 (2015-06-15)
------------------
- Fixed argparse's required subparser behavior in Python 2.7 (http://bugs.python.org/issue9253)
0.7.0 (2015-06-14)
------------------
- Added public methods: move, edit, mkdir (#13, #19)
- Added Pro API support for getting download URL (#21)
- Added ``receiver_directory``
- Added logging utility and debugging hooks (#22)
- Combined 115down and 115up into a single 115 command
- Supported Python 3.4 by removing ``__del__``
0.6.0 (2015-05-17)
------------------
- Deprecated ``auto_logout`` argument
- Added cookies support to CLI commands
0.5.1 (2015-04-20)
------------------
- 115down: fixed sub-entry range parser to produce an ordered list
0.5.0 (2015-04-12)
------------------
- 115down: supported both keeping directory structure and flattening
- Fixed ``Task`` to not inherit ``Directory``
0.4.2 (2015-04-03)
------------------
- Fixed broken upload due to source page change (``_parse_src_js_var``)
0.4.1 (2015-04-03)
------------------
- 115down: added range support for argument ``sub_num`` (#14)
- 115down: added size display for file and task entries
0.4.0 (2015-03-23)
------------------
- Added persistent session (cookies) feature
- Added search API
- Added CLI commands: 115down and 115up
- Fixed #10
0.3.1 (2015-02-03)
------------------
- Fixed broken release 0.3.0 due to a missing dependency
0.3.0 (2015-02-03)
------------------
- Used external package "homura" to replace downloader utility
- Merged #8: added add_task_url API
0.2.4 (2014-10-09)
------------------
- Fixed #5: add isatty() so progress refreshes less frequently on non-tty
- Fixed parse_src_js_var
0.2.3 (2014-09-23)
------------------
- Fixed #2: ``show_progress`` argument
- Added resume download feature
0.2.2 (2014-09-20)
------------------
- Added system dependencies to documentation
0.2.1 (2014-09-20)
------------------
- Fixed ``Task.status_human`` error
0.2.0 (2014-09-20)
------------------
- Added download feature to the API and ``download`` method to ``u115.File``
- Added elaborate exceptions
- Added ``auto_logout`` optional argument to ``u115.API.__init__``
- Updated Task status info
0.1.1 (2014-09-11)
------------------
- Fixed broken sdist release of v0.1.0.
0.1.0 (2014-09-11)
------------------
- Initial release.
| 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/CHANGELOG.rst | CHANGELOG.rst |
from __future__ import print_function, absolute_import
import humanize
import inspect
import json
import logging
import os
import re
import requests
import time
from hashlib import sha1
from bs4 import BeautifulSoup
from requests.cookies import RequestsCookieJar
from u115 import conf
from u115.utils import (get_timestamp, get_utcdatetime, string_to_datetime,
eval_path, quote, unquote, utf8_encode, txt_type, PY3)
from homura import download
if PY3:
from http import cookiejar as cookielib
else:
import cookielib
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36'
LOGIN_URL = 'http://passport.115.com/?ct=login&ac=ajax&is_ssl=1'
LOGOUT_URL = 'http://passport.115.com/?ac=logout'
CHECKPOINT_URL = 'http://passport.115.com/?ct=ajax&ac=ajax_check_point'
class RequestsLWPCookieJar(cookielib.LWPCookieJar, RequestsCookieJar):
""":class:`requests.cookies.RequestsCookieJar` compatible
:class:`cookielib.LWPCookieJar`"""
pass
class RequestsMozillaCookieJar(cookielib.MozillaCookieJar, RequestsCookieJar):
""":class:`requests.cookies.RequestsCookieJar` compatible
:class:`cookielib.MozillaCookieJar`"""
pass
class RequestHandler(object):
"""
Request handler that maintains session
:ivar session: underlying :class:`requests.Session` instance
"""
def __init__(self):
self.session = requests.Session()
self.session.headers['User-Agent'] = USER_AGENT
def get(self, url, params=None):
"""
Initiate a GET request
"""
r = self.session.get(url, params=params)
return self._response_parser(r, expect_json=False)
def post(self, url, data, params=None):
"""
Initiate a POST request
"""
r = self.session.post(url, data=data, params=params)
return self._response_parser(r, expect_json=False)
def send(self, request, expect_json=True, ignore_content=False):
"""
Send a formatted API request
:param request: a formatted request object
:type request: :class:`.Request`
:param bool expect_json: if True, raise :class:`.InvalidAPIAccess` if
response is not in JSON format
:param bool ignore_content: whether to ignore setting content of the
Response object
"""
r = self.session.request(method=request.method,
url=request.url,
params=request.params,
data=request.data,
files=request.files,
headers=request.headers)
return self._response_parser(r, expect_json, ignore_content)
def _response_parser(self, r, expect_json=True, ignore_content=False):
"""
:param :class:`requests.Response` r: a response object of the Requests
library
:param bool expect_json: if True, raise :class:`.InvalidAPIAccess` if
response is not in JSON format
:param bool ignore_content: whether to ignore setting content of the
Response object
"""
if r.ok:
try:
j = r.json()
return Response(j.get('state'), j)
except ValueError:
# No JSON-encoded data returned
if expect_json:
logger = logging.getLogger(conf.LOGGING_API_LOGGER)
logger.debug(r.text)
raise InvalidAPIAccess('Invalid API access.')
# Raw response
if ignore_content:
res = Response(True, None)
else:
res = Response(True, r.text)
return res
else:
r.raise_for_status()
class Request(object):
"""Formatted API request class"""
def __init__(self, url, method='GET', params=None, data=None,
files=None, headers=None):
"""
Create a Request object
:param str url: URL
:param str method: request method
:param dict params: request parameters
:param dict data: form data
:param dict files: multipart form data
:param dict headers: custom request headers
"""
self.url = url
self.method = method
self.params = params
self.data = data
self.files = files
self.headers = headers
self._debug()
def _debug(self):
logger = logging.getLogger(conf.LOGGING_API_LOGGER)
level = logger.getEffectiveLevel()
if level == logging.DEBUG:
func = inspect.stack()[2][3]
msg = conf.DEBUG_REQ_FMT % (func, self.url, self.method,
self.params, self.data)
logger.debug(msg)
class Response(object):
"""
Formatted API response class
:ivar bool state: whether API access is successful
:ivar dict content: result content
"""
def __init__(self, state, content):
self.state = state
self.content = content
self._debug()
def _debug(self):
logger = logging.getLogger(conf.LOGGING_API_LOGGER)
level = logger.getEffectiveLevel()
if level == logging.DEBUG:
func = inspect.stack()[4][3]
msg = conf.DEBUG_RES_FMT % (func, self.state, self.content)
logger.debug(msg)
class API(object):
"""
Request and response interface
:ivar passport: :class:`.Passport` object associated with this interface
:ivar http: :class:`.RequestHandler` object associated with this
interface
:cvar int num_tasks_per_page: default number of tasks per page/request
:cvar str web_api_url: files API url
:cvar str aps_natsort_url: natural sort files API url
:cvar str proapi_url: pro API url for downloads
"""
num_tasks_per_page = 30
web_api_url = 'http://web.api.115.com/files'
aps_natsort_url = 'http://aps.115.com/natsort/files.php'
proapi_url = 'http://proapi.115.com/app/chrome/down'
referer_url = 'http://115.com'
def __init__(self, persistent=False,
cookies_filename=None, cookies_type='LWPCookieJar'):
"""
:param bool auto_logout: whether to logout automatically when
:class:`.API` object is destroyed
.. deprecated:: 0.6.0
Call :meth:`.API.logout` explicitly
:param bool persistent: whether to use persistent session that stores
cookies on disk
:param str cookies_filename: path to the cookies file, use default
path (`~/.115cookies`) if None
:param str cookies_type: a string representing
:class:`cookielib.FileCookieJar` subclass,
`LWPCookieJar` (default) or `MozillaCookieJar`
"""
self.persistent = persistent
self.cookies_filename = cookies_filename
self.cookies_type = cookies_type
self.passport = None
self.http = RequestHandler()
self.logger = logging.getLogger(conf.LOGGING_API_LOGGER)
# Cache attributes to decrease API hits
self._user_id = None
self._username = None
self._signatures = {}
self._upload_url = None
self._lixian_timestamp = None
self._root_directory = None
self._downloads_directory = None
self._receiver_directory = None
self._torrents_directory = None
self._task_count = None
self._task_quota = None
if self.persistent:
self.load_cookies()
def _reset_cache(self):
self._user_id = None
self._username = None
self._signatures = {}
self._upload_url = None
self._lixian_timestamp = None
self._root_directory = None
self._downloads_directory = None
self._receiver_directory = None
self._torrents_directory = None
self._task_count = None
self._task_quota = None
def _init_cookies(self):
# RequestsLWPCookieJar or RequestsMozillaCookieJar
cookies_class = globals()['Requests' + self.cookies_type]
f = self.cookies_filename or conf.COOKIES_FILENAME
self.cookies = cookies_class(f)
def load_cookies(self, ignore_discard=True, ignore_expires=True):
"""Load cookies from the file :attr:`.API.cookies_filename`"""
self._init_cookies()
if os.path.exists(self.cookies.filename):
self.cookies.load(ignore_discard=ignore_discard,
ignore_expires=ignore_expires)
self._reset_cache()
def save_cookies(self, ignore_discard=True, ignore_expires=True):
"""Save cookies to the file :attr:`.API.cookies_filename`"""
if not isinstance(self.cookies, cookielib.FileCookieJar):
m = 'Cookies must be a cookielib.FileCookieJar object to be saved.'
raise APIError(m)
self.cookies.save(ignore_discard=ignore_discard,
ignore_expires=ignore_expires)
@property
def cookies(self):
"""
Cookies of the current API session (cookies getter shortcut)
"""
return self.http.session.cookies
@cookies.setter
def cookies(self, cookies):
"""
Cookies of the current API session (cookies setter shortcut)
"""
self.http.session.cookies = cookies
def login(self, username=None, password=None,
section='default'):
"""
Create the passport with ``username`` and ``password`` and log in.
If either ``username`` or ``password`` is None or omitted, the
credentials file will be parsed.
:param str username: username to login (email, phone number or user ID)
:param str password: password
:param str section: section name in the credential file
:raise: raises :class:`.AuthenticationError` if failed to login
"""
if self.has_logged_in:
return True
if username is None or password is None:
credential = conf.get_credential(section)
username = credential['username']
password = credential['password']
passport = Passport(username, password)
r = self.http.post(LOGIN_URL, passport.form)
if r.state is True:
# Bind this passport to API
self.passport = passport
passport.data = r.content['data']
self._user_id = r.content['data']['USER_ID']
return True
else:
msg = None
if 'err_name' in r.content:
if r.content['err_name'] == 'account':
msg = 'Account does not exist.'
elif r.content['err_name'] == 'passwd':
msg = 'Password is incorrect.'
raise AuthenticationError(msg)
def get_user_info(self):
"""
Get user info
:return: a dictionary of user information
:rtype: dict
"""
return self._req_get_user_aq()
@property
def user_id(self):
"""
User id of the current API user
"""
if self._user_id is None:
if self.has_logged_in:
self._user_id = self._req_get_user_aq()['data']['uid']
else:
raise AuthenticationError('Not logged in.')
return self._user_id
@property
def username(self):
"""
Username of the current API user
"""
if self._username is None:
if self.has_logged_in:
self._username = self._get_username()
else:
raise AuthenticationError('Not logged in.')
return self._username
@property
def has_logged_in(self):
"""Check whether the API has logged in"""
r = self.http.get(CHECKPOINT_URL)
if r.state is False:
return True
# If logged out, flush cache
self._reset_cache()
return False
def logout(self):
"""Log out"""
self.http.get(LOGOUT_URL)
self._reset_cache()
return True
@property
def root_directory(self):
"""Root directory"""
if self._root_directory is None:
self._load_root_directory()
return self._root_directory
@property
def downloads_directory(self):
"""Default directory for downloaded files"""
if self._downloads_directory is None:
self._load_downloads_directory()
return self._downloads_directory
@property
def receiver_directory(self):
"""Parent directory of the downloads directory"""
if self._receiver_directory is None:
self._receiver_directory = self.downloads_directory.parent
return self._receiver_directory
@property
def torrents_directory(self):
"""Default directory that stores uploaded torrents"""
if self._torrents_directory is None:
self._load_torrents_directory()
return self._torrents_directory
@property
def task_count(self):
"""
Number of tasks created
"""
self._req_lixian_task_lists()
return self._task_count
@property
def task_quota(self):
"""
Task quota (monthly)
"""
self._req_lixian_task_lists()
return self._task_quota
def get_tasks(self, count=30):
"""
Get ``count`` number of tasks
:param int count: number of tasks to get
:return: a list of :class:`.Task` objects
"""
return self._load_tasks(count)
def add_task_bt(self, filename, select=False):
"""
Add a new BT task
:param str filename: path to torrent file to upload
:param bool select: whether to select files in the torrent.
* True: it returns the opened torrent (:class:`.Torrent`) and
can then iterate files in :attr:`.Torrent.files` and
select/unselect them before calling :meth:`.Torrent.submit`
* False: it will submit the torrent with default selected files
"""
filename = eval_path(filename)
u = self.upload(filename, self.torrents_directory)
t = self._load_torrent(u)
if select:
return t
return t.submit()
def add_task_url(self, target_url):
"""
Add a new URL task
:param str target_url: the URL of the file to be downloaded
"""
return self._req_lixian_add_task_url(target_url)
def get_storage_info(self, human=False):
"""
Get storage info
:param bool human: whether to return human-readable sizes
:return: total and used storage
:rtype: dict
"""
res = self._req_get_storage_info()
if human:
res['total'] = humanize.naturalsize(res['total'], binary=True)
res['used'] = humanize.naturalsize(res['used'], binary=True)
return res
def upload(self, filename, directory=None):
"""
Upload a file ``filename`` to ``directory``
:param str filename: path to the file to upload
:param directory: destination :class:`.Directory`, defaults to
:attr:`.API.downloads_directory` if None
:return: the uploaded file
:rtype: :class:`.File`
"""
filename = eval_path(filename)
if directory is None:
directory = self.downloads_directory
# First request
res1 = self._req_upload(filename, directory)
data1 = res1['data']
file_id = data1['file_id']
# Second request
res2 = self._req_file(file_id)
data2 = res2['data'][0]
data2.update(**data1)
return _instantiate_uploaded_file(self, data2)
def download(self, obj, path=None, show_progress=True, resume=True,
auto_retry=True, proapi=False):
"""
Download a file
:param obj: :class:`.File` object
:param str path: local path
:param bool show_progress: whether to show download progress
:param bool resume: whether to resume on unfinished downloads
identified by filename
:param bool auto_retry: whether to retry automatically upon closed
transfer until the file's download is finished
:param bool proapi: whether to use pro API
"""
url = obj.get_download_url(proapi)
download(url, path=path, session=self.http.session,
show_progress=show_progress, resume=resume,
auto_retry=auto_retry)
def search(self, keyword, count=30):
"""
Search files or directories
:param str keyword: keyword
:param int count: number of entries to be listed
"""
kwargs = {}
kwargs['search_value'] = keyword
root = self.root_directory
entries = root._load_entries(func=self._req_files_search,
count=count, page=1, **kwargs)
res = []
for entry in entries:
if 'pid' in entry:
res.append(_instantiate_directory(self, entry))
else:
res.append(_instantiate_file(self, entry))
return res
def move(self, entries, directory):
"""
Move one or more entries (file or directory) to the destination
directory
:param list entries: a list of source entries (:class:`.BaseFile`
object)
:param directory: destination directory
:return: whether the action is successful
:raise: :class:`.APIError` if something bad happened
"""
fcids = []
for entry in entries:
if isinstance(entry, File):
fcid = entry.fid
elif isinstance(entry, Directory):
fcid = entry.cid
else:
raise APIError('Invalid BaseFile instance for an entry.')
fcids.append(fcid)
if not isinstance(directory, Directory):
raise APIError('Invalid destination directory.')
if self._req_files_move(directory.cid, fcids):
for entry in entries:
if isinstance(entry, File):
entry.cid = directory.cid
entry.reload()
return True
else:
raise APIError('Error moving entries.')
def edit(self, entry, name, mark=False):
"""
Edit an entry (file or directory)
:param entry: :class:`.BaseFile` object
:param str name: new name for the entry
:param bool mark: whether to bookmark the entry
"""
fcid = None
if isinstance(entry, File):
fcid = entry.fid
elif isinstance(entry, Directory):
fcid = entry.cid
else:
raise APIError('Invalid BaseFile instance for an entry.')
is_mark = 0
if mark is True:
is_mark = 1
if self._req_files_edit(fcid, name, is_mark):
entry.reload()
return True
else:
raise APIError('Error editing the entry.')
def mkdir(self, parent, name):
"""
Create a directory
:param parent: the parent directory
:param str name: the name of the new directory
:return: the new directory
:rtype: :class:`.Directory`
"""
pid = None
cid = None
if isinstance(parent, Directory):
pid = parent.cid
else:
raise APIError('Invalid Directory instance.')
cid = self._req_files_add(pid, name)['cid']
return self._load_directory(cid)
def _req_offline_space(self):
"""Required before accessing lixian tasks"""
url = 'http://115.com/'
params = {
'ct': 'offline',
'ac': 'space',
'_': get_timestamp(13)
}
_sign = os.environ.get('U115_BROWSER_SIGN')
if _sign is not None:
_time = os.environ.get('U115_BROWSER_TIME')
if _time is None:
msg = 'U115_BROWSER_TIME is required given U115_BROWSER_SIGN.'
raise APIError(msg)
params['sign'] = _sign
params['time'] = _time
params['uid'] = self.user_id
req = Request(url=url, params=params)
r = self.http.send(req)
if r.state:
self._signatures['offline_space'] = r.content['sign']
self._lixian_timestamp = r.content['time']
else:
msg = 'Failed to retrieve signatures.'
raise RequestFailure(msg)
def _req_lixian_task_lists(self, page=1):
"""
This request will cause the system to create a default downloads
directory if it does not exist
"""
url = 'http://115.com/lixian/'
params = {'ct': 'lixian', 'ac': 'task_lists'}
self._load_signatures()
data = {
'page': page,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
self._task_count = res.content['count']
self._task_quota = res.content['quota']
return res.content['tasks']
else:
msg = 'Failed to get tasks.'
raise RequestFailure(msg)
def _req_lixian_get_id(self, torrent=False):
"""Get `cid` of lixian space directory"""
url = 'http://115.com/'
params = {
'ct': 'lixian',
'ac': 'get_id',
'torrent': 1 if torrent else None,
'_': get_timestamp(13)
}
req = Request(method='GET', url=url, params=params)
res = self.http.send(req)
return res.content
def _req_lixian_torrent(self, u):
"""
:param u: uploaded torrent file
"""
self._load_signatures()
url = 'http://115.com/lixian/'
params = {
'ct': 'lixian',
'ac': 'torrent',
}
data = {
'pickcode': u.pickcode,
'sha1': u.sha,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
return res.content
else:
msg = res.content.get('error_msg')
self.logger.error(msg)
raise RequestFailure('Failed to open torrent.')
def _req_lixian_add_task_bt(self, t):
self._load_signatures()
url = 'http://115.com/lixian/'
params = {'ct': 'lixian', 'ac': 'add_task_bt'}
_wanted = []
for i, b in enumerate(t.files):
if b.selected:
_wanted.append(str(i))
wanted = ','.join(_wanted)
data = {
'info_hash': t.info_hash,
'wanted': wanted,
'savepath': t.name,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
return True
else:
msg = res.content.get('error_msg')
self.logger.error(msg)
raise RequestFailure('Failed to create new task.')
def _req_lixian_add_task_url(self, target_url):
self._load_signatures()
url = 'http://115.com/lixian/'
params = {'ct': 'lixian', 'ac': 'add_task_url'}
data = {
'url': target_url,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
return True
else:
msg = res.content.get('error_msg')
self.logger.error(msg)
raise RequestFailure('Failed to create new task.')
def _req_lixian_task_del(self, t):
self._load_signatures()
url = 'http://115.com/lixian/'
params = {'ct': 'lixian', 'ac': 'task_del'}
data = {
'hash[0]': t.info_hash,
'uid': self.user_id,
'sign': self._signatures['offline_space'],
'time': self._lixian_timestamp,
}
req = Request(method='POST', url=url, params=params, data=data)
res = self.http.send(req)
if res.state:
return True
else:
raise RequestFailure('Failed to delete the task.')
def _req_file_userfile(self):
url = 'http://115.com/'
params = {
'ct': 'file',
'ac': 'userfile',
'is_wl_tpl': 1,
}
req = Request(method='GET', url=url, params=params)
self.http.send(req, expect_json=False, ignore_content=True)
def _req_aps_natsort_files(self, cid, offset, limit, o='file_name',
asc=1, aid=1, show_dir=1, code=None, scid=None,
snap=0, natsort=1, source=None, type=0,
format='json', star=None, is_share=None):
"""
When :meth:`.API._req_files` is called with `o='filename'` and
`natsort=1`, API access will fail
and :meth:`.API._req_aps_natsort_files` is subsequently called with
the same kwargs. Refer to the implementation in
:meth:`.Directory.list`
"""
params = locals()
del params['self']
req = Request(method='GET', url=self.aps_natsort_url, params=params)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_files(self, cid, offset, limit, o='user_ptime', asc=1, aid=1,
show_dir=1, code=None, scid=None, snap=0, natsort=1,
source=None, type=0, format='json', star=None,
is_share=None):
"""
:param int type: type of files to be displayed
* '' (empty string): marked
* None: all
* 0: all
* 1: documents
* 2: images
* 3: music
* 4: video
* 5: zipped
* 6: applications
* 99: files only
"""
params = locals()
del params['self']
req = Request(method='GET', url=self.web_api_url, params=params)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_files_search(self, offset, limit, search_value, aid=-1,
date=None, pick_code=None, source=None, type=0,
format='json'):
params = locals()
del params['self']
url = self.web_api_url + '/search'
req = Request(method='GET', url=url, params=params)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_files_edit(self, fid, file_name=None, is_mark=0):
"""Edit a file or directory"""
url = self.web_api_url + '/edit'
data = locals()
del data['self']
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return True
else:
raise RequestFailure('Failed to access files API.')
def _req_files_add(self, pid, cname):
"""
Add a directory
:param str pid: parent directory id
:param str cname: directory name
"""
url = self.web_api_url + '/add'
data = locals()
del data['self']
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_files_move(self, pid, fids):
"""
Move files or directories
:param str pid: destination directory id
:param list fids: a list of ids of files or directories to be moved
"""
url = self.web_api_url + '/move'
data = {}
data['pid'] = pid
for i, fid in enumerate(fids):
data['fid[%d]' % i] = fid
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return True
else:
raise RequestFailure('Failed to access files API.')
def _req_file(self, file_id):
url = self.web_api_url + '/file'
data = {'file_id': file_id}
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return res.content
else:
raise RequestFailure('Failed to access files API.')
def _req_directory(self, cid):
"""Return name and pid of by cid"""
res = self._req_files(cid=cid, offset=0, limit=1, show_dir=1)
path = res['path']
count = res['count']
for d in path:
if str(d['cid']) == str(cid):
res = {
'cid': d['cid'],
'name': d['name'],
'pid': d['pid'],
'count': count,
}
return res
else:
raise RequestFailure('No directory found.')
def _req_files_download_url(self, pickcode, proapi=False):
if '_115_curtime' not in self.cookies:
self._req_file_userfile()
if not proapi:
url = self.web_api_url + '/download'
params = {'pickcode': pickcode, '_': get_timestamp(13)}
else:
url = self.proapi_url
params = {'pickcode': pickcode, 'method': 'get_file_url'}
headers = {
'Referer': self.referer_url,
}
req = Request(method='GET', url=url, params=params,
headers=headers)
res = self.http.send(req)
if res.state:
if not proapi:
return res.content['file_url']
else:
fid = list(res.content['data'].keys())[0]  # list() for Python 3 compatibility
return res.content['data'][fid]['url']['url']
else:
raise RequestFailure('Failed to get download URL.')
def _req_get_storage_info(self):
url = 'http://115.com'
params = {
'ct': 'ajax',
'ac': 'get_storage_info',
'_': get_timestamp(13),
}
req = Request(method='GET', url=url, params=params)
res = self.http.send(req)
return res.content['1']
def _req_upload(self, filename, directory):
"""Raw request to upload a file ``filename``"""
self._upload_url = self._load_upload_url()
self.http.get('http://upload.115.com/crossdomain.xml')
b = os.path.basename(filename)
target = 'U_1_' + str(directory.cid)
files = {
'Filename': ('', quote(b), ''),
'target': ('', target, ''),
'Filedata': (quote(b), open(filename, 'rb'), ''),
'Upload': ('', 'Submit Query', ''),
}
req = Request(method='POST', url=self._upload_url, files=files)
res = self.http.send(req)
if res.state:
return res.content
else:
msg = None
if res.content['code'] == 990002:
msg = 'Invalid parameter.'
elif res.content['code'] == 1001:
msg = 'Torrent upload failed. Please try again later.'
raise RequestFailure(msg)
def _req_rb_delete(self, fcid, pid):
url = 'http://web.api.115.com/rb/delete'
data = {
'pid': pid,
'fid[0]': fcid,
}
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return True
else:
msg = 'Failed to delete this file or directory.'
if 'errno' in res.content:
if res.content['errno'] == 990005:
raise JobError()
self.logger.error(res.content['error'])
raise APIError(msg)
def _req_get_user_aq(self):
url = 'http://my.115.com/'
data = {
'ct': 'ajax',
'ac': 'get_user_aq'
}
req = Request(method='POST', url=url, data=data)
res = self.http.send(req)
if res.state:
return res.content
def _load_signatures(self, force=True):
if not self._signatures or force:
self._req_offline_space()
def _load_tasks(self, count, page=1, tasks=None):
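# Tasks are fetched page by page (num_tasks_per_page per request) and
# accumulated recursively until `count` tasks have been collected.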
if tasks is None:
tasks = []
req_tasks = self._req_lixian_task_lists(page)
loaded_tasks = []
if req_tasks is not None:
loaded_tasks = [
_instantiate_task(self, t) for t in req_tasks[:count]
]
if count <= self.num_tasks_per_page or req_tasks is None:
return tasks + loaded_tasks
else:
return self._load_tasks(count - self.num_tasks_per_page,
page + 1, tasks + loaded_tasks)
def _load_directory(self, cid):
kwargs = self._req_directory(cid)
if str(kwargs['pid']) != str(cid):
return Directory(api=self, **kwargs)
def _load_root_directory(self):
"""
Load root directory, which has a cid of 0
"""
kwargs = self._req_directory(0)
self._root_directory = Directory(api=self, **kwargs)
def _load_torrents_directory(self):
"""
Load torrents directory
If it does not exist yet, this request will cause the system to create
one
"""
r = self._req_lixian_get_id(torrent=True)
self._torrents_directory = self._load_directory(r['cid'])
def _load_downloads_directory(self):
"""
Load downloads directory
If it does not exist yet, this request will cause the system to create
one
"""
r = self._req_lixian_get_id(torrent=False)
self._downloads_directory = self._load_directory(r['cid'])
def _load_upload_url(self):
res = self._parse_src_js_var('upload_config_h5')
return res['url']
def _load_torrent(self, u):
res = self._req_lixian_torrent(u)
return _instantiate_torrent(self, res)
def _parse_src_js_var(self, variable):
"""Parse JavaScript variables in the source page"""
src_url = 'http://115.com'
r = self.http.get(src_url)
soup = BeautifulSoup(r.content, 'html.parser')
scripts = [script.text for script in soup.find_all('script')]
text = '\n'.join(scripts)
pattern = "%s\s*=\s*(.*);" % (variable.upper())
m = re.search(pattern, text)
if not m:
msg = 'Cannot parse source JavaScript for %s.' % variable
raise APIError(msg)
return json.loads(m.group(1).strip())
def _get_username(self):
return unquote(self.cookies.get('OOFL'))
class Base(object):
def __repr__(self):
try:
u = self.__str__()
except (UnicodeEncodeError, UnicodeDecodeError):
u = '[Bad Unicode data]'
repr_type = type(u)
return repr_type('<%s: %s>' % (self.__class__.__name__, u))
def __str__(self):
if hasattr(self, '__unicode__'):
if PY3:
return self.__unicode__()
else:
return unicode(self).encode('utf-8')
return txt_type('%s object' % self.__class__.__name__)
class Passport(Base):
"""
Passport for user authentication
:ivar str username: username
:ivar str password: user password
:ivar dict form: a dictionary of POST data to login
:ivar int user_id: user ID of the authenticated user
:ivar dict data: data returned upon login
"""
def __init__(self, username, password):
self.username = username
self.password = password
self.form = self._form()
self.data = None
def _form(self):
vcode = self._vcode()
f = {
'login[ssoent]': 'A1',
'login[version]': '2.0',
'login[ssoext]': vcode,
'login[ssoln]': self.username,
'login[ssopw]': self._ssopw(vcode),
'login[ssovcode]': vcode,
'login[safe]': '1',
'login[time]': '0',
'login[safe_login]': '0',
'goto': 'http://115.com/',
}
return f
def _vcode(self):
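# The vcode encodes the current UNIX time in hex: eight hex digits for
# the whole seconds plus five for the microsecond fraction.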
s = '%.6f' % time.time()
whole, frac = map(int, s.split('.'))
res = '%.8x%.5x' % (whole, frac)
return res
def _ssopw(self, vcode):
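# The password is sent as a chained SHA-1 digest:
# sha1(sha1(sha1(password).hex + sha1(username).hex).hex + vcode.upper())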
p = sha1(utf8_encode(self.password)).hexdigest()
u = sha1(utf8_encode(self.username)).hexdigest()
v = vcode.upper()
pu = sha1(utf8_encode(p + u)).hexdigest()
return sha1(utf8_encode(pu + v)).hexdigest()
def __unicode__(self):
return self.username
class BaseFile(Base):
def __init__(self, api, cid, name):
"""
:param API api: associated API object
:param str cid: directory id
* For file: this represents the directory it belongs to;
* For directory: this represents itself
:param str name: originally named `n`
NOTICE
cid, fid and pid are in string format at this time
"""
self.api = api
self.cid = cid
self.name = name
self._deleted = False
def delete(self):
"""
Delete this file or directory
:return: whether deletion is successful
:raise: :class:`.APIError` if this file or directory is already deleted
"""
fcid = None
pid = None
if isinstance(self, File):
fcid = self.fid
pid = self.cid
elif isinstance(self, Directory):
fcid = self.cid
pid = self.pid
else:
raise APIError('Invalid BaseFile instance.')
if not self._deleted:
if self.api._req_rb_delete(fcid, pid):
self._deleted = True
return True
else:
raise APIError('This file or directory is already deleted.')
def move(self, directory):
"""
Move this file or directory to the destination directory
:param directory: destination directory
:return: whether the action is successful
:raise: :class:`.APIError` if something bad happened
"""
return self.api.move([self], directory)
def edit(self, name, mark=False):
"""
Edit this file or directory
:param str name: new name for this entry
:param bool mark: whether to bookmark this entry
"""
return self.api.edit(self, name, mark)
@property
def is_deleted(self):
"""Whether this file or directory is deleted"""
return self._deleted
def __eq__(self, other):
if isinstance(self, File):
if isinstance(other, File):
return self.fid == other.fid
elif isinstance(self, Directory):
if isinstance(other, Directory):
return self.cid == other.cid
return False
def __ne__(self, other):
return not self.__eq__(other)
def __unicode__(self):
return self.name
class File(BaseFile):
"""
File in a directory
:ivar int fid: file id
:ivar str cid: cid of the current directory
:ivar int size: size in bytes
:ivar str size_human: human-readable size
:ivar str file_type: originally named `ico`
:ivar str sha: SHA1 hash
:ivar datetime.datetime date_created: in "%Y-%m-%d %H:%M:%S" format,
originally named `t`
:ivar str thumbnail: thumbnail URL, originally named `u`
:ivar str pickcode: originally named `pc`
"""
def __init__(self, api, fid, cid, name, size, file_type, sha,
date_created, thumbnail, pickcode, *args, **kwargs):
super(File, self).__init__(api, cid, name)
self.fid = fid
self.size = size
self.size_human = humanize.naturalsize(size, binary=True)
self.file_type = file_type
self.sha = sha
self.date_created = date_created
self.thumbnail = thumbnail
self.pickcode = pickcode
self._directory = None
self._download_url = None
@property
def directory(self):
"""Directory that holds this file"""
if self._directory is None:
self._directory = self.api._load_directory(self.cid)
return self._directory
def get_download_url(self, proapi=False):
"""
Get this file's download URL
:param bool proapi: whether to use pro API
"""
if self._download_url is None:
self._download_url = \
self.api._req_files_download_url(self.pickcode, proapi)
return self._download_url
@property
def url(self):
"""Alias for :meth:`.File.get_download_url` with `proapi=False`"""
return self.get_download_url()
def download(self, path=None, show_progress=True, resume=True,
auto_retry=True, proapi=False):
"""Download this file"""
self.api.download(self, path, show_progress, resume, auto_retry,
proapi)
@property
def is_torrent(self):
"""Whether the file is a torrent"""
return self.file_type == 'torrent'
def open_torrent(self):
"""
Open the torrent (if it is a torrent)
:return: opened torrent
:rtype: :class:`.Torrent`
"""
if self.is_torrent:
return self.api._load_torrent(self)
def reload(self):
"""
Reload file info and metadata
* name
* sha
* pickcode
"""
res = self.api._req_file(self.fid)
data = res['data'][0]
self.name = data['file_name']
self.sha = data['sha1']
self.pickcode = data['pick_code']
class Directory(BaseFile):
"""
:ivar str cid: cid of this directory
:ivar str pid: represents the parent directory it belongs to
:ivar int count: number of entries in this directory
:ivar datetime.datetime date_created: integer, originally named `t`
:ivar str pickcode: string, originally named `pc`
"""
max_entries_per_load = 24 # Smaller than 24 may cause abnormal result
def __init__(self, api, cid, name, pid, count=-1,
date_created=None, pickcode=None, is_root=False,
*args, **kwargs):
super(Directory, self).__init__(api, cid, name)
self.pid = pid
self._count = count
if date_created is not None:
self.date_created = date_created
self.pickcode = pickcode
self._parent = None
@property
def is_root(self):
"""Whether this directory is the root directory"""
return int(self.cid) == 0
@property
def parent(self):
"""Parent directory that holds this directory"""
if self._parent is None:
if self.pid is not None:
self._parent = self.api._load_directory(self.pid)
return self._parent
@property
def count(self):
"""Number of entries in this directory"""
if self._count == -1:
self.reload()
return self._count
def reload(self):
"""
Reload directory info and metadata
* `name`
* `pid`
* `count`
"""
r = self.api._req_directory(self.cid)
self.pid = r['pid']
self.name = r['name']
self._count = r['count']
def _load_entries(self, func, count, page=1, entries=None, **kwargs):
"""
Load entries
:param function func: function (:meth:`.API._req_files` or
:meth:`.API._req_search`) that returns entries
:param int count: number of entries to load. This value should never
be greater than self.count
:param int page: page number (starting from 1)
"""
if entries is None:
entries = []
res = \
func(offset=(page - 1) * self.max_entries_per_load,
limit=self.max_entries_per_load,
**kwargs)
loaded_entries = [
entry for entry in res['data'][:count]
]
#total_count = res['count']
total_count = self.count
# count should never be greater than total_count
if count > total_count:
count = total_count
if count <= self.max_entries_per_load:
return entries + loaded_entries
else:
cur_count = count - self.max_entries_per_load
return self._load_entries(
func=func, count=cur_count, page=page + 1,
entries=entries + loaded_entries, **kwargs)
def list(self, count=30, order='user_ptime', asc=False, show_dir=True,
natsort=True):
"""
List directory contents
:param int count: number of entries to be listed
:param str order: order of entries, originally named `o`. This value
may be one of `user_ptime` (default), `file_size` and `file_name`
:param bool asc: whether in ascending order
:param bool show_dir: whether to show directories
:param bool natsort: whether to use natural sort
Return a list of :class:`.File` or :class:`.Directory` objects
"""
if self.cid is None:
return False
self.reload()
kwargs = {}
# `cid` is the only required argument
kwargs['cid'] = self.cid
kwargs['asc'] = 1 if asc is True else 0
kwargs['show_dir'] = 1 if show_dir is True else 0
kwargs['natsort'] = 1 if natsort is True else 0
kwargs['o'] = order
# When the downloads directory exists along with its parent directory,
# the receiver directory, its parent's count (receiver directory's
# count) does not include the downloads directory. This behavior is
# similar to its parent's parent (root), the count of which does not
# include the receiver directory.
# The following code fixes this behavior so that a directory's count
# correctly reflects the actual number of entries in it.
# A side-effect is that this code may cause the system to create the
# receiver and downloads directories if they do not exist.
if self.is_root or self == self.api.receiver_directory:
self._count += 1
if self.count <= count:
# count should never be greater than self.count
count = self.count
try:
entries = self._load_entries(func=self.api._req_files,
count=count, page=1, **kwargs)
# When natsort=1 and order='file_name', API access will fail
except RequestFailure as e:
if natsort is True and order == 'file_name':
entries = \
self._load_entries(func=self.api._req_aps_natsort_files,
count=count, page=1, **kwargs)
else:
raise e
res = []
for entry in entries:
if 'pid' in entry:
res.append(_instantiate_directory(self.api, entry))
else:
res.append(_instantiate_file(self.api, entry))
return res
def mkdir(self, name):
"""
Create a new directory in this directory
"""
return self.api.mkdir(self, name)
class Task(Base):
"""
BitTorrent or URL task
:ivar datetime.datetime add_time: added time
:ivar str cid: associated directory id, if any. For a directory task (
e.g. BT task), this is its associated directory's cid. For a file
task (e.g. HTTP url task), this is the cid of the downloads directory.
This value may be None if the task is failed and has no corresponding
directory
:ivar str file_id: equivalent to `cid` of :class:`.Directory`. This value
may be None if the task is failed and has no corresponding directory
:ivar str info_hash: hashed value
:ivar datetime.datetime last_update: last updated time
:ivar int left_time: time left
:ivar int move: moving state
* 0: not transferred
* 1: transferred
* 2: partially transferred
:ivar str name: name of this task
:ivar int peers: number of peers
:ivar int percent_done: <=100, originally named `percentDone`
:ivar int rate_download: download rate (B/s), originally named
`rateDownload`
:ivar int size: size of task
:ivar str size_human: human-readable size
:ivar int status: status code
* -1: failed
* 1: downloading
* 2: downloaded
* 4: searching resources
"""
def __init__(self, api, add_time, file_id, info_hash, last_update,
left_time, move, name, peers, percent_done, rate_download,
size, status, cid, pid, url, *args, **kwargs):
self.api = api
self.cid = cid
self.name = name
self.add_time = add_time
self.file_id = file_id
self.info_hash = info_hash
self.last_update = last_update
self.left_time = left_time
self.move = move
self.peers = peers
self.percent_done = percent_done
self.rate_download = rate_download
self.size = size
self.size_human = humanize.naturalsize(size, binary=True)
self.status = status
self.url = url
self._directory = None
self._deleted = False
self._count = -1
@property
def is_directory(self):
"""
:return: whether this task is associated with a directory.
:rtype: bool
"""
if self.cid is None:
msg = 'Cannot determine whether this task is a directory.'
if not self.is_transferred:
msg += ' This task has not been transferred.'
raise TaskError(msg)
return self.api.downloads_directory.cid != self.cid
@property
def is_bt(self):
"""Alias of `is_directory`"""
return self.is_directory
def delete(self):
"""
Delete task (does not influence its corresponding directory)
:return: whether deletion is successful
:raise: :class:`.TaskError` if the task is already deleted
"""
if not self._deleted:
if self.api._req_lixian_task_del(self):
self._deleted = True
return True
raise TaskError('This task is already deleted.')
@property
def is_deleted(self):
"""
:return: whether this task is deleted
:rtype: bool
"""
return self._deleted
@property
def is_transferred(self):
"""
:return: whether this task has been transferred
:rtype: bool
"""
return self.move == 1
@property
def status_human(self):
"""
Human readable status
:return:
* `DOWNLOADING`: the task is downloading files
* `BEING TRANSFERRED`: the task is being transferred
* `TRANSFERRED`: the task has been transferred to downloads \
directory
* `SEARCHING RESOURCES`: the task is searching resources
* `FAILED`: the task is failed
* `DELETED`: the task is deleted
* `UNKNOWN STATUS`
:rtype: str
"""
res = None
if self._deleted:
return 'DELETED'
if self.status == 1:
res = 'DOWNLOADING'
elif self.status == 2:
if self.move == 0:
res = 'BEING TRANSFERRED'
elif self.move == 1:
res = 'TRANSFERRED'
elif self.move == 2:
res = 'PARTIALLY TRANSFERRED'
elif self.status == 4:
res = 'SEARCHING RESOURCES'
elif self.status == -1:
res = 'FAILED'
if res is not None:
return res
return 'UNKNOWN STATUS'
@property
def directory(self):
"""Associated directory, if any, with this task"""
if not self.is_directory:
msg = 'This task is a file task with no associated directory.'
raise TaskError(msg)
if self._directory is None:
if self.is_transferred:
self._directory = self.api._load_directory(self.cid)
if self._directory is None:
msg = 'No directory associated with this task: Task is %s.' % \
self.status_human.lower()
raise TaskError(msg)
return self._directory
@property
def parent(self):
"""Parent directory of the associated directory"""
return self.directory.parent
@property
def count(self):
"""Number of entries in the associated directory"""
return self.directory.count
def list(self, count=30, order='user_ptime', asc=False, show_dir=True,
natsort=True):
"""
List files of the associated directory to this task.
:param int count: number of entries to be listed
:param str order: originally named `o`
:param bool asc: whether in ascending order
:param bool show_dir: whether to show directories
"""
return self.directory.list(count, order, asc, show_dir, natsort)
def __unicode__(self):
return self.name
class Torrent(Base):
"""
Opened torrent before becoming a task
:ivar api: associated API object
:ivar str name: task name, originally named `torrent_name`
:ivar int size: task size, originally named `file_size`
:ivar str info_hash: hashed value
:ivar int file_count: number of files included
:ivar list files: files included (list of :class:`.TorrentFile`),
originally named `torrent_filelist_web`
"""
def __init__(self, api, name, size, info_hash, file_count, files=None,
*args, **kwargs):
self.api = api
self.name = name
self.size = size
self.size_human = humanize.naturalsize(size, binary=True)
self.info_hash = info_hash
self.file_count = file_count
self.files = files
self.submitted = False
def submit(self):
"""Submit this torrent and create a new task"""
if self.api._req_lixian_add_task_bt(self):
self.submitted = True
return True
return False
@property
def selected_files(self):
"""List of selected :class:`.TorrentFile` objects of this torrent"""
return [f for f in self.files if f.selected]
@property
def unselected_files(self):
"""List of unselected :class:`.TorrentFile` objects of this torrent"""
return [f for f in self.files if not f.selected]
def __unicode__(self):
return self.name
class TorrentFile(Base):
"""
File in the torrent file list
:param torrent: the torrent that holds this file
:type torrent: :class:`.Torrent`
:param str path: file path in the torrent
:param int size: file size
:param bool selected: whether this file is selected
"""
def __init__(self, torrent, path, size, selected, *args, **kwargs):
self.torrent = torrent
self.path = path
self.size = size
self.size_human = humanize.naturalsize(size, binary=True)
self.selected = selected
def select(self):
"""Select this file"""
self.selected = True
def unselect(self):
"""Unselect this file"""
self.selected = False
def __unicode__(self):
return '[%s] %s' % ('*' if self.selected else ' ', self.path)
def _instantiate_task(api, kwargs):
"""Create a Task object from raw kwargs"""
file_id = kwargs['file_id']
kwargs['file_id'] = file_id if str(file_id).strip() else None
kwargs['cid'] = kwargs['file_id'] or None
kwargs['rate_download'] = kwargs['rateDownload']
kwargs['percent_done'] = kwargs['percentDone']
kwargs['add_time'] = get_utcdatetime(kwargs['add_time'])
kwargs['last_update'] = get_utcdatetime(kwargs['last_update'])
is_transferred = (kwargs['status'] == 2 and kwargs['move'] == 1)
if is_transferred:
kwargs['pid'] = api.downloads_directory.cid
else:
kwargs['pid'] = None
del kwargs['rateDownload']
del kwargs['percentDone']
if 'url' in kwargs:
if not kwargs['url']:
kwargs['url'] = None
else:
kwargs['url'] = None
task = Task(api, **kwargs)
if is_transferred:
task._parent = api.downloads_directory
return task
def _instantiate_file(api, kwargs):
kwargs['file_type'] = kwargs['ico']
kwargs['date_created'] = string_to_datetime(kwargs['t'])
kwargs['pickcode'] = kwargs['pc']
kwargs['name'] = kwargs['n']
kwargs['thumbnail'] = kwargs.get('u')
kwargs['size'] = kwargs['s']
del kwargs['ico']
del kwargs['t']
del kwargs['pc']
del kwargs['s']
if 'u' in kwargs:
del kwargs['u']
return File(api, **kwargs)
def _instantiate_directory(api, kwargs):
kwargs['name'] = kwargs['n']
kwargs['date_created'] = get_utcdatetime(float(kwargs['t']))
kwargs['pickcode'] = kwargs.get('pc')
return Directory(api, **kwargs)
def _instantiate_uploaded_file(api, kwargs):
kwargs['fid'] = kwargs['file_id']
kwargs['name'] = kwargs['file_name']
kwargs['pickcode'] = kwargs['pick_code']
kwargs['size'] = kwargs['file_size']
kwargs['sha'] = kwargs['sha1']
kwargs['date_created'] = get_utcdatetime(kwargs['file_ptime'])
kwargs['thumbnail'] = None
_, ft = os.path.splitext(kwargs['name'])
kwargs['file_type'] = ft[1:]
return File(api, **kwargs)
def _instantiate_torrent(api, kwargs):
kwargs['size'] = kwargs['file_size']
kwargs['name'] = kwargs['torrent_name']
file_list = kwargs['torrent_filelist_web']
del kwargs['file_size']
del kwargs['torrent_name']
del kwargs['torrent_filelist_web']
torrent = Torrent(api, **kwargs)
torrent.files = [_instantiate_torrent_file(torrent, f) for f in file_list]
return torrent
def _instantiate_torrent_file(torrent, kwargs):
kwargs['selected'] = kwargs['wanted'] == 1
del kwargs['wanted']
return TorrentFile(torrent, **kwargs)
class APIError(Exception):
"""General error related to API"""
def __init__(self, *args, **kwargs):
content = kwargs.pop('content', None)
self.content = content
super(APIError, self).__init__(*args, **kwargs)
class TaskError(APIError):
"""Task has unstable status or no directory operation"""
pass
class AuthenticationError(APIError):
"""Authentication error"""
pass
class InvalidAPIAccess(APIError):
"""Invalid and forbidden API access"""
pass
class RequestFailure(APIError):
"""Request failure"""
pass
class JobError(APIError):
"""Job running error (request multiple similar jobs simultaneously)"""
def __init__(self, *args, **kwargs):
content = kwargs.pop('content', None)
self.content = content
if not args:
msg = 'Your account has a similar job running. Try again later.'
args = (msg,)
super(JobError, self).__init__(*args, **kwargs) | 115wangpan | /115wangpan-0.7.6.tar.gz/115wangpan-0.7.6/u115/api.py | api.py |
import functools
import os
import pickle
import subprocess
import re
from collections import UserDict
from typing import Callable
from colorit import *
from prompt_toolkit import prompt
from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.shortcuts import yes_no_dialog
from greeting import *
from help import *
init_colorit()
class MyException(Exception):
pass
class Notepad(UserDict):
def __getitem__(self, title):
if not title in self.data.keys():
raise MyException(color("This article isn't in the Notepad",Colors.red))
note = self.data[title]
return note
def add_note(self, note) -> str:
self.data.update({note.title.value:note})
return color('Done!',Colors.blue)
def delete_note(self, title):
try:
self.data.pop(title)
return color(f"{title} was removed",Colors.purple)
except KeyError:
return color("This note isn't in the Notepad",Colors.blue)
def get_notes(self, file_name):
with open(file_name, 'ab+') as fh:
fh.seek(0)
try:
self.data = pickle.load(fh)
except EOFError:
pass
def show_notes_titles(self):
res = "\n".join([note for note in notes])
return color(res,Colors.orange)
def write_notes(self, file_name):
with open(file_name, "wb") as fh:
pickle.dump(self, fh)
class Field:
def __init__(self, value):
self.__value = None
self.value = value
class NoteTag(Field):
pass
class NoteTitle(Field):
@property
def value(self):
return self.__value
@value.setter
def value(self, title):
if len(title) == 0:
raise ValueError(color("The title wasn't added. It should have at least 1 character.",Colors.red))
self.__value = title
class NoteBody(Field):
pass
class Note:
def __init__(self, title: NoteTitle, body: NoteBody, tags: list[NoteTag]=None) -> None:
self.title = title
self.body = body if body else ''
self.tags = tags if tags else []
def edit_tags(self, tags: list[NoteTag]):
self.tags = tags
def edit_title(self, title: NoteTitle):
self.title = title
def edit_body(self, body: NoteBody):
self.body = body
def show_note(self):
return '\n'.join([f"Title: {self.title.value}", f"Body: {self.body.value}", f"Tags: {self.show_tags()}"])
def show_tags(self):
if not self.tags:
return color("Tags: Empty", Colors.red)
return ', '.join([tag.value for tag in self.tags])
def decorator_input(func: Callable) -> Callable:
@functools.wraps(func)
def wrapper(*words):
try:
return func(*words)
except KeyError as err:
return err
except IndexError:
return color("You didn't enter the title or keywords",Colors.red)
except TypeError:
return color("Sorry, this command doesn't exist",Colors.red)
except Exception as err:
return err
return wrapper
@decorator_input
def add_note(*args) -> str:
title = NoteTitle(input(color("Enter the title: ",Colors.yellow)))
if title.value in notes.data.keys():
raise MyException(color('This title already exists',Colors.red))
body = NoteBody(input(color("Enter the note: ",Colors.yellow)))
tags = input(color("Enter tags (separate them with ',') or press Enter to skip this step: ",Colors.yellow))
tags = [NoteTag(t.strip()) for t in tags.split(',')] if tags else []
note = Note(title, body, tags)
return notes.add_note(note)
@decorator_input
def delete_note(*args: str) -> str:
return notes.delete_note(args[0])
@decorator_input
def edit_note(*args) -> str:
title = args[0]
if title in notes.data.keys():
note = notes.data.get(title)
user_title = input(color("Enter new title or press 'enter' to skip this step: ",Colors.yellow))
if user_title:
if not user_title in notes.data.keys():
notes.data[user_title] = notes.data.pop(title)
note.edit_title(NoteTitle(user_title))
else:
raise MyException(color('This title already exists.',Colors.red))
try:
body = edit(note.body.value, 'body')
if body:
body = NoteBody(body)
note.edit_body(body)
except Exception as err:
print(err)
try:
tags = edit(note.show_tags(), 'tags')
if tags:
tags = [NoteTag(t.strip()) for t in tags.split(',')]
note.edit_tags(tags)
except Exception as err:
print(err)
return "Done!"
@decorator_input
def edit(text: str, part) -> str:
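# Round-trip edit: dump the current text to edit_note.txt, open it in the
# system text editor, wait for the user to confirm, then read the file back.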
user_input = input(color(f"Enter any letter if you want to edit {part} or press 'enter' to skip this step. ",Colors.green))
if user_input:
with open('edit_note.txt', 'w') as fh:
fh.write(text)
run_app()
mes = ''
if part == 'tags':
mes = color("Separate tags with ','",Colors.green)
input(color(f'Press enter or any letter if you finished editing. Please, make sure you closed the text editor. {mes}',Colors.green))
with open('edit_note.txt', 'r') as fh:
edited_text = fh.read()
return edited_text
@decorator_input
def find(*args) -> str:
# Join the positional args into a search phrase; prompt if none was given.
phrase = " ".join(args).strip()
if not phrase:
phrase = input(color("Enter the phrase you want to find: ",Colors.yellow))
notes_list = []
for note in notes.data.values():
if re.search(phrase, note.body.value) or re.search(phrase, note.title.value, flags=re.IGNORECASE):
notes_list.append(note.title.value)
if len(notes_list) == 0:
return "No matches"
return '\n'.join([title for title in notes_list])
@decorator_input
def find_tags(*args: str) -> str:
if len(args) == 0:
return "You didn't enter any tags."
all_notes = list(notes.data.values())
notes_dict = {title:[] for title in notes.data.keys()}
for arg in args:
for note in all_notes:
if arg in [tag.value for tag in note.tags]:
notes_dict[note.title.value].append(arg)
sorted_dict = sorted(notes_dict, key=lambda k: len(notes_dict[k]), reverse=True)
return '\n'.join([f"{key}:{notes_dict[key]}" for key in sorted_dict if len(notes_dict[key]) > 0])
@decorator_input
def goodbye(*args) -> str:
return 'Goodbye!'
def get_command(words: list[str]) -> Callable:
if words[0] == '':
raise KeyError("This command doesn't exist")
for key in commands_dict.keys():
try:
if re.search(fr'\b{words[0].lower()}\b', str(key)):
return commands_dict[key]
except re.error:
break
raise KeyError("This command doesn't exist")
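# Lookup sketch: get_command(['add', 'milk']) matches 'add' as a whole word
# against the tuple key ('add', 'add_note') and returns the add_note handler.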
def run_app():
if os.name == "nt": # Windows
os.startfile('edit_note.txt')
elif os.uname().sysname == "Darwin": # macOS
subprocess.call(["open", 'edit_note.txt'])
else: # Linux and other POSIX systems
subprocess.call(["xdg-open", 'edit_note.txt'])
@decorator_input
def show_note(*args: str) -> str:
note = notes.data.get(args[0])
if note is None:
return color("This note doesn't exist",Colors.red)
return note.show_note()
notes = Notepad()
notes.get_notes('notes.bin')
commands_dict = {('add', 'add_note'):add_note,
('edit', 'edit_note'):edit_note,
('show', 'show_note'):show_note,
('showall',):notes.show_notes_titles,
('find_tags',):find_tags,
('find',):find,
('delete',):delete_note,
('goodbye','close','exit','quit'):goodbye
}
word_completer = WordCompleter(["add", "add_note", "edit", "edit_note", "show", "show_note", "showall" ,"find_tags", "find", "delete" ,"."])
def main_notes():
print(color(greeting,Colors.green))
print(background(color("WRITE HELP TO SEE ALL COMMANDS ",Colors.yellow),Colors.blue))
print(background(color("WRITE 'exit', 'close' or 'bye' to close the bot ",Colors.blue),Colors.yellow))
while True:
words = prompt('Enter your command: ', completer=word_completer).split(" ")
if words[0].lower() == "help":
print(pers_assistant_help())
try:
func = get_command(words)
except KeyError as error:
print(error)
continue
print(func(*words[1:]))
if func.__name__ == 'goodbye':
confirm_exit = yes_no_dialog(
title='EXIT',
text='Do you want to close the bot?').run()
if confirm_exit:
notes.write_notes('notes.bin')
print(color("Bye, see you soon...",Colors.yellow))
break
else:
continue
import pickle
import re
from datetime import datetime, timedelta
from colorit import *
from prompt_toolkit import prompt
from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.shortcuts import yes_no_dialog
from Notepad import *
from addressbook import *
from greeting import greeting
from help import *
from sort import *
colorit.init_colorit()
class Error(Exception):
pass
STOPLIST =[".", "end", "close","exit","bye","good bye"]
users = []
def verificate_email(text: str):
email_re = re.findall(r"\w+@\w+\.\w+", text)
email = "".join(email_re)
if email:
return email
else:
raise Error
def verificate_birthday(text:str):
date_re = re.findall(r"\d{4}\.\d{2}\.\d{2}", text)
date = "".join(date_re)
if date:
return date
else:
raise Error
def verificate_number(num): #Done
flag = True
try:
number = re.sub(r"[\+\(\)A-Za-z\ ]", "", num)
if len(number) == 12:
number = "+" + number
elif len(number) == 10:
number = "+38" + number
elif len(number) == 9:
number = "+380" + number
else:
flag = False
raise Error
except Error:
print(color(f"This number dont correct {number}",Colors.red))
return number if flag else ""
def add_user(text:str): #Done
text = text.split()
name = text[0]
phone = text[1]
if name in ad:
return "this user already exist"
else:
name = Name(name)
phone = Phone(phone)
rec = Record(name, phone)
ad.add_record(rec)
return color("Done",Colors.blue)
def show_all(nothing= ""): # Done
if len(ad) == 0:
return (color("AddressBook is empty", Colors.red))
else:
number = len(ad)
ad.iterator(number)
return color("Done",Colors.blue)
def add_phone(text:str):
text = text.split()
name = text[0]
phone = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
phone = Phone(phone)
adding.add_phone(phone)
return color("Done",Colors.blue)
def add_email(text:str):
text = text.split()
name = text[0]
email = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
email = Email(email)
adding.add_email(email)
return color("Done",Colors.blue)
def add_birthday(text:str):
text = text.split()
name = text[0]
birthday = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
birthday = Birthday(birthday)
adding.add_birthday(birthday)
return color("Done",Colors.blue)
def add_tags(text:str):
text = text.split()
name = text[0]
tags = " ".join(text[1:])
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
tags = Tags(tags)
adding.add_tags(tags)
return color("Done",Colors.blue)
def add_adress(text:str):
text = text.split()
name = text[0]
adress = " ".join(text[1:])
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adress = Adress(adress)
adding.add_adress(adress)
return color("Done",Colors.blue)
def change_adress(text:str):
text = text.split()
name = text[0]
adress = " ".join(text[1:])
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adress = Adress(adress)
adding.change_adress(adress)
return color("Done",Colors.blue)
def change_phone(text:str):
text = text.split()
name = text[0]
oldphone = text[1]
newphone = text[2]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
# oldphone = Phone(oldphone)
# newphone = Phone(newphone)
adding.change_phone(oldphone,newphone)
return color("Done",Colors.blue)
def change_email(text:str):
text = text.split()
name = text[0]
newemail = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
newemail = Email(newemail)
adding.change_email(newemail)
return color("Done",Colors.blue)
def change_birthday(text:str):
text = text.split()
name = text[0]
birthday = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
birthday = Birthday(birthday)
adding.change_birthday(birthday)
return color("Done",Colors.blue)
def remove_phone(text:str):
text = text.split()
name = text[0]
phone = text[1]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
if phone == "-":
adding = ad[name]
adding.remove_phone(phone)
elif name in ad:
adding = ad[name]
phone = Phone(phone)
adding.remove_phone(phone)
return color("Done",Colors.blue)
def remove_email(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adding.remove_email()
return color("Done",Colors.blue)
def remove_birthday(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adding.remove_birthday()
return color("Done",Colors.blue)
def remove_tags(text:str):
text = text.split()
name = text[0]
tags = " ".join(text[1:]).strip()
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adding.remove_tags(tags)
return color("Done",Colors.blue)
def remove_user(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
del ad[name]
return color("Done",Colors.blue)
def remove_adress(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
adding.remove_adress()
return color("Done",Colors.blue)
def find_name(text):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
print(ad.find_name(name))
return color("Done",Colors.blue)
def find_tags(text:str):
text = text.split()
tags = text[0:]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
print(ad.find_tags(tags))
return color("Done",Colors.blue)
def find_phone(text:str):
text = text.split()
phone = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
print(ad.find_phone(phone))
return color("Done",Colors.blue)
def when_birthday(text:str):
text = text.split()
name = text[0]
if len(ad) == 0:
return color("Addressbook is empty", Colors.red)
if name not in ad:
return color("This user dont exist in addressbook", Colors.red)
elif name in ad:
adding = ad[name]
print(adding.days_to_birthday())
return color("Done",Colors.blue)
def birthdays_within(text:str):
days = int(text.split()[0])
flag = False
current = datetime.now()
future = current + timedelta(days=days)
for name, record in ad.items():
if record.get_birthday() is None:
pass
else:
userdate = datetime.strptime(record.get_birthday(), "%Y.%m.%d").date()
userdate = userdate.replace(year=current.year)
if userdate < current.date(): # birthday already passed this year, so check next year
userdate = userdate.replace(year=current.year + 1)
if current.date() <= userdate <= future.date():
flag = True
print(color(f"\n{name.title()} has birthday {record.get_birthday()}",Colors.yellow))
return color("Done",Colors.blue) if flag == True else color("Nobody have birthday in this period",Colors.red)
def help(tst=""):
instruction = color("""
\nCOMMANDS\n
show all
add user <FirstName_LastName> <phone>
add phone <user> <phone>
add email <user> <email>
add birthday <user> <date>
add tags <user> <tags>
add adress <user> <adress>
change adress <user> <new_adress>
change phone <user> <oldPhone> <newPhone>
change email <user> <newEmail>
change birthday <user> <newBirthday>
remove phone <user> <phone>
remove email <user>
remove birthday <user>
remove tags <user> <tags>
remove user <user>
remove adress <user>
find name <name>
find tags <tags>
find phone <phone>
sort directory <path to folder>
when birthday <name>
birthdays within <days - must be an integer>
""",Colors.orange)
return instruction
commands = {
"help": pers_assistant_help,
"add phone": add_phone,
"add user": add_user,
"show all": show_all,
"add email": add_email,
"add birthday": add_birthday,
"add tags": add_tags,
"add adress": add_adress,
"change adress": change_adress,
"change phone": change_phone,
"change email": change_email,
"change birthday": change_birthday,
"remove phone": remove_phone,
"remove email" :remove_email,
"remove birthday": remove_birthday,
"remove tags": remove_tags,
"remove user": remove_user,
"remove adress": remove_adress,
"find name": find_name,
"find tags": find_tags,
"find phone": find_phone,
"sort directory": sorting,
"when birthday": when_birthday,
"birthdays within": birthdays_within,
}
word_completer = WordCompleter(list(commands.keys()))
def parser(userInput:str):
if len(userInput.split()) == 2:
return commands[userInput.strip()], "None"
for command in commands.keys():
if userInput.startswith(str(command)):
text = userInput.replace(command, "")
command = commands[command]
# print(text.strip().split())
return command, text.strip()
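# Parsing sketch: parser('add phone Dima 0993796625') matches the registered
# command prefix 'add phone' and returns (add_phone, 'Dima 0993796625');
# a bare two-word command such as 'show all' returns (show_all, "None").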
def main():
print(color(greeting,Colors.green))
print(background(color("WRITE HELP TO SEE ALL COMMANDS ",Colors.yellow),Colors.blue))
print(background(color("WRITE 'exit', 'close' or 'bye' for close bot ",Colors.blue),Colors.yellow))
ad.load_contacts_from_file()
while True:
# user_input = input(color("Enter your command: ",Colors.green)).strip().lower()
user_input = prompt('Enter your command: ', completer=word_completer)
if user_input in STOPLIST:
exit = yes_no_dialog(
title='EXIT',
text='Do you want to close the bot?').run()
if exit:
print(color("Bye,see tou soon...",Colors.yellow))
break
else:
continue
elif user_input.startswith("help"):
print(color(pers_assistant_help(),Colors.green))
continue
elif (len(user_input.split())) == 1:
print(color("Please write full command", Colors.red))
continue
else:
try:
command, text = parser(user_input)
print(command(text))
ad.save_contacts_to_file()
except KeyError:
print(color("You enter wrong command", Colors.red))
except Error:
print(color("You enter wrong command Error", Colors.red))
except TypeError:
print(color("You enter wrong command TypeError", Colors.red))
except IndexError:
print(color("You enter wrong command or name", Colors.red))
except ValueError:
print(color("You enter wrong information", Colors.red))
if __name__ == "__main__":
choice = input(color(f"SELECT WHICH BOT YOU WANT TO USE \nEnter 'notes' for use Notes\nEnter 'contacts' for use AdressBook\nEnter >>> ",Colors.green))
if choice == "notes":
main_notes()
elif choice == "contacts":
main()
else:
user_error = input(color("You choose wrong name push enter to close the bot",Colors.red)) | 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/main.py | main.py |
import pickle
import re
from collections import UserDict
from datetime import datetime
from colorit import *
colorit.init_colorit()
class Error(Exception): # custom exception
pass
# def __str__(self) -> str:
# return "\n \nSomething went wrong\n Try again!\n"
class Field:
def __init__(self, value) -> None:
self._value = value
def __str__(self) -> str:
return self._value
@property
def value(self):
return self._value
@value.setter
def value(self, value):
self._value = value
class Name(Field): #клас для створення поля name
def __str__(self) -> str:
self._value : str
return self._value.title()
class Phone(Field): #клас для створення поля phone
@staticmethod # utility method, not bound to an instance
def verify(number): # phone number validation
number = re.sub(r"[\-\(\)\+\ a-zA-Zа-яА-я]", "", number)
try:
if len(number) == 12:
number = "+" + number
elif len(number) == 10:
number = "+38" + number
elif len(number) == 9:
number = "+380" + number
else:
number = False
raise Error
except Error:
print(color("\nYou enter wrong number\n Try again!\n", Colors.red))
if number:
return number
else:
return "-"
def __init__(self, value) -> None:
self._value = Phone.verify(value)
@Field.value.setter
def value(self, value):
self._value =Phone.verify(value)
def __repr__(self) -> str:
return self._value
def __str__(self) -> str:
return self._value
class Birthday:
@staticmethod # utility method, not bound to an instance
def verify_date(birth_date: str):
try:
birthdate = re.findall(r"\d{4}\.\d{2}\.\d{2}", birth_date)
if bool(birthdate) == False:
raise Error
except Error:
print(color("\nYou enter wrong date.\nUse this format - YYYY.MM.DD \nTry again!\n", Colors.red))
if birthdate:
return birthdate[0]
else:
return "-"
def __init__(self, birthday) -> None:
self.__birthday = self.verify_date(birthday)
@property
def birthday(self):
return self.__birthday
@birthday.setter
def birthday(self,birthday):
self.__birthday = self.verify_date(birthday)
def __repr__(self) -> str:
return self.__birthday
def __str__(self) -> str:
return self.__birthday
class Email:
@staticmethod # utility method, not bound to an instance
def verificate_email(text: str):
email_re = re.findall(r"\w+@\w+\.\w+", text)
email = "".join(email_re)
try:
if email:
return email
else:
raise Error
except Error:
print(color("\nYou enter wrong email\n Try again!\n", Colors.red))
return "-"
def __init__(self, email) -> None:
self.__email = self.verificate_email(email)
@property
def email(self):
return self.__email
@email.setter
def email(self,email):
self.__email = self.verificate_email(email)
def __repr__(self) -> str:
return self.__email
def __str__(self) -> str:
return self.__email
class Adress:
def __init__(self, adress) -> None:
self.__adress = adress
@property
def adress(self):
return self.__adress
@adress.setter
def adress(self, adress):
self.__adress = adress
def __repr__(self) -> str:
return self.__adress
def __str__(self) -> str:
return self.__adress
class Tags:
def __init__(self, tags) -> None:
self.__tags = tags
@property
def tags(self):
return self.__tags
@tags.setter
def tags(self, tags):
self.__tags = tags
def __repr__(self) -> str:
return self.__tags
def __str__(self) -> str:
return self.__tags
class Record: #клас для запису инфи
def __init__ (self, name : Name, phone: Phone = None, birthday: Birthday = None, email: Email = None, adress: Adress = None, tags :Tags = None):
self.name = name
self.phone = phone
self.birthday = birthday
self.email = email
self.adress = adress
self.tags = []
self.phones = []
if phone:
self.phones.append(phone)
def get_birthday(self):
if self.birthday is None:
return None
else:
return str(self.birthday)
def get_tags(self):
return self.tags
def get_phone(self):
return self.phones
def add_phone(self, phone: Phone): # add a phone
self.phones.append(phone)
def add_birthday(self, birthday: Birthday): # add a birthday
if self.birthday is None:
self.birthday = birthday
else:
print(color("This user already have birthday date",Colors.red))
def add_email(self, email: Email): # add an email
if self.email is None:
self.email = email
else:
print(color("This user already have email",Colors.red))
def add_tags(self, tags: Tags): # add tags
self.tags.append(tags)
def add_adress(self, adress):
if self.adress is None:
self.adress = adress
else:
print(color("This user already have adress",Colors.red))
def change_adress(self,adress):
# adress = Adress(adress)
if self.adress is None:
print(color("This user doesnt have adress", Colors.red))
else:
self.adress = adress
def change_email(self,email):
# email = Email(email)
if self.email is None:
print(color("This user doesnt have adress", Colors.red))
else:
self.email = email
def change_birthday(self,birthday):
# birthday = Birthday(birthday)
if self.birthday is None:
print(color("This user doesnt have birthday", Colors.red))
else:
self.birthday = birthday
def remove_email(self):
if self.email is None:
print(color("This user doesnt have email",Colors.red))
else:
self.email = None
def remove_birthday(self):
if self.birthday is None:
print(color("This user doesnt have birthday date",Colors.red))
else:
self.birthday = None
def remove_phone(self, phone): # remove a phone
for ph in self.phones:
if str(ph) == str(phone):
self.phones.remove(ph)
break
else:
print(color("This user doesn't have this phone",Colors.red))
def remove_tags(self, tags):
for tag in self.tags:
if str(tag) == str(tags):
self.tags.remove(tag)
break
else:
print(color("This user doesn't have the tags you want to remove",Colors.red))
def remove_adress(self):
if self.adress is None:
print(color("This user doesnt have adress",Colors.red))
else:
self.adress = None
def change_phone(self, oldphone, newphone): # change a user's phone number
oldphone = Phone(oldphone)
newphone = Phone(newphone)
for phone in self.phones:
if str(phone) == str(oldphone):
self.phones.remove(phone)
self.phones.append(newphone)
break
else:
print(color("This user doesn't have the old phone you want to change",Colors.red))
def days_to_birthday(self): # shows how many days remain until the next birthday
# TODO: needs further work
try:
if self.birthday is None:
return None
current = datetime.now().date()
current : datetime
user_date = datetime.strptime(str(self.birthday), "%Y.%m.%d")
user_date: datetime
user_date = user_date.replace(year=current.year).date()
if user_date < current:
user_date = user_date.replace(year=current.year + 1)
res = user_date - current
return color(f"{res.days} days before next birthday", Colors.purple)
except ValueError:
return (color("You set wrong date or user doesnt have birthday date\nTry again set new date in format YYYY.MM.DD", Colors.red))
def __repr__(self) -> str:
return f"\nPhone - {[str(i) for i in self.phones]},\nBirthday - {self.birthday},\nEmail - {self.email},\nAdress - {self.adress},\nTags - {self.tags}"
separator = "___________________________________________________________"
class AdressBook(UserDict): # address book
def add_record(self, record: Record):
self.data[record.name.value] = record
def generator(self): # generator yielding one formatted contact at a time
for name, info in self.data.items():
print(color(separator,Colors.purple))
yield color(f"Name - {name.title()} : ",Colors.blue)+ color(f"{info}",Colors.yellow)
print(color(separator,Colors.purple))
def iterator(self, value): # prints as many contacts as the user requests
gen = self.generator()
try:
if value > len(self.data):
raise Error
except Error:
print(color("You requested more contacts than the list has. Try again.\n", Colors.red))
while value > 0:
try:
print(next(gen))
value -= 1
except StopIteration:
print(color(f"Try enter value less on {value}. Dict has {len(self.data)} contacts",Colors.purple))
return ""
return color("Thats all!",Colors.orange)
def find_tags(self, tags):
finder = False
tags = tags[0]
for user, info in self.data.items():
for tag in info.get_tags():
if str(tag) == str(tags):
finder = True
print(color(f"\nFind tags\nUser - {user.title()}{info}",Colors.purple))
return color("Found users",Colors.green) if finder else color("Didn't find any user",Colors.red)
def find_name(self, name: str): # search contacts by exact name
res = ""
fail = color("Finder didn't find any matches in the AddressBook",Colors.red)
for user, info in self.data.items():
if str(user) == name:
res += color(f"Find similar contacts:\n\nUser - {user.title()}{info}\n",Colors.purple)
return res if len(res)>0 else fail
def find_phone(self,phone):
finder = False
phone = Phone(phone)
for user, info in self.data.items():
for ph in info.get_phone():
if str(ph) == str(phone):
finder = True
print(color(f"\nFind phone\nUser - {user.title()}{info}",Colors.purple))
return color("Found users",Colors.green) if finder == True else color("Dont find any user",Colors.green)
def save_contacts_to_file(self):
with open('contacts.pickle', 'wb') as file:
pickle.dump(self.data, file)
def load_contacts_from_file(self):
try:
with open('contacts.pickle', 'rb') as file:
self.data = pickle.load(file)
except FileNotFoundError:
pass
ad = AdressBook()
# SCRIPT SELF-CHECK (commented-out usage example)
# name = Name("Dima")
# phone = Phone("0993796625")
# birth = Birthday("2001.08.12")
# rec = Record(name, phone, birth)
# ad = AdressBook()
# ad.add_record(rec)
# #=============================================================================
# name1 = Name("Benderovec")
# phone1 = Phone("0993790447")
# birth1 = Birthday("2001.08.12")
# rec1 = Record(name1, phone1, birth1)
# ad.add_record(rec1)
# #=============================================================================
# # print(rec.days_to_birthday())
# #=============================================================================
# name2 = Name("Diana")
# phone2 = Phone("099797484")
# birth2 = Birthday("2003.04.01")
# rec2 = Record(name2, phone2, birth2)
# #============================================================================
# ad.add_record(rec2)
# print(ad.data)
# print(ad.iterator(6))
# print(ad.find("test"))
greeting = """
@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@
#@@@@ @@@@@@@@@@@@@@@@@@@@
@@@@ @@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ #@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ #@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@& &@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@/
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@ ,@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@(
@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@ .@@@@@
@@@@@@@@@@@@@@@@@@@@ @@@%
@@@@@@@@@@@@@@@@@@@@% @@@@&
@@@@@@@@@@@@@@@@@@@@@
__ __ _ __
/ / /\ \ \___| | ___ ___ _ __ ___ ___ _ \ \
\ \/ \/ / _ \ |/ __/ _ \| '_ ` _ \ / _ \ (_) | |
\ /\ / __/ | (_| (_) | | | | | | __/ _ | |
\/ \/ \___|_|\___\___/|_| |_| |_|\___| (_) | |
/_/
""" | 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/greeting.py | greeting.py |
from pathlib import Path
import shutil
import os
from colorit import *
import sys
name_extensions = {
"images": (".jpeg", ".png", ".jpg", ".svg"),
"video": (".avi", ".mp4", ".mov", ".mkv"),
"documents": (".doc", ".docx", ".pdf", ".xlsx", ".pptx", ".txt"),
"music": (".mp3", ".ogg", ".wav", ".amr"),
"archives": (".zip", ".gz", ".tar"),
"unknown": ""
}
RUSS_SYMB = "абвгдеёжзийклмнопрстуфхцчшщъыьэюяєіїґ?<>,!@#[]#$%^&*()-=; "
ENG_SYMB = (
"a",
"b",
"v",
"g",
"d",
"e",
"e",
"j",
"z",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"r",
"s",
"t",
"u",
"f",
"h",
"ts",
"ch",
"sh",
"sch",
"",
"y",
"",
"e",
"yu",
"ya",
"je",
"i",
"ji",
"g",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
"_",
)
TRANS = {}
# current_path = Path("C:\\test_sorted") поганий кейс(
for c, t in zip(RUSS_SYMB, ENG_SYMB):
TRANS[ord(c)] = t
TRANS[ord(c.upper())] = t.upper()
def normalize(name: str) -> str:
return name.translate(TRANS)
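# Transliteration sketch: normalize('привет мир!') -> 'privet_mir_'
# (Cyrillic letters are mapped to Latin, punctuation and spaces to '_').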
def unpack_arch(archive_path, current_path): # currently unused helper
shutil.unpack_archive(archive_path, os.path.join(current_path, "archives"))
def create_folder(folder: Path): # create the category folders for sorting
for name in name_extensions.keys():
if not folder.joinpath(name).exists():
folder.joinpath(name).mkdir()
def bypass_files(path_folder):
create_folder(path_folder)
for item in path_folder.glob("**/*"):
if item.is_file():
sort_file(item, path_folder)
if item.is_dir() and item.name not in list(name_extensions):
if not any(item.iterdir()): # remove emptied directories
shutil.rmtree(item)
if item.name in name_extensions:
continue
def sort_file(file: Path, path_folder: Path): # move a single file into its category folder
if file.suffix in name_extensions["images"]:
file.replace(path_folder.joinpath("images", f"{normalize(file.stem)}{file.suffix}"))
elif file.suffix in name_extensions["documents"]:
file.replace(path_folder.joinpath("documents", f"{normalize(file.stem)}{file.suffix}"))
elif file.suffix in name_extensions["music"]:
file.replace(path_folder.joinpath("music", f"{normalize(file.stem)}{file.suffix}"))
elif file.suffix in name_extensions["video"]:
file.replace(path_folder.joinpath("video", f"{normalize(file.stem)}{file.suffix}"))
elif file.suffix in name_extensions["archives"]:
shutil.unpack_archive(file, path_folder)
os.remove(file)
else:
file.replace(path_folder.joinpath("unknown",f"{normalize(file.stem)}{file.suffix}"))
def sorting(pathh):
current_path = Path(pathh)
if not current_path.exists():
return color("Folder does not exist. Try again.", Colors.red)
result_list = list(current_path.iterdir())
bypass_files(current_path)
for i in result_list:
print(i, "- sorted")
return color("Done", Colors.blue)
# usage example: sorting(r"C:\test_sorted")
from colorit import *
from prettytable import PrettyTable
def pers_assistant_help():
pah_com_list = {"tel_book":"TELEPHONE BOOK", "note_book": "NOTE BOOK", "sorted": "SORTED"}
all_commands = {
"1":[
["show all", "This command shows all contacts in your address book", "show all"],
["add user", "This command adds a new user in your address book", "add user <FirstName_LastName> <phone>"],
["add tags","This command add a new tags for an existing contact"," add tags <tag>"],
["add phone", "This command adds a new phone number for an existing contact", "add phone <user> <phone>"],
["add email", "This command adds an email for an existing contact", "add email <user> <email>"],
["add birthday", "This command adds a birthday for an existing contact", "add birthday <user> <date>"],
["add adress", "This command adds an address for an existing contact", "add adress <user> <address>"],
["change phone","This command changes an phone for an existing contact","change phone <OldPhone> <NewPhone>"],
["change adress", "This command changes an address for an existing contact", "change adress <user> <new_address>"],
["change email", "This command changes an email address for an existing contact", "change email <user> <new_email>"],
["change birthday", "This command changes a birthday for an existing contact", "change birthday <user> <newBirthday>"],
["find name", "This command finds all existing contacts whose names match the search query", "find name <name>"],
["find phone", "This command finds existing contacts whose phone match the search query", "find phone <phone>"],
["find tags", "This command finds existing contacts whose tags match the search query", "find tags <tag>"]
["remove tags","This command removes a tags for an existing contact", "remove tags <user> <tag>"],
["remove phone", "This command removes a phone number for an existing contact", "remove phone <user> <phone>"],
["remove birthday", "This command removes a birthday for an existing contact", "remove birthday <user>"],
["remove email", "This command removes an email address for an existing contact", "remove email <user> <email>"],
["remove user", "This command removes an existing contact and all the information about it", "remove user <user>"],
["remove adress", "This command removes an existing contact and all the information about it", "remove adress <user> <address>"],
["when birthday", "This command shows a birthday of an existing contact", "when birthday <user>"],
["birthday within","This command shows all users who has birthday in selected period"," birthday within <days - (must be integer)>"]
],
"2":[
["add or add_note", "This command adds a new note in your Notepad", "add(add_note) <title> <body> <tags>"],
["edit or edit_note", "This command changes an existing note in your Notepad", "edit(edit_note) <title>"],
["delete", "This command deletes an existing note in your Notepad", "delete <title>"],
["find_tags", "This command finds and sorts existing notes whose tags match the search query", "find_tags <tag>"],
["find", "This command finds existing notes whose note(body) matches the search query", "find <frase>"],
["show or show_note", "This command shows an existing note in your Notepad", "show(show_note) <title>"],
["showall", "This command shows all existing notes in your Notepad", "showall"],
],
"3": [[
"sort directory", "This command sorts all files in the given directory", "sort directory <path to folder>"
]]}
print(f'''I'm your personal assistant.
I have {pah_com_list['tel_book']}, {pah_com_list['note_book']} and I can {pah_com_list['sorted']} your files in your folder.\n''')
while True:
print(f'''If you want to know how to work with:
"{pah_com_list['tel_book']}" press '1'
"{pah_com_list['note_book']}" press '2'
function "{pah_com_list['sorted']}" press '3'
SEE all commands press '4'
EXIT from HELP press any other key''')
user_input = input()
if user_input not in ["1", "2", "3", "4"]:
break
elif user_input in ["1", "2", "3"]:
my_table = PrettyTable(["Command Name", "Discription", "Example"])
[my_table.add_row(i) for i in all_commands[user_input]]
my_table.add_row(["quit, close, goodbye, exit", "This command finish work with your assistant", "quit(close, goodbye, exit)"])
print(my_table)
else:
my_table = PrettyTable(["Command Name", "Discription", "Example"])
all_commands_list = sorted([i for j in list(all_commands.values()) for i in j])
[my_table.add_row(i) for i in all_commands_list]
my_table.add_row(["quit, close, goodbye, exit", "This command finish work with your assistant", "quit(close, goodbye, exit)"])
print(my_table)
return color("Done",Colors.blue) | 11Team-AssistantBot | /11Team_AssistantBot-1.11.tar.gz/11Team_AssistantBot-1.11/11Team_AssistantBot/help.py | help.py |
import sys, platform, os, re
if not sys.version_info >= (3, 6):
sys.exit('Python 3.6 or higher is required!')
try:
import eldf
except ImportError:
sys.exit("Module eldf is not installed!\nPlease install it using this command:\n" + (sys.platform == 'win32')*(os.path.dirname(sys.executable) + '\\Scripts\\') + 'pip3 install eldf')
if len(sys.argv) < 2 or '-h' in sys.argv or '--help' in sys.argv:
print('''Usage: 11l py-or-11l-source-file [options]
Options:
--int64 use 64-bit integers
-d disable optimizations [makes compilation faster]
-t transpile only
-e expand includes
-v print version''')
sys.exit(1)
if '-v' in sys.argv:
print(open(os.path.join(os.path.dirname(sys.argv[0]), 'version.txt')).read())
sys.exit(0)
enopt = not '-d' in sys.argv
if not (sys.argv[1].endswith('.py') or sys.argv[1].endswith('.11l')):
sys.exit("source-file should have extension '.py' or '.11l'")
def show_error(fname, fcontents, e, syntax_error):
next_line_pos = fcontents.find("\n", e.pos)
if next_line_pos == -1:
next_line_pos = len(fcontents)
prev_line_pos = fcontents.rfind("\n", 0, e.pos) + 1
sys.exit(('Syntax' if syntax_error else 'Lexical') + ' error: ' + e.message + "\n in file '" + fname + "', line " + str(fcontents[:e.pos].count("\n") + 1) + "\n"
+ fcontents[prev_line_pos:next_line_pos] + "\n" + re.sub(r'[^\t]', ' ', fcontents[prev_line_pos:e.pos]) + '^'*max(1, e.end - e.pos))
import _11l_to_cpp.tokenizer, _11l_to_cpp.parse
if sys.argv[1].endswith('.py'):
import python_to_11l.tokenizer, python_to_11l.parse
py_source = open(sys.argv[1], encoding = 'utf-8-sig').read()
try:
_11l_code = python_to_11l.parse.parse_and_to_str(python_to_11l.tokenizer.tokenize(py_source), py_source, sys.argv[1])
except (python_to_11l.parse.Error, python_to_11l.tokenizer.Error) as e:
show_error(sys.argv[1], py_source, e, type(e) == python_to_11l.parse.Error)
_11l_fname = os.path.splitext(sys.argv[1])[0] + '.11l'
open(_11l_fname, 'w', encoding = 'utf-8', newline = "\n").write(_11l_code)
else:
_11l_fname = sys.argv[1]
_11l_code = open(sys.argv[1], encoding = 'utf-8-sig').read()
cpp_code = ''
if '--int64' in sys.argv:
cpp_code += "#define INT_IS_INT64\n"
_11l_to_cpp.parse.int_is_int64 = True
cpp_code += '#include "' + os.path.abspath(os.path.join(os.path.dirname(sys.argv[0]), '_11l_to_cpp', '11l.hpp')) + "\"\n\n" # replace("\\", "\\\\") is not necessary here (because MSVC for some reason treat backslashes in include path differently than in regular string literals)
try:
cpp_code += _11l_to_cpp.parse.parse_and_to_str(_11l_to_cpp.tokenizer.tokenize(_11l_code), _11l_code, _11l_fname, append_main = True)
except (_11l_to_cpp.parse.Error, _11l_to_cpp.tokenizer.Error) as e:
# open(_11l_fname, 'w', encoding = 'utf-8', newline = "\n").write(_11l_code)
show_error(_11l_fname, _11l_code, e, type(e) == _11l_to_cpp.parse.Error)
if '-e' in sys.argv:
included = set()
def process_include_directives(src_code, dir = ''):
exp_code = ''
writepos = 0
while True:
i = src_code.find('#include "', writepos)
if i == -1:
break
exp_code += src_code[writepos:i]
if src_code[i-2:i] == '//': # skip commented includes
exp_code += '#'
writepos = i + 1
continue
fname_start = i + len('#include "')
fname_end = src_code.find('"', fname_start)
assert(src_code[fname_end + 1] == "\n") # [-TODO: Add support of comments after #include directives-]
fname = src_code[fname_start:fname_end]
if fname[1:3] == ':\\' or fname.startswith('/'): # this is an absolute pathname
pass
else: # this is a relative pathname
assert(dir != '')
fname = os.path.join(dir, fname)
if fname not in included:
included.add(fname)
exp_code += process_include_directives(open(fname, encoding = 'utf-8-sig').read(), os.path.dirname(fname))
writepos = fname_end + 1
exp_code += src_code[writepos:]
return exp_code
cpp_code = process_include_directives(cpp_code)
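# Expansion sketch: with -e, every `#include "file"` line in the generated C++
# is replaced (once per unique file, tracked via `included`) by that file's
# contents, so the resulting .cpp is self-contained.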
cpp_fname = os.path.splitext(sys.argv[1])[0] + '.cpp'
open(cpp_fname, 'w', encoding = 'utf-8-sig', newline = "\n").write(cpp_code) # utf-8-sig is for MSVC
if '-t' in sys.argv or \
'-e' in sys.argv:
sys.exit()
if sys.platform == 'win32':
was_break = False
for version in ['2019', '2017']:
for edition in ['BuildTools', 'Community', 'Enterprise', 'Professional']:
vcvarsall = 'C:\\Program Files' + ' (x86)'*platform.machine().endswith('64') + '\\Microsoft Visual Studio\\' + version + '\\' + edition + R'\VC\Auxiliary\Build\vcvarsall.bat'
if os.path.isfile(vcvarsall):
was_break = True
#print('Using ' + version + '\\' + edition)
break # ^L.break
if was_break:
break
if not was_break:
sys.exit('''Unable to find vcvarsall.bat!
If you do not have Visual Studio 2017 or 2019 installed please install it or Build Tools for Visual Studio from here[https://visualstudio.microsoft.com/downloads/].''')
os.system('"' + vcvarsall + '" ' + ('x64' if platform.machine().endswith('64') else 'x86') + ' > nul && cl.exe /std:c++17 /MT /EHsc /nologo /W3 ' + '/O2 '*enopt + cpp_fname)
else:
if os.system('g++-8 --version > /dev/null') != 0:
sys.exit('GCC 8 is not found!')
os.system('g++-8 -std=c++17 -Wfatal-errors -DNDEBUG ' + '-O3 '*enopt + '-march=native -o "' + os.path.splitext(sys.argv[1])[0] + '" "' + cpp_fname + '" -lstdc++fs') | 11l | /11l-2021.3-py3-none-any.whl/11l.py | 11l.py |
try:
from python_to_11l.tokenizer import Token
import python_to_11l.tokenizer as tokenizer
except ImportError:
from tokenizer import Token
import tokenizer
from typing import List, Tuple, Dict, Callable
from enum import IntEnum
import os, re, eldf
class Scope:
parent : 'Scope'
class Var:
type : str
node : 'ASTNode'
def __init__(self, type, node):
assert(type is not None)
self.type = type
self.node = node
def serialize_to_dict(self):
node = None
if type(self.node) == ASTFunctionDefinition:
node = self.node.serialize_to_dict()
return {'type': self.type, 'node': node}
def deserialize_from_dict(self, d):
if d['node'] is not None:
self.node = ASTFunctionDefinition()
self.node.deserialize_from_dict(d['node'])
vars : Dict[str, Var]
nonlocals_copy : set
nonlocals : set
globals : set
is_function : bool
is_lambda_or_for = False
def __init__(self, func_args):
self.parent = None
if func_args is not None:
self.is_function = True
self.vars = dict(map(lambda x: (x[0], Scope.Var(x[1], None)), func_args))
else:
self.is_function = False
self.vars = {}
self.nonlocals_copy = set()
self.nonlocals = set()
self.globals = set()
def serialize_to_dict(self, imported_modules):
ids_dict = {'Imported modules': imported_modules}
for name, id in self.vars.items():
if name not in python_types_to_11l and not id.type.startswith('('): # )
ids_dict[name] = id.serialize_to_dict()
return ids_dict
def deserialize_from_dict(self, d):
for name, id_dict in d.items():
if name != 'Imported modules':
id = Scope.Var(id_dict['type'], None)
id.deserialize_from_dict(id_dict)
self.vars[name] = id
def add_var(self, name, error_if_already_defined = False, type = '', err_token = None, node = None):
s = self
while True:
if name in s.nonlocals_copy or name in s.nonlocals or name in s.globals:
return False
if s.is_function:
break
s = s.parent
if s is None:
break
if not (name in self.vars):
s = self
while True:
if name in s.vars:
return False
if s.is_function:
break
s = s.parent
if s is None:
break
self.vars[name] = Scope.Var(type, node)
return True
elif error_if_already_defined:
raise Error('redefinition of already defined variable is not allowed', err_token if err_token is not None else token)
return False
def find_and_get_prefix(self, name, token):
if name == 'self':
return ''
if name in ('isinstance', 'len', 'super', 'print', 'input', 'ord', 'chr', 'range', 'zip', 'all', 'any', 'abs', 'pow', 'sum', 'product', 'open', 'min', 'max', 'divmod', 'hex', 'bin', 'map', 'list', 'tuple', 'dict', 'set', 'sorted', 'reversed', 'filter', 'reduce', 'round', 'enumerate', 'hash', 'copy', 'deepcopy', 'NotImplementedError', 'ValueError', 'IndexError'):
return ''
s = self
while True:
if name in s.nonlocals_copy:
return '@='
if name in s.nonlocals:
return '@'
if name in s.globals:
return ':'
if s.is_function and not s.is_lambda_or_for:
break
s = s.parent
if s is None:
break
capture_level = 0
s = self
while True:
if name in s.vars:
if s.parent is None: # variable is declared in the global scope
if s.vars[name].type == '(Module)':
return ':::'
return ':' if capture_level > 0 else ''
else:
return capture_level*'@'
if s.is_function:
capture_level += 1
s = s.parent
if s is None:
if name in ('id',):
return ''
raise Error('undefined identifier', token)
def find(self, name):
s = self
while True:
id = s.vars.get(name)
if id is not None:
return id
s = s.parent
if s is None:
return None
def var_type(self, name):
id = self.find(name)
return id.type if id is not None else None
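# Prefix sketch: find_and_get_prefix() maps a Python name to its 11l scope
# prefix - '' for locals/builtins, one '@' per capture level in nested
# functions, ':' for globals accessed from a function, ':::' for modules.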
scope : Scope
class Module:
scope : Scope
def __init__(self, scope):
self.scope = scope
modules : Dict[str, Module] = {}
class SymbolBase:
id : str
lbp : int
nud_bp : int
led_bp : int
nud : Callable[['SymbolNode'], 'SymbolNode']
led : Callable[['SymbolNode', 'SymbolNode'], 'SymbolNode']
def set_nud_bp(self, nud_bp, nud):
self.nud_bp = nud_bp
self.nud = nud
def set_led_bp(self, led_bp, led):
self.led_bp = led_bp
self.led = led
def __init__(self):
def nud(s): raise Error('unknown unary operator', s.token)
self.nud = nud
def led(s, l): raise Error('unknown binary operator', s.token)
self.led = led
class SymbolNode:
token : Token
symbol : SymbolBase = None
children : List['SymbolNode']# = []
parent : 'SymbolNode' = None
ast_parent : 'ASTNode'
function_call = False
iterable_unpacking = False
tuple = False
is_list = False
is_set = False
def is_dict(self): return self.symbol.id == '{' and not self.is_set # }
slicing = False
is_not = False
skip_find_and_get_prefix = False
scope_prefix : str = ''
scope : Scope
token_str_override : str
def __init__(self, token, token_str_override = None):
self.token = token
self.children = []
self.scope = scope
self.token_str_override = token_str_override
def var_type(self):
if self.is_parentheses():
return self.children[0].var_type()
if self.symbol.id == '*' and self.children[0].var_type() == 'List':
return 'List'
if self.symbol.id == '+' and (self.children[0].var_type() == 'List' or self.children[1].var_type() == 'List'):
return 'List'
if self.is_list:
return 'List'
#if self.symbol.id == '[' and not self.is_list and self.children[0].var_type() == 'str': # ]
if self.symbol.id == '[' and self.children[0].var_type() == 'str': # ]
return 'str'
if self.symbol.id == '*' and self.children[1].var_type() == 'str':
return 'str'
if self.token.category == Token.Category.STRING_LITERAL:
return 'str'
if self.symbol.id == '.':
if self.children[0].token_str() == 'os' and self.children[1].token_str() == 'pathsep':
return 'str'
return None
if self.symbol.id == 'if':
t0 = self.children[0].var_type()
if t0 is not None:
return t0
return self.children[2].var_type()
if self.function_call and self.children[0].token_str() == 'str':
return 'str'
return self.scope.var_type(self.token.value(source))
def append_child(self, child):
child.parent = self
self.children.append(child)
def leftmost(self):
if self.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL, Token.Category.NAME, Token.Category.CONSTANT) or self.symbol.id == 'lambda':
return self.token.start
if self.symbol.id == '(': # )
if self.function_call:
return self.children[0].token.start
else:
return self.token.start
elif self.symbol.id == '[': # ]
if self.is_list:
return self.token.start
else:
return self.children[0].token.start
if len(self.children) in (2, 3):
return self.children[0].leftmost()
return self.token.start
def rightmost(self):
if self.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL, Token.Category.NAME, Token.Category.CONSTANT):
return self.token.end
if self.symbol.id in '([': # ])
if len(self.children) == 0:
return self.token.end + 1
return (self.children[-1] or self.children[-2]).rightmost() + 1
return self.children[-1].rightmost()
def left_to_right_token(self):
return Token(self.leftmost(), self.rightmost(), Token.Category.NAME)
def token_str(self):
return self.token.value(source) if not self.token_str_override else self.token_str_override
def is_parentheses(self):
return self.symbol.id == '(' and not self.tuple and not self.function_call # )
def to_str(self):
# r = ''
# prev_token_end = self.children[0].token.start
# for c in self.children:
# r += source[prev_token_end:c.token.start]
# if c.token.value(source) != 'self': # hack for a while
# r += c.token.value(source)
# prev_token_end = c.token.end
# return r
if self.token.category == Token.Category.NAME:
if self.scope_prefix == ':' and ((self.parent and self.parent.function_call and self is self.parent.children[0]) or (self.token_str()[0].isupper() and self.token_str() != self.token_str().upper()) or self.token_str() in python_types_to_11l): # global functions and types do not require prefix `:` because global functions and types are ok, but global variables are not so good and they should be marked with `:`
return self.token_str()
if self.token_str() == 'self' and (self.parent is None or (self.parent.symbol.id != '.' and self.parent.symbol.id != 'lambda')):
parent = self
while parent.parent is not None:
parent = parent.parent
ast_parent = parent.ast_parent
while ast_parent is not None:
if isinstance(ast_parent, ASTFunctionDefinition):
if len(ast_parent.function_arguments) and ast_parent.function_arguments[0][0] == 'self' and isinstance(ast_parent.parent, ASTClassDefinition):
return '(.)'
break
ast_parent = ast_parent.parent
return self.scope_prefix + self.token_str()
if self.token.category == Token.Category.NUMERIC_LITERAL:
n = self.token.value(source)
i = 0
# if n[0] in '-+':
# sign = n[0]
# i = 1
# else:
# sign = ''
sign = ''
is_hex = n[i:i+1] == '0' and n[i+1:i+2] in ('x', 'X')
is_oct = n[i:i+1] == '0' and n[i+1:i+2] in ('o', 'O')
is_bin = n[i:i+1] == '0' and n[i+1:i+2] in ('b', 'B')
if is_hex or is_oct or is_bin:
i += 2
if is_hex:
n = n[i:].replace('_', '')
if len(n) <= 2: # ultrashort hexadecimal number
n = '0'*(2-len(n)) + n
return n[:1] + "'" + n[1:]
elif len(n) <= 4: # short hexadecimal number
n = '0'*(4-len(n)) + n
return n[:2] + "'" + n[2:]
else:
number_with_separators = ''
j = len(n)
while j > 4:
number_with_separators = "'" + n[j-4:j] + number_with_separators
j -= 4
return sign + '0'*(4-j) + n[0:j] + number_with_separators
if n[-1] in 'jJ':
n = n[:-1] + 'i'
return sign + n[i:].replace('_', "'") + ('o' if is_oct else 'b' if is_bin else '')
if self.token.category == Token.Category.STRING_LITERAL:
def balance_pq_string(s):
min_nesting_level = 0
nesting_level = 0
for ch in s:
if ch == "‘":
nesting_level += 1
elif ch == "’":
nesting_level -= 1
min_nesting_level = min(min_nesting_level, nesting_level)
nesting_level -= min_nesting_level
return "'"*-min_nesting_level + "‘"*-min_nesting_level + "‘" + s + "’" + "’"*nesting_level + "'"*nesting_level
s = self.token.value(source)
if s[0] in 'rR':
l = 3 if s[1:4] in ('"""', "'''") else 1
return balance_pq_string(s[1+l:-l])
elif s[0] in 'bB':
return s[1:] + '.code'
else:
l = 3 if s[0:3] in ('"""', "'''") else 1
if '\\' in s or ('‘' in s and not '’' in s) or (not '‘' in s and '’' in s):
if s == R'"\\"' or s == R"'\\'":
return R'‘\’'
s = s.replace("\n", "\\n\\\n").replace("\\\\n\\\n", "\\\n")
if s[0] == '"':
return s if l == 1 else '"' + s[3:-3].replace('"', R'\"') + '"'
else:
return '"' + s[l:-l].replace('"', R'\"').replace(R"\'", "'") + '"'
else:
return balance_pq_string(s[l:-l])
if self.token.category == Token.Category.CONSTANT:
return {'None': 'N', 'False': '0B', 'True': '1B'}[self.token.value(source)]
def range_need_space(child1, child2):
return not((child1 is None or child1.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL))
and (child2 is None or child2.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL)))
if self.symbol.id == '(': # )
if self.function_call:
if self.children[0].symbol.id == '.':
c01 = self.children[0].children[1].token_str()
if self.children[0].children[0].symbol.id == '{' and c01 == 'get': # } # replace `{'and':'&', 'or':'|', 'in':'C'}.get(self.symbol.id, 'symbol-' + self.symbol.id)` with `(S .symbol.id {‘and’ {‘&’}; ‘or’ {‘|’}; ‘in’ {‘C’} E ‘symbol-’(.symbol.id)})`
parenthesis = ('(', ')') if self.parent is not None else ('', '')
return parenthesis[0] + self.children[0].to_str() + parenthesis[1]
if c01 == 'join' and not (self.children[0].children[0].symbol.id == '.' and self.children[0].children[0].children[0].token_str() == 'os'): # replace `', '.join(arr)` with `arr.join(‘, ’)`
assert(len(self.children) == 3)
return (self.children[1].to_str() if self.children[1].token.category == Token.Category.NAME or self.children[1].symbol.id == 'for' or self.children[1].function_call else '(' + self.children[1].to_str() + ')') + '.join(' + (self.children[0].children[0].children[0].to_str() if self.children[0].children[0].is_parentheses() else self.children[0].children[0].to_str()) + ')'
if c01 == 'split' and len(self.children) == 5 and not (self.children[0].children[0].token_str() == 're'): # split() second argument [limit] in 11l is similar to JavaScript, Ruby and PHP, but not Python
return self.children[0].to_str() + '(' + self.children[1].to_str() + ', ' + self.children[3].to_str() + ' + 1)'
if c01 == 'split' and len(self.children) == 1:
return self.children[0].to_str() + '_py()' # + '((‘ ’, "\\t", "\\r", "\\n"), group_delimiters\' 1B)'
if c01 == 'is_integer' and len(self.children) == 1: # `x.is_integer()` -> `fract(x) == 0`
return 'fract(' + self.children[0].children[0].to_str() + ') == 0'
if c01 == 'bit_length' and len(self.children) == 1: # `x.bit_length()` -> `bit_length(x)`
return 'bit_length(' + self.children[0].children[0].to_str() + ')'
repl = {'startswith':'starts_with', 'endswith':'ends_with', 'find':'findi', 'rfind':'rfindi',
'lower':'lowercase', 'islower':'is_lowercase', 'upper':'uppercase', 'isupper':'is_uppercase', 'isdigit':'is_digit', 'isalpha':'is_alpha',
'timestamp':'unix_time', 'lstrip':'ltrim', 'rstrip':'rtrim', 'strip':'trim',
'appendleft':'append_left', 'extendleft':'extend_left', 'popleft':'pop_left', 'issubset':'is_subset'}.get(c01, '')
if repl != '': # replace `startswith` with `starts_with`, `endswith` with `ends_with`, etc.
c00 = self.children[0].children[0].to_str()
if repl == 'uppercase' and c00.endswith('[2..]') and self.children[0].children[0].children[0].symbol.id == '(' and self.children[0].children[0].children[0].children[0].token_str() == 'hex': # ) # `hex(x)[2:].upper()` -> `hex(x)`
return 'hex(' + self.children[0].children[0].children[0].children[1].to_str() + ')'
#assert(len(self.children) == 3)
res = c00 + '.' + repl + '('
def is_char(child):
ts = child.token_str()
return child.token.category == Token.Category.STRING_LITERAL and (len(ts) == 3 or (ts[:2] == '"\\' and len(ts) == 4))
if repl.endswith('trim') and len(self.children) == 1: # `strip()` -> `trim((‘ ’, "\t", "\r", "\n"))`
res += '(‘ ’, "\\t", "\\r", "\\n")'
elif repl.endswith('trim') and not is_char(self.children[1]): # `"...".strip("\t ")` -> `"...".trim(Array[Char]("\t "))`
assert(len(self.children) == 3)
res += 'Array[Char](' + self.children[1].to_str() + ')'
else:
for i in range(1, len(self.children), 2):
assert(self.children[i+1] is None)
res += self.children[i].to_str()
if i < len(self.children)-2:
res += ', '
return res + ')'
if self.children[0].children[0].symbol.id == '(' and \
self.children[0].children[0].children[0].token_str() == 'open' and \
len(self.children[0].children[0].children) == 5 and \
self.children[0].children[0].children[4] is None and \
self.children[0].children[0].children[3].token_str() in ("'rb'", '"rb"') and \
self.children[0].children[1].token_str() == 'read': # ) # transform `open(fname, 'rb').read()` into `File(fname).read_bytes()`
assert(self.children[0].children[0].children[2] is None)
return 'File(' + self.children[0].children[0].children[1].to_str() + ').read_bytes()'
if c01 == 'total_seconds': # `delta.total_seconds()` -> `delta.seconds`
assert(len(self.children) == 1)
return self.children[0].children[0].to_str() + '.seconds'
if c01 == 'conjugate' and len(self.children) == 1: # `c.conjugate()` -> `conjugate(c)`
return 'conjugate(' + self.children[0].children[0].to_str() + ')'
if c01 == 'readlines': # `f.readlines()` -> `f.read_lines(1B)`
assert(len(self.children) == 1)
return self.children[0].children[0].to_str() + ".read_lines(1B)"
if c01 == 'readline': # `f.readline()` -> `f.read_line(1B)`
assert(len(self.children) == 1)
return self.children[0].children[0].to_str() + ".read_line(1B)"
if self.children[0].children[0].token_str() == 're' and self.children[0].children[1].token_str() != 'compile': # `re.search('pattern', 'string')` -> `re:‘pattern’.search(‘string’)`
c1_in_braces_if_needed = self.children[1].to_str()
if self.children[1].token.category != Token.Category.STRING_LITERAL:
c1_in_braces_if_needed = '(' + c1_in_braces_if_needed + ')'
if self.children[0].children[1].token_str() == 'split': # `re.split('pattern', 'string')` -> `‘string’.split(re:‘pattern’)`
return self.children[3].to_str() + '.split(re:' + c1_in_braces_if_needed + ')'
if self.children[0].children[1].token_str() == 'sub': # `re.sub('pattern', 'repl', 'string')` -> `‘string’.replace(re:‘pattern’, ‘repl’)`
return self.children[5].to_str() + '.replace(re:' + c1_in_braces_if_needed + ', ' + re.sub(R'\\(\d{1,2})', R'$\1', self.children[3].to_str()) + ')'
if self.children[0].children[1].token_str() == 'match':
assert c1_in_braces_if_needed[0] != '(', 'only string literal patterns supported in `match()` for a while' # )
if c1_in_braces_if_needed[-2] == '$': # `re.match('pattern$', 'string')` -> `re:‘pattern’.match(‘string’)`
return 're:' + c1_in_braces_if_needed[:-2] + c1_in_braces_if_needed[-1] + '.match(' + self.children[3].to_str() + ')'
else: # `re.match('pattern', 'string')` -> `re:‘^pattern’.search(‘string’)`
return 're:' + c1_in_braces_if_needed[0] + '^' + c1_in_braces_if_needed[1:] + '.search(' + self.children[3].to_str() + ')'
c0c1 = self.children[0].children[1].token_str()
return 're:' + c1_in_braces_if_needed + '.' + {'fullmatch': 'match', 'findall': 'find_strings', 'finditer': 'find_matches'}.get(c0c1, c0c1) + '(' + self.children[3].to_str() + ')'
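# A quick reference for the `re` translations above (illustrative):
# `re.split(r'\W+', s)` -> `s.split(re:‘\W+’)`
# `re.findall(r'\d+', s)` -> `re:‘\d+’.find_strings(s)`
# `re.finditer(r'\d+', s)` -> `re:‘\d+’.find_matches(s)`
# `re.fullmatch(r'\d+', s)` -> `re:‘\d+’.match(s)`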
if self.children[0].children[0].token_str() == 'collections' and self.children[0].children[1].token_str() == 'defaultdict': # `collections.defaultdict(ValueType) # KeyType` -> `DefaultDict[KeyType, ValueType]()`
assert(len(self.children) == 3)
if source[self.children[1].token.end + 2 : self.children[1].token.end + 3] != '#':
raise Error('to use `defaultdict`, the type of the dict keys must be specified in a comment', self.children[0].children[1].token)
sl = slice(self.children[1].token.end + 3, source.find("\n", self.children[1].token.end + 3))
return 'DefaultDict[' + trans_type(source[sl].lstrip(' '), self.scope, Token(sl.start, sl.stop, Token.Category.NAME)) + ', ' \
+ trans_type(self.children[1].token_str(), self.scope, self.children[1].token) + ']()'
if self.children[0].children[0].token_str() == 'collections' and self.children[0].children[1].token_str() == 'deque': # `collections.deque() # ValueType` -> `Deque[ValueType]()`
if len(self.children) == 3:
return 'Deque(' + self.children[1].to_str() + ')'
assert(len(self.children) == 1)
if source[self.token.end + 2 : self.token.end + 3] != '#':
raise Error('to use `deque`, the type of the deque values must be specified in a comment', self.children[0].children[1].token)
sl = slice(self.token.end + 3, source.find("\n", self.token.end + 3))
return 'Deque[' + trans_type(source[sl].lstrip(' '), self.scope, Token(sl.start, sl.stop, Token.Category.NAME)) + ']()'
if self.children[0].children[0].token_str() == 'int' and self.children[0].children[1].token_str() == 'from_bytes':
assert(len(self.children) == 5)
if not (self.children[3].token.category == Token.Category.STRING_LITERAL and self.children[3].token_str()[1:-1] == 'little'):
raise Error("only 'little' byteorder supported so far", self.children[3].token)
return "Int(bytes' " + self.children[1].to_str() + ')'
if self.children[0].children[0].token_str() == 'random' and self.children[0].children[1].token_str() == 'shuffle':
return 'random:shuffle(&' + self.children[1].to_str() + ')'
if self.children[0].children[0].token_str() == 'random' and self.children[0].children[1].token_str() == 'randint':
return 'random:(' + self.children[1].to_str() + ' .. ' + self.children[3].to_str() + ')'
if self.children[0].children[0].token_str() == 'random' and self.children[0].children[1].token_str() == 'randrange':
return 'random:(' + self.children[1].to_str() + (' .< ' + self.children[3].to_str() if len(self.children) == 5 else '') + ')'
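# e.g. `random.randint(1, 6)` -> `random:(1 .. 6)`, `random.randrange(10)` -> `random:(10)`, `random.randrange(0, n)` -> `random:(0 .< n)`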
if self.children[0].children[0].token_str() == 'heapq':
res = 'minheap:' + {'heappush':'push', 'heappop':'pop', 'heapify':'heapify'}[self.children[0].children[1].token_str()] + '(&'
for i in range(1, len(self.children), 2):
assert(self.children[i+1] is None)
res += self.children[i].to_str()
if i < len(self.children)-2:
res += ', '
return res + ')'
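# e.g. `heapq.heappush(h, x)` -> `minheap:push(&h, x)`, `heapq.heappop(h)` -> `minheap:pop(&h)`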
if self.children[0].children[0].token_str() == 'itertools' and self.children[0].children[1].token_str() == 'count': # `itertools.count(1)` -> `1..`
return self.children[1].to_str() + '..'
func_name = self.children[0].to_str()
if func_name == 'str':
func_name = 'String'
elif func_name in ('int', 'Int64'):
if func_name == 'int':
func_name = 'Int'
if len(self.children) == 5:
return func_name + '(' + self.children[1].to_str() + ", radix' " + self.children[3].to_str() + ')'
elif func_name == 'float':
if len(self.children) == 3 and self.children[1].token.category == Token.Category.STRING_LITERAL and self.children[1].token_str()[1:-1].lower() in ('infinity', 'inf'):
return 'Float.infinity'
func_name = 'Float'
elif func_name == 'complex':
func_name = 'Complex'
elif func_name == 'list': # `list(map(...))` -> `map(...)`
if len(self.children) == 3 and self.children[1].symbol.id == '(' and self.children[1].children[0].token_str() == 'range': # ) # `list(range(...))` -> `Array(...)`
parens = True # len(self.children[1].children) == 7 # if true, then this is a range with step
return 'Array' + '('*parens + self.children[1].to_str() + ')'*parens
assert(len(self.children) == 3)
if self.children[1].symbol.id == '(' and self.children[1].children[0].token_str() in ('map', 'product', 'zip'): # )
return self.children[1].to_str()
else:
return 'Array(' + self.children[1].to_str() + ')'
elif func_name == 'tuple': # `tuple(sorted(...))` -> `tuple_sorted(...)`
assert(len(self.children) == 3)
if self.children[1].function_call and self.children[1].children[0].token_str() == 'sorted':
return 'tuple_' + self.children[1].to_str()
elif func_name == 'dict':
func_name = 'Dict'
elif func_name == 'set': # `set() # KeyType` -> `Set[KeyType]()`
if len(self.children) == 3:
return 'Set(' + self.children[1].to_str() + ')'
assert(len(self.children) == 1)
if source[self.token.end + 2 : self.token.end + 3] != '#':
# if self.parent is None and type(self.ast_parent) == ASTExprAssignment \
# and self.ast_parent.dest_expression.symbol.id == '.' \
# and self.ast_parent.dest_expression.children[0].token_str() == 'self' \
# and type(self.ast_parent.parent) == ASTFunctionDefinition \
# and self.ast_parent.parent.function_name == '__init__':
# return 'Set()'
raise Error('to use `set`, the type of the set keys must be specified in a comment', self.children[0].token)
sl = slice(self.token.end + 3, source.find("\n", self.token.end + 3))
return 'Set[' + trans_type(source[sl].lstrip(' '), self.scope, Token(sl.start, sl.stop, Token.Category.NAME)) + ']()'
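# e.g. `s = set() # int` -> `Set[Int]()`; the key type is taken from the trailing comment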
elif func_name == 'open':
func_name = 'File'
mode = '‘r’'
for i in range(1, len(self.children), 2):
if self.children[i+1] is None:
if i == 3:
mode = self.children[i].to_str()
else:
arg_name = self.children[i].to_str()
if arg_name == 'mode':
mode = self.children[i+1].to_str()
elif arg_name == 'newline':
if mode not in ('‘w’', '"w"'):
raise Error("`newline` argument is only supported in 'w' mode", self.children[i].token)
if self.children[i+1].to_str() != '"\\n"':
raise Error(R'the only allowed value for `newline` argument is `"\n"`', self.children[i+1].token)
self.children.pop(i+1)
self.children.pop(i)
break
elif func_name == 'product':
func_name = 'cart_product'
elif func_name == 'deepcopy':
func_name = 'copy'
elif func_name == 'print' and self.iterable_unpacking:
func_name = 'print_elements'
if func_name == 'len': # replace `len(container)` with `container.len`
assert(len(self.children) == 3)
if isinstance(self.ast_parent, (ASTIf, ASTWhile)) if self.parent is None else self.parent.symbol.id == 'if': # `if len(arr)` -> `I !arr.empty`
return '!' + self.children[1].to_str() + '.empty'
if len(self.children[1].children) == 2 and self.children[1].symbol.id not in ('.', '['): # ]
return '(' + self.children[1].to_str() + ')' + '.len'
return self.children[1].to_str() + '.len'
elif func_name == 'ord': # replace `ord(ch)` with `ch.code`
assert(len(self.children) == 3)
return self.children[1].to_str() + '.code'
elif func_name == 'chr': # replace `chr(code)` with `Char(code' code)`
assert(len(self.children) == 3)
return "Char(code' " + self.children[1].to_str() + ')'
elif func_name == 'isinstance': # replace `isinstance(obj, type)` with `T(obj) >= type`
assert(len(self.children) == 5)
return 'T(' + self.children[1].to_str() + ') >= ' + self.children[3].to_str()
elif func_name in ('map', 'filter'): # replace `map(function, iterable)` with `iterable.map(function)`
assert(len(self.children) == 5)
b = len(self.children[3].children) > 1 and self.children[3].symbol.id not in ('(', '[') # ])
c1 = self.children[1].to_str()
return '('*b + self.children[3].to_str() + ')'*b + '.' + func_name + '(' + {'int':'Int', 'float':'Float', 'str':'String'}.get(c1, c1) + ')'
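# e.g. `map(int, a)` -> `a.map(Int)`, `filter(lambda x: x > 0, a)` -> `a.filter(x -> x > 0)`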
elif func_name == 'reduce':
if len(self.children) == 5: # replace `reduce(function, iterable)` with `iterable.reduce(function)`
return self.children[3].to_str() + '.reduce(' + self.children[1].to_str() + ')'
else: # replace `reduce(function, iterable, initial)` with `iterable.reduce(initial, function)`
assert(len(self.children) == 7)
return self.children[3].to_str() + '.reduce(' + self.children[5].to_str() + ', ' + self.children[1].to_str() + ')'
elif func_name == 'super': # replace `super()` with `T.base`
assert(len(self.children) == 1)
return 'T.base'
elif func_name == 'range':
assert(3 <= len(self.children) <= 7)
parenthesis = ('(', ')') if self.parent is not None and (self.parent.symbol.id == 'for' or (self.parent.function_call and self.parent.children[0].token_str() in ('map', 'filter', 'reduce'))) else ('', '')
if len(self.children) == 3: # replace `range(e)` with `(0 .< e)`
space = ' ' * range_need_space(self.children[1], None)
c1 = self.children[1].to_str()
if c1.endswith(' + 1'): # `range(e + 1)` -> `0 .. e`
return parenthesis[0] + '0' + space + '..' + space + c1[:-4] + parenthesis[1]
return parenthesis[0] + '0' + space + '.<' + space + c1 + parenthesis[1]
else:
rangestr = ' .< ' if range_need_space(self.children[1], self.children[3]) else '.<'
if len(self.children) == 5: # replace `range(b, e)` with `(b .< e)`
if self.children[3].token.category == Token.Category.NUMERIC_LITERAL and self.children[3].token_str().replace('_', '').isdigit() and \
self.children[1].token.category == Token.Category.NUMERIC_LITERAL and self.children[1].token_str().replace('_', '').isdigit(): # if `b` and `e` are numeric literals, then ...
return parenthesis[0] + self.children[1].token_str().replace('_', '') + '..' + str(int(self.children[3].token_str().replace('_', '')) - 1) + parenthesis[1] # ... replace `range(b, e)` with `(b..e-1)`
c3 = self.children[3].to_str()
if c3.endswith(' + 1'): # `range(a, b + 1)` -> `a .. b`
return parenthesis[0] + self.children[1].to_str() + rangestr.replace('<', '.') + c3[:-4] + parenthesis[1]
return parenthesis[0] + self.children[1].to_str() + rangestr + c3 + parenthesis[1]
else: # replace `range(b, e, step)` with `(b .< e).step(step)`
return '(' + self.children[1].to_str() + rangestr + self.children[3].to_str() + ').step(' + self.children[5].to_str() + ')'
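# Summary (spacing around `..`/`.<` is decided by range_need_space):
# `range(10)` -> `0.<10`, `range(n + 1)` -> `0..n`, `range(1, 5)` -> `1..4`, `range(a, b)` -> `a.<b`, `range(a, b, s)` -> `(a.<b).step(s)`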
elif func_name == 'print':
first_named_argument = len(self.children)
for i in range(1, len(self.children), 2):
if self.children[i+1] is not None:
first_named_argument = i
break
sep = '‘ ’'
for i in range(first_named_argument, len(self.children), 2):
assert(self.children[i+1] is not None)
if self.children[i].to_str() == 'sep':
sep = self.children[i+1].to_str()
break
def surround_with_sep(s, before, after):
if (sep in ('‘ ’', '‘’') # special case for ‘ ’ and ‘’
or sep[0] == s[0]): # ‘`‘sep’‘str’‘sep’` -> `‘sepstrsep’`’|‘`"sep""str""sep"` -> `"sepstrsep"`’
return s[0] + sep[1:-1]*before + s[1:-1] + sep[1:-1]*after + s[-1]
else: # `"sep"‘str’"sep"`|`‘sep’"str"‘sep’`
return sep*before + s + sep*after
def parenthesize_if_needed(child):
#if child.token.category in (Token.Category.NAME, Token.Category.NUMERIC_LITERAL) or child.symbol.id == '[': # ] # `print(‘Result: ’3)` is currently not supported in 11l
if child.token.category == Token.Category.NAME or child.symbol.id in ('[', '('): # )]
return child.to_str()
else:
return '(' + child.to_str() + ')'
res = 'print('
for i in range(1, first_named_argument, 2):
if i == 1: # it's the first argument
if i == first_named_argument - 2: # it's the only argument — ‘no sep is required’/‘no parentheses are required’
res += self.children[i].to_str()
elif self.children[i].token.category == Token.Category.STRING_LITERAL:
res += surround_with_sep(self.children[i].to_str(), False, True)
else:
res += parenthesize_if_needed(self.children[i])
else:
if self.children[i].token.category == Token.Category.STRING_LITERAL:
if self.children[i-2].token.category == Token.Category.STRING_LITERAL:
raise Error('consecutive string literals in `print()` are not supported', self.children[i].token)
res += surround_with_sep(self.children[i].to_str(), True, i != first_named_argument - 2)
else:
if self.children[i-2].token.category != Token.Category.STRING_LITERAL:
res += sep
res += parenthesize_if_needed(self.children[i])
for i in range(first_named_argument, len(self.children), 2):
if self.children[i].to_str() != 'sep':
if len(res) > len('print('): # )
res += ', '
res += self.children[i].to_str() + "' " + self.children[i+1].to_str()
return res + ')'
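# e.g. `print('x =', x)` -> `print(‘x = ’x)` and `print(a, b, sep = ', ')` -> `print(a‘, ’b)`; a string `sep` is fused into adjacent string literals where possible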
else:
if ':' in func_name:
colon_pos = func_name.rfind(':')
module_name = func_name[:colon_pos].replace(':', '.')
if module_name in modules:
tid = modules[module_name].scope.find(func_name[colon_pos+1:])
else:
tid = None
elif func_name.startswith('.'):
s = self.scope
while True:
if s.is_function and not s.is_lambda_or_for:
tid = s.parent.vars.get(func_name[1:])
break
s = s.parent
if s is None:
tid = None
break
else:
tid = self.scope.find(func_name)
f_node = tid.node if tid is not None and type(tid.node) == ASTFunctionDefinition else None
res = func_name + '('
for i in range(1, len(self.children), 2):
if self.children[i+1] is None:
if f_node is not None:
fargs = f_node.function_arguments[i//2 + int(func_name.startswith('.'))]
arg_type_name = fargs[2]
if arg_type_name.startswith(('List[', 'Dict[', 'DefaultDict[')) or (arg_type_name != '' and trans_type(arg_type_name, self.scope, self.children[i].token).endswith('&')) or fargs[3] == '&': # ]]]
res += '&'
res += self.children[i].to_str()
else:
ci_str = self.children[i].to_str()
res += ci_str + "' "
if f_node is not None:
for farg in f_node.function_arguments:
if farg[0] == ci_str:
if farg[2].startswith(('List[', 'Dict[')): # ]]
res += '&'
break
res += self.children[i+1].to_str()
if i < len(self.children)-2:
res += ', '
return res + ')'
elif self.tuple:
res = '('
for i in range(len(self.children)):
res += self.children[i].to_str()
if i < len(self.children)-1:
res += ', '
if len(self.children) == 1:
res += ','
return res + ')'
else:
assert(len(self.children) == 1)
return '(' + self.children[0].to_str() + ')'
elif self.symbol.id == '[': # ]
if self.is_list:
if len(self.children) == 1 and self.children[0].symbol.id == 'for':
return self.children[0].to_str()
res = '['
for i in range(len(self.children)):
res += self.children[i].to_str()
if i < len(self.children)-1:
res += ', '
return res + ']'
elif self.children[0].symbol.id == '{': # }
parenthesis = ('(', ')') if self.parent is not None else ('', '')
res = parenthesis[0] + 'S ' + self.children[1].to_str() + ' {'
for i in range(0, len(self.children[0].children), 2):
res += self.children[0].children[i].to_str() + ' {' + self.children[0].children[i+1].to_str() + '}'
if i < len(self.children[0].children)-2:
res += '; '
return res + '}' + parenthesis[1]
else:
c0 = self.children[0].to_str()
if self.slicing:
if len(self.children) == 2: # `a = b[:]` -> `a = copy(b)`
assert(self.children[1] is None)
return 'copy(' + c0 + ')'
if c0.startswith('bin(') and len(self.children) == 3 and self.children[1].token_str() == '2' and self.children[2] is None: # ) # `bin(x)[2:]` -> `bin(x)`
return c0
if len(self.children) == 4 and self.children[1] is None and self.children[2] is None and self.children[3].symbol.id == '-' and len(self.children[3].children) == 1 and self.children[3].children[0].token_str() == '1': # replace `result[::-1]` with `reversed(result)`
return 'reversed(' + c0 + ')'
def for_negative_bound(c):
child = self.children[c]
if child is None:
return None
r = child.to_str()
if r[0] == '-': # hacky implementation of ‘this rule’[https://docs.python.org/3/reference/simple_stmts.html]:‘If either bound is negative, the sequence's length is added to it.’
r = '(len)' + r
return r
space = ' ' * range_need_space(self.children[1], self.children[2])
fnb2 = for_negative_bound(2)
s = (for_negative_bound(1) or '0') + space + '.' + ('<' + space + fnb2 if fnb2 else '.')
if len(self.children) == 4 and self.children[3] is not None:
s = '(' + s + ').step(' + self.children[3].to_str() + ')'
return c0 + '[' + s + ']'
elif self.children[1].to_str() == '-1':
return c0 + '.last'
else:
c1 = self.children[1].to_str()
return (c0 + '['
+ '(len)'*(c1[0] == '-') # hacky implementation of ‘this rule’[https://docs.python.org/3/reference/simple_stmts.html]:‘the subscript must yield an integer. If it is negative, the sequence's length is added to it.’
+ c1 + ']')
elif self.symbol.id == '{': # }
if len(self.children) == 0:
return 'Dict()'
if self.is_set:
is_not_for = self.children[0].symbol.id != 'for'
res = 'Set(' + '['*is_not_for
for i in range(len(self.children)):
res += self.children[i].to_str()
if i < len(self.children)-1:
res += ', '
return res + ']'*is_not_for + ')'
if self.children[-1].symbol.id == 'for':
assert(len(self.children) == 2)
c = self.children[1]
c2s = c.children[2].to_str()
return 'Dict(' + (c2s[1:-1] if c.children[2].function_call and c.children[2].children[0].token_str() == 'range' else c2s) + ', ' + c.children[1].to_str() + ' -> (' + self.children[0].to_str() + ', ' + c.children[0].to_str() + '))'
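# e.g. `{x: x*x for x in range(10)}` -> `Dict(0.<10, x -> (x, x * x))`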
res = '['
for i in range(0, len(self.children), 2):
res += self.children[i].to_str() + ' = ' + self.children[i+1].to_str()
if i < len(self.children)-2:
res += ', '
return res + ']'
elif self.symbol.id == 'lambda':
r = '(' if len(self.children) != 3 else ''
for i in range(0, len(self.children)-1, 2):
r += self.children[i].token_str()
if self.children[i+1] is not None:
r += ' = ' + self.children[i+1].to_str()
if i < len(self.children)-3:
r += ', '
if len(self.children) != 3: r += ')'
return r + ' -> ' + self.children[-1].to_str()
elif self.symbol.id == 'for':
if self.children[2].token_str() == 'for': # this is a multiloop
if self.children[2].children[2].token_str() == 'for': # this is a multiloop3
filtered = len(self.children[2].children[2].children) == 4
res = 'multiloop' + '_filtered'*filtered + '(' + self.children[2].children[0].to_str() + ', ' + self.children[2].children[2].children[0].to_str() + ', ' + self.children[2].children[2].children[2].to_str()
fparams = ', (' + self.children[1].token_str() + ', ' + self.children[2].children[1].token_str() + ', ' + self.children[2].children[2].children[1].token_str() + ') -> '
if filtered:
res += fparams + self.children[2].children[2].children[3].to_str()
res += fparams + self.children[0].to_str() + ')'
return res
filtered = len(self.children[2].children) == 4
res = 'multiloop' + '_filtered'*filtered + '(' + self.children[2].children[0].to_str() + ', ' + self.children[2].children[2].to_str()
fparams = ', (' + self.children[1].token_str() + ', ' + self.children[2].children[1].token_str() + ') -> '
if filtered:
res += fparams + self.children[2].children[3].to_str()
res += fparams + self.children[0].to_str() + ')'
return res
res = self.children[2].children[0].children[0].to_str() if self.children[2].symbol.id == '(' and len(self.children[2].children) == 1 and self.children[2].children[0].symbol.id == '.' and len(self.children[2].children[0].children) == 2 and self.children[2].children[0].children[1].token_str() == 'items' else self.children[2].to_str() # )
if len(self.children) == 4:
res += '.filter(' + self.children[1].to_str() + ' -> ' + self.children[3].to_str() + ')'
if self.children[1].to_str() != self.children[0].to_str():
res += '.map(' + self.children[1].to_str() + ' -> ' + self.children[0].to_str() + ')'
return res
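# e.g. `[x * 2 for x in arr if x > 0]` -> `arr.filter(x -> x > 0).map(x -> x * 2)`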
elif self.symbol.id == 'not':
if len(self.children) == 1:
if (self.children[0].token.category == Token.Category.OPERATOR_OR_DELIMITER or (self.children[0].token.category == Token.Category.KEYWORD and self.children[0].symbol.id == 'in')) and len(self.children[0].children) == 2:
return '!(' + self.children[0].to_str() + ')'
else:
return '!' + self.children[0].to_str()
else:
assert(len(self.children) == 2)
return self.children[0].to_str() + ' !C ' + self.children[1].to_str()
elif self.symbol.id == 'is':
if self.children[1].token_str() == 'None':
return self.children[0].to_str() + (' != ' if self.is_not else ' == ') + 'N'
return '&' + self.children[0].to_str() + (' != ' if self.is_not else ' == ') + '&' + self.children[1].to_str()
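# `a is None` -> `a == N`, `a is not None` -> `a != N`; otherwise identity is compared via addresses: `a is b` -> `&a == &b`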
if len(self.children) == 1:
#return '(' + self.symbol.id + self.children[0].to_str() + ')'
return {'~':'(-)'}.get(self.symbol.id, self.symbol.id) + self.children[0].to_str()
elif len(self.children) == 2:
#return '(' + self.children[0].to_str() + ' ' + self.symbol.id + ' ' + self.children[1].to_str() + ')'
if self.symbol.id == '.':
if self.children[0].symbol.id == '{' and self.children[1].token.category == Token.Category.NAME and self.children[1].token.value(source) == 'get': # } # replace `{'and':'&', 'or':'|', 'in':'C'}.get(self.symbol.id, 'symbol-' + self.symbol.id)` with `(S .symbol.id {‘and’ {‘&’}; ‘or’ {‘|’}; ‘in’ {‘C’} E ‘symbol-’(.symbol.id)})`
res = 'S ' + self.parent.children[1].to_str() + ' {'
for i in range(0, len(self.children[0].children), 2):
res += self.children[0].children[i].to_str() + ' {' + self.children[0].children[i+1].to_str() + '}'
if i < len(self.children[0].children)-2:
res += '; '
return res + ' E ' + self.parent.children[3].to_str() + '}'
c1ts = self.children[1].token_str()
if self.children[0].token_str() == 'sys' and c1ts in ('argv', 'exit', 'stdin', 'stdout', 'stderr'):
return ':'*(c1ts != 'exit') + c1ts
if self.children[0].scope_prefix == ':::':
if self.children[0].token_str() in ('math', 'cmath'):
c1 = self.children[1].to_str()
if c1 not in ('e', 'pi'):
if c1 == 'fabs': c1 = 'abs'
return c1
r = self.children[0].token_str() + ':' + self.children[1].to_str()
return {'tempfile:gettempdir': 'fs:get_temp_dir', 'os:path': 'fs:path', 'os:pathsep': 'os:env_path_sep', 'os:sep': 'fs:path:sep', 'os:system': 'os:', 'os:listdir': 'fs:list_dir', 'os:walk': 'fs:walk_dir',
'os:mkdir': 'fs:create_dir', 'os:makedirs': 'fs:create_dirs', 'os:remove': 'fs:remove_file', 'os:rmdir': 'fs:remove_dir', 'os:rename': 'fs:rename',
'time:time': 'Time().unix_time', 'time:sleep': 'sleep', 'datetime:datetime': 'Time', 'datetime:date': 'Time', 'datetime:timedelta': 'TimeDelta', 're:compile': 're:',
'random:random': 'random:'}.get(r, r)
if self.children[0].symbol.id == '.' and self.children[0].children[0].scope_prefix == ':::':
if self.children[0].children[0].token_str() == 'datetime':
if self.children[0].children[1].token_str() == 'datetime':
if self.children[1].token_str() == 'now': # `datetime.datetime.now()` -> `Time()`
return 'Time'
if self.children[1].token_str() == 'fromtimestamp': # `datetime.datetime.fromtimestamp()` -> `time:from_unix_time()`
return 'time:from_unix_time'
if self.children[1].token_str() == 'strptime': # `datetime.datetime.strptime()` -> `time:strptime()`
return 'time:strptime'
if self.children[0].children[1].token_str() == 'date' and self.children[1].token_str() == 'today': # `datetime.date.today()` -> `time:today()`
return 'time:today'
if self.children[0].children[0].token_str() == 'os' and self.children[0].children[1].token_str() == 'path':
r = {'pathsep':'os:env_path_sep', 'isdir':'fs:is_dir', 'isfile':'fs:is_file', 'islink':'fs:is_symlink',
'dirname':'fs:path:dir_name', 'basename':'fs:path:base_name', 'abspath':'fs:path:absolute', 'relpath':'fs:path:relative',
'getsize':'fs:file_size', 'splitext':'fs:path:split_ext'}.get(self.children[1].token_str(), '')
if r != '':
return r
if len(self.children[0].children) == 2 and self.children[0].children[0].scope_prefix == ':::' and self.children[0].children[0].token_str() != 'sys': # for `os.path.join()` [and also take into account `sys.argv.index()`]
return self.children[0].to_str() + ':' + self.children[1].to_str()
if self.children[0].to_str() == 'self':
parent = self
while parent.parent:
if parent.parent.symbol.id == 'for' and id(parent.parent.children[0]) == id(parent):
return '@.' + self.children[1].to_str()
parent = parent.parent
if parent.symbol.id == 'lambda':
if len(parent.children) >= 3 and parent.children[0].token_str() == 'self':
return 'self.' + self.children[1].to_str()
return '@.' + self.children[1].to_str()
ast_parent = parent.ast_parent
function_nesting = 0
while type(ast_parent) != ASTProgram:
if type(ast_parent) == ASTFunctionDefinition:
if len(ast_parent.function_arguments) >= 1 and ast_parent.function_arguments[0][0] == 'self' and type(ast_parent.parent) != ASTClassDefinition:
return 'self.' + self.children[1].to_str()
function_nesting += 1
if function_nesting == 2:
break
elif type(ast_parent) == ASTClassDefinition:
break
ast_parent = ast_parent.parent
return ('@' if function_nesting == 2 else '') + '.' + self.children[1].to_str()
if c1ts == 'days':
return self.children[0].to_str() + '.' + c1ts + '()'
return self.children[0].to_str() + '.' + self.children[1].to_str()
elif self.symbol.id == '+=' and self.children[1].symbol.id == '[' and self.children[1].is_list: # ]
c1 = self.children[1].to_str()
return self.children[0].to_str() + ' [+]= ' + (c1[1:-1] if len(self.children[1].children) == 1 and c1.startswith('[') else c1) # ]
elif self.symbol.id == '+=' and self.children[1].token.value(source) == '1':
return self.children[0].to_str() + '++'
elif self.symbol.id == '-=' and self.children[1].token.value(source) == '1':
return '--' + self.children[0].to_str() if self.parent else self.children[0].to_str() + '--'
elif self.symbol.id == '+=' and ((self.children[0].token.category == Token.Category.NAME and self.children[0].var_type() == 'str')
or (self.children[1].symbol.id == '+' and len(self.children[1].children) == 2 and
(self.children[1].children[0].token.category == Token.Category.STRING_LITERAL
or self.children[1].children[1].token.category == Token.Category.STRING_LITERAL))
or self.children[1].token.category == Token.Category.STRING_LITERAL):
return self.children[0].to_str() + ' ‘’= ' + self.children[1].to_str()
elif self.symbol.id == '+=' and self.children[0].token.category == Token.Category.NAME and self.children[0].var_type() == 'List':
return self.children[0].to_str() + ' [+]= ' + self.children[1].to_str()
elif self.symbol.id == '+' and self.children[1].symbol.id == '*' and self.children[0].token.category == Token.Category.STRING_LITERAL \
and self.children[1].children[1].token.category == Token.Category.STRING_LITERAL: # for `outfile.write('<blockquote'+(ch=='<')*' class="re"'+'>')`
return self.children[0].to_str() + '(' + self.children[1].to_str() + ')'
elif self.symbol.id == '+' and self.children[1].symbol.id == '*' and self.children[1].children[0].token.category == Token.Category.STRING_LITERAL \
and (self.children[0].token.category == Token.Category.STRING_LITERAL
or (self.children[0].symbol.id == '+'
and self.children[0].children[1].token.category == Token.Category.STRING_LITERAL)): # for `outfile.write("<table"+' style="display: inline"'*(prevci != 0 and instr[prevci-1] != "\n")+...)` and `outfile.write('<pre>' + ins + '</pre>' + "\n"*(not self.habr_html))`
return self.children[0].to_str() + '(' + self.children[1].to_str() + ')'
elif self.symbol.id == '+' and self.children[1].token.category == Token.Category.STRING_LITERAL and ((self.children[0].symbol.id == '+'
and self.children[0].children[1].token.category == Token.Category.STRING_LITERAL) # for `outfile.write(... + '<br /></span>' # ... \n + '<div class="spoiler_text" ...')`
or self.children[0].token.category == Token.Category.STRING_LITERAL): # for `pre {margin: 0;}''' + # ... \n '''...`
c0 = self.children[0].to_str()
c1 = self.children[1].to_str()
return c0 + {('"','"'):'‘’', ('"','‘'):'', ('’','‘'):'""', ('’','"'):''}[(c0[-1], c1[0])] + c1
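# the table above chooses a separator based on the adjacent quote characters so that two concatenated string literals stay valid 11l literals, e.g. `‘a’ + ‘b’` -> `‘a’""‘b’` and `"a" + "b"` -> `"a"‘’"b"`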
elif self.symbol.id == '+' and (self.children[0].token.category == Token.Category.STRING_LITERAL
or self.children[1].token.category == Token.Category.STRING_LITERAL
or (self.children[0].symbol.id == '+' and self.children[0].children[1].token.category == Token.Category.STRING_LITERAL)):
c1 = self.children[1].to_str()
return self.children[0].to_str() + ('(' + c1 + ')' if c1[0] == '.' else c1)
elif self.symbol.id == '+' and self.children[1].symbol.id == '*' and (self.children[1].children[0].token.category == Token.Category.STRING_LITERAL # for `self.newlines() + ' ' * (indent*3) + 'F ' + ...`
or self.children[1].children[1].token.category == Token.Category.STRING_LITERAL): # for `(... + self.ohd*'</span>')`
p = self.children[0].symbol.id == '*'
return '('*p + self.children[0].to_str() + ')'*p + '‘’(' + self.children[1].to_str() + ')'
elif self.symbol.id == '+' and self.children[0].symbol.id == '*' and self.children[0].children[0].token.category == Token.Category.STRING_LITERAL: # for `' ' * (indent*3) + self.expression.to_str() + "\n"`
c1 = self.children[1].to_str()
return '(' + self.children[0].to_str() + ')‘’' + ('(' + c1 + ')' if c1[0] == '.' else c1)
elif self.symbol.id == '+' and (self.children[0].var_type() == 'str' or self.children[1].var_type() == 'str'):
return self.children[0].to_str() + '‘’' + self.children[1].to_str()
elif self.symbol.id == '+' and (self.children[0].var_type() == 'List' or self.children[1].var_type() == 'List'):
return self.children[0].to_str() + ' [+] ' + self.children[1].to_str()
elif self.symbol.id == '<=' and self.children[0].symbol.id == '<=': # replace `'0' <= ch <= '9'` with `ch C ‘0’..‘9’`
return self.children[0].children[1].to_str() + ' C ' + self.children[0].children[0].to_str() + (' .. ' if range_need_space(self.children[0].children[0], self.children[1]) else '..') + self.children[1].to_str()
elif self.symbol.id == '<' and self.children[0].symbol.id == '<=': # replace `'0' <= ch < '9'` with `ch C ‘0’.<‘9’`
return self.children[0].children[1].to_str() + ' C ' + self.children[0].children[0].to_str() + (' .< ' if range_need_space(self.children[0].children[0], self.children[1]) else '.<') + self.children[1].to_str()
elif self.symbol.id == '<=' and self.children[0].symbol.id == '<' : # replace `'0' < ch <= '9'` with `ch C ‘0’<.‘9’`
return self.children[0].children[1].to_str() + ' C ' + self.children[0].children[0].to_str() + (' <. ' if range_need_space(self.children[0].children[0], self.children[1]) else '<.') + self.children[1].to_str()
elif self.symbol.id == '<' and self.children[0].symbol.id == '<' : # replace `'0' <= ch <= '9'` with `ch C ‘0’<.<‘9’`
return self.children[0].children[1].to_str() + ' C ' + self.children[0].children[0].to_str() + (' <.< ' if range_need_space(self.children[0].children[0], self.children[1]) else '<.<') + self.children[1].to_str()
elif self.symbol.id == '==' and self.children[0].symbol.id == '(' and self.children[0].children[0].to_str() == 'len' and self.children[1].token.value(source) == '0': # ) # replace `len(arr) == 0` with `arr.empty`
return self.children[0].children[1].to_str() + '.empty'
elif self.symbol.id == '!=' and self.children[0].symbol.id == '(' and self.children[0].children[0].to_str() == 'len' and self.children[1].token.value(source) == '0': # ) # replace `len(arr) != 0` with `!arr.empty`
return '!' + self.children[0].children[1].to_str() + '.empty'
elif self.symbol.id in ('==', '!=') and self.children[1].symbol.id == '.' and len(self.children[1].children) == 2 and self.children[1].children[1].token_str().isupper(): # replace `token.category == Token.Category.NAME` with `token.category == NAME`
#self.skip_find_and_get_prefix = True # this is not needed here because in AST there is still `Token.Category.NAME`, not just `NAME`
return self.children[0].to_str() + ' ' + self.symbol.id + ' ' + self.children[1].children[1].token_str()
elif self.symbol.id in ('==', '!=') and self.children[0].function_call and self.children[0].children[0].token_str() == 'id' and self.children[1].function_call and self.children[1].children[0].token_str() == 'id': # replace `id(a) == id(b)` with `&a == &b`
return '&' + self.children[0].children[1].token_str() + ' ' + self.symbol.id + ' &' + self.children[1].children[1].token_str()
elif self.symbol.id == '%' and self.children[0].token.category == Token.Category.STRING_LITERAL:
add_parentheses = self.children[1].symbol.id != '(' or self.children[1].function_call # )
fmtstr = self.children[0].to_str()
nfmtstr = ''
i = 0
while i < len(fmtstr):
if fmtstr[i] == '#':
nfmtstr += '##'
i += 1
continue
fmtchr = fmtstr[i+1:i+2]
if fmtstr[i] == '%':
if fmtchr == '%':
nfmtstr += '%'
i += 2
elif fmtchr == 'g':
nfmtstr += '#.'
i += 2
else:
nfmtstr += '#'
before_period = 0
after_period = 6
period_pos = 0
i += 1
if fmtstr[i] == '-': # left align
nfmtstr += '<'
i += 1
if fmtstr[i:i+1] == '0' and fmtstr[i+1:i+2].isdigit(): # zero padding
nfmtstr += '0'
while i < len(fmtstr) and fmtstr[i].isdigit():
before_period = before_period*10 + ord(fmtstr[i]) - ord('0')
i += 1
if fmtstr[i:i+1] == '.':
period_pos = i
i += 1
after_period = 0
while i < len(fmtstr) and fmtstr[i].isdigit():
after_period = after_period*10 + ord(fmtstr[i]) - ord('0')
i += 1
if fmtstr[i:i+1] in ('d', 'i'):
if before_period != 0:
nfmtstr += str(before_period)
else:
nfmtstr += '.' # not '.0': `#.0` corresponds to `%.0f` rather than `%i` or `%d`, and `'%i' % (1.7)` = `1`, but `‘#.0’.format(1.7)` = `2`
elif fmtstr[i:i+1] == 's':
if before_period != 0:
nfmtstr += str(before_period)
else:
nfmtstr += '.'
elif fmtstr[i:i+1] == 'f':
if before_period != 0:
b = before_period
if after_period != 0:
b -= after_period + 1
if b > 1:
nfmtstr += str(b)
nfmtstr += '.' + str(after_period)
elif fmtstr[i:i+1] == 'g':
nfmtstr += str(before_period)
if period_pos != 0:
raise Error('precision in %g conversion type is not supported', Token(self.children[0].token.start + period_pos, self.children[0].token.start + i, Token.Category.STRING_LITERAL))
else:
tpos = self.children[0].token.start + i
raise Error('unsupported format character `' + fmtstr[i:i+1] + '`', Token(tpos, tpos, Token.Category.STRING_LITERAL))
i += 1
continue
nfmtstr += fmtstr[i]
i += 1
return nfmtstr + '.format' + '('*add_parentheses + self.children[1].to_str() + ')'*add_parentheses
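# e.g. `'%.2f' % x` -> `‘#.2’.format(x)` and `'%s: %d' % (a, b)` -> `‘#.: #.’.format(a, b)`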
else:
return self.children[0].to_str() + ' ' + {'and':'&', 'or':'|', 'in':'C', '//':'I/', '//=':'I/=', '**':'^', '**=':'^=', '^':'(+)', '^=':'(+)=', '|':'[|]', '|=':'[|]=', '&':'[&]', '&=':'[&]='}.get(self.symbol.id, self.symbol.id) + ' ' + self.children[1].to_str()
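# e.g. `a // b` -> `a I/ b`, `a ** b` -> `a ^ b`, `a ^ b` -> `a (+) b`, `a | b` -> `a [|] b`, `x in c` -> `x C c`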
elif len(self.children) == 3:
assert(self.symbol.id == 'if')
c0 = self.children[0].to_str()
if self.children[1].symbol.id == 'is' and self.children[1].is_not and self.children[1].children[1].token.value(source) == 'None' and self.children[1].children[0].to_str() == c0: # replace `a if a is not None else b` with `a ? b`
return c0 + ' ? ' + self.children[2].to_str()
return 'I ' + self.children[1].to_str() + ' {' + c0 + '} E ' + self.children[2].to_str()
return ''
symbol_table : Dict[str, SymbolBase] = {}
allowed_keywords_in_expressions : List[str] = []
def symbol(id, bp = 0):
try:
s = symbol_table[id]
except KeyError:
s = SymbolBase()
s.id = id
s.lbp = bp
symbol_table[id] = s
if id[0].isalpha(): # this is a keyword-in-expression
assert(id.isalpha())
allowed_keywords_in_expressions.append(id)
else:
s.lbp = max(bp, s.lbp)
return s
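# symbol() registers (or updates) a token class in the symbol table of this Pratt parser,
# e.g. `symbol('.', 150)` gives `.` a left binding power of 150, while `symbol(')')` registers a plain delimiter with lbp 0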
class ASTNode:
parent : 'ASTNode'
def walk_expressions(self, f):
pass
def walk_children(self, f):
pass
class ASTNodeWithChildren(ASTNode):
# children : List['ASTNode'] = [] # OMFG! This would actually mean a static variable (common to all objects of type ASTNode), not a default value of a member variable; that was unexpected to me, as it contradicts C++11 behavior
children : List['ASTNode']
tokeni : int
def __init__(self):
self.children = []
self.tokeni = tokeni
def walk_children(self, f):
for child in self.children:
f(child)
def children_to_str(self, indent, t):
r = ''
if self.tokeni > 0:
ti = self.tokeni - 1
while ti > 0 and tokens[ti].category in (Token.Category.DEDENT, Token.Category.STATEMENT_SEPARATOR):
ti -= 1
r = (min(source[tokens[ti].end:tokens[self.tokeni].start].count("\n"), 2) - 1) * "\n"
r += ' ' * (indent*3) + t + "\n"
for c in self.children:
r += c.to_str(indent+1)
return r
class ASTNodeWithExpression(ASTNode):
expression : SymbolNode
def set_expression(self, expression):
self.expression = expression
self.expression.ast_parent = self
def walk_expressions(self, f):
f(self.expression)
class ASTProgram(ASTNodeWithChildren):
imported_modules : List[str] = None
def to_str(self):
r = ''
for c in self.children:
r += c.to_str(0)
return r
class ASTImport(ASTNode):
def __init__(self):
self.modules = []
def to_str(self, indent):
return ' ' * (indent*3) + '//import ' + ', '.join(self.modules) + "\n" # this is easier than avoiding the addition of an empty line here: `import sys\n\ndef f()` -> `\nF f()`
class ASTExpression(ASTNodeWithExpression):
def to_str(self, indent):
return ' ' * (indent*3) + self.expression.to_str() + "\n"
class ASTExprAssignment(ASTNodeWithExpression):
add_vars : List[bool]
drop_list = False
is_tuple_assign_expression = False
dest_expression : SymbolNode
additional_dest_expressions : List[SymbolNode]
def __init__(self):
# self.add_vars = [] # this is not necessary
self.additional_dest_expressions = []
def set_dest_expression(self, dest_expression):
self.dest_expression = dest_expression
self.dest_expression.ast_parent = self
def to_str(self, indent):
if type(self.parent) == ASTClassDefinition:
assert(len(self.add_vars) == 1 and self.add_vars[0] and not self.is_tuple_assign_expression)
return ' ' * (indent*3) + self.dest_expression.to_str() + ' = ' + self.expression.to_str() + "\n"
if self.dest_expression.slicing:
s = self.dest_expression.to_str() # [
if s.endswith(']') and self.expression.function_call and self.expression.children[0].token_str() == 'reversed' and self.expression.children[1].to_str() == s:
l = len(self.dest_expression.children[0].to_str())
return ' ' * (indent*3) + s[:l] + '.reverse_range(' + s[l+1:-1] + ")\n"
raise Error('slice assignment is not supported', self.dest_expression.left_to_right_token())
if self.drop_list:
return ' ' * (indent*3) + self.dest_expression.to_str() + ".drop()\n"
if self.dest_expression.tuple and len(self.dest_expression.children) == 2 and \
self. expression.tuple and len(self. expression.children) == 2 and \
self.dest_expression.children[0].to_str() == self.expression.children[1].to_str() and \
self.dest_expression.children[1].to_str() == self.expression.children[0].to_str():
return ' ' * (indent*3) + 'swap(&' + self.dest_expression.children[0].to_str() + ', &' + self.dest_expression.children[1].to_str() + ")\n"
if self.is_tuple_assign_expression or not any(self.add_vars):
r = ' ' * (indent*3) + self.dest_expression.to_str()
for ade in self.additional_dest_expressions:
r += ' = ' + ade.to_str()
return r + ' = ' + self.expression.to_str() + "\n"
if all(self.add_vars):
if self.expression.function_call and self.expression.children[0].token_str() == 'ref':
assert(len(self.expression.children) == 3)
return ' ' * (indent*3) + 'V& ' + self.dest_expression.to_str() + ' = ' + self.expression.children[1].to_str() + "\n"
return ' ' * (indent*3) + 'V ' + self.dest_expression.to_str() + ' = ' + self.expression.to_str() + "\n"
assert(self.dest_expression.tuple and len(self.dest_expression.children) == len(self.add_vars))
r = ' ' * (indent*3) + '('
for i in range(len(self.add_vars)):
if self.add_vars[i]:
r += 'V '
assert(self.dest_expression.children[i].token.category == Token.Category.NAME)
r += self.dest_expression.children[i].token_str()
if i < len(self.add_vars)-1:
r += ', '
return r + ') = ' + self.expression.to_str() + "\n"
def walk_expressions(self, f):
f(self.dest_expression)
super().walk_expressions(f)
class ASTAssert(ASTNodeWithExpression):
expression2 : SymbolNode = None
def set_expression2(self, expression2):
self.expression2 = expression2
self.expression2.ast_parent = self
def to_str(self, indent):
return ' ' * (indent*3) + 'assert(' + (self.expression.children[0].to_str() if self.expression.symbol.id == '(' and not self.expression.tuple and not self.expression.function_call # )
else self.expression.to_str()) + (', ' + self.expression2.to_str() if self.expression2 is not None else '') + ")\n"
def walk_expressions(self, f):
if self.expression2 is not None: f(self.expression2)
super().walk_expressions(f)
python_types_to_11l = {'&':'&', 'int':'Int', 'float':'Float', 'complex':'Complex', 'str':'String', 'Char':'Char', 'Int64':'Int64', 'UInt32':'UInt32', 'Byte':'Byte', 'bool':'Bool', 'None':'N', 'List':'', 'Tuple':'Tuple', 'Dict':'Dict', 'DefaultDict':'DefaultDict', 'Set':'Set', 'IO[str]': 'File',
'datetime.date':'Time', 'datetime.datetime':'Time'}
def trans_type(ty, scope, type_token):
if ty[0] in '\'"':
assert(ty[-1] == ty[0])
ty = ty[1:-1]
t = python_types_to_11l.get(ty)
if t is not None:
return t
else:
p = ty.find('[')
if p != -1:
assert(ty[-1] == ']')
i = p + 1
s = i
nesting_level = 0
types = []
while True:
if ty[i] == '[':
nesting_level += 1
elif ty[i] == ']':
if nesting_level == 0:
assert(i == len(ty)-1)
types.append(trans_type(ty[s:i], scope, type_token))
break
nesting_level -= 1
elif ty[i] == ',':
if nesting_level == 0: # ignore inner commas
if ty[s:i] == '[]' and ty.startswith('Callable['): # ] # for `Callable[[], str]`
types.append('()')
else:
types.append(trans_type(ty[s:i], scope, type_token))
i += 1
while ty[i] == ' ':
i += 1
s = i
#continue # this is not necessary here
i += 1
if ty.startswith('Tuple['): # ]
return '(' + ', '.join(types) + ')'
if ty.startswith('Dict['): # ]
assert(len(types) == 2)
return '[' + types[0] + ' = ' + types[1] + ']'
if ty.startswith('Callable['): # ]
assert(len(types) == 2)
return '(' + types[0] + ' -> ' + types[1] + ')'
if p == 0: # for `Callable`
assert(len(types) != 0)
parens = len(types) > 1
return '('*parens + ', '.join(types) + ')'*parens
return trans_type(ty[:p], scope, type_token) + '[' + ', '.join(types) + ']'
assert(ty.find(',') == -1)
if '.' in ty: # for `category : Token.Category`
return ty # [-TODO: generalize-]
id = scope.find(ty)
if id is None:
raise Error('class `' + ty + '` is not defined', type_token)
if id.type != '(Class)':
raise Error('`' + ty + '`: expected a class name (got variable' + (' of type `' + id.type + '`' if id.type != '' else '') + ')', type_token)
return ty + '&'*id.node.is_inout
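# e.g. trans_type('Dict[str, List[int]]', ...) gives '[String = [Int]]' (`List[...]` becomes a bare `[...]` array type)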
class ASTTypeHint(ASTNode):
var : str
type : str
type_args : List[str]
scope : Scope
type_token : Token
is_reference = False
def __init__(self):
self.scope = scope
def trans_type(self, ty):
return trans_type(ty, self.scope, self.type_token)
def to_str_(self, indent, nullable = False):
if self.type == 'Callable':
if self.type_args[0] == '':
args = '()'
else:
tt = self.type_args[0].split(',')
args = ', '.join(self.trans_type(ty) for ty in tt)
if len(tt) > 1:
args = '(' + args + ')'
return ' ' * (indent*3) + '(' + args + ' -> ' + self.trans_type(self.type_args[1]) + ') ' + self.var
elif self.type == 'Optional':
assert(len(self.type_args) == 1)
return ' ' * (indent*3) + self.trans_type(self.type_args[0]) + ('& ' if self.is_reference else '? ') + self.var
return ' ' * (indent*3) + self.trans_type(self.type + ('[' + ', '.join(self.type_args) + ']' if len(self.type_args) else '')) + '?'*nullable + '&'*self.is_reference + ' ' + self.var
def to_str(self, indent):
return self.to_str_(indent) + "\n"
class ASTAssignmentWithTypeHint(ASTTypeHint, ASTNodeWithExpression):
def to_str(self, indent):
if self.type == 'DefaultDict':
assert(self.expression.function_call and self.expression.children[0].to_str() == 'collections:defaultdict')
return super().to_str(indent)
expression_str = self.expression.to_str()
if expression_str == 'N':
return super().to_str_(indent, True) + "\n"
return super().to_str_(indent) + (' = ' + expression_str if expression_str not in ('[]', 'Dict()') else '') + "\n"
class ASTFunctionDefinition(ASTNodeWithChildren):
function_name : str
function_return_type : str = ''
is_const = False
function_arguments : List[Tuple[str, str, str, str]]# = [] # (arg_name, default_value, type_name, qualifier)
first_named_only_argument = None
class VirtualCategory(IntEnum):
NO = 0
NEW = 1
OVERRIDE = 2
ABSTRACT = 3
ASSIGN = 4
virtual_category = VirtualCategory.NO
scope : Scope
def __init__(self):
super().__init__()
self.function_arguments = []
self.scope = scope
def serialize_to_dict(self):
return {'function_arguments': ['; '.join(arg) for arg in self.function_arguments]}
def deserialize_from_dict(self, d):
self.function_arguments = [arg.split('; ') for arg in d['function_arguments']]
def to_str(self, indent):
if self.function_name in ('move', 'copy', 'ref') and type(self.parent) == ASTProgram:
assert(len(self.function_arguments) == 1)
return ''
fargs = []
for arg in self.function_arguments:
farg = ''
default_value = arg[1]
if arg[2] != '':
ty = trans_type(arg[2], self.scope, tokens[self.tokeni])
# if ty.endswith('&'): # fix error ‘expected function's argument name’ at `F trazar(Rayo& =r; prof)` (when there was `r = ...` instead of `rr = ...`)
# arg = (arg[0].lstrip('='), arg[1], arg[2])
farg += ty
if default_value == 'N':
farg += '?'
assert(arg[3] == '')
farg += ' '
if ty.startswith(('Array[', '[', 'Dict[', 'DefaultDict[')) or arg[3] == '&': # ]]]]
farg += '&'
else:
if arg[3] == '&':
farg += '&'
farg += arg[0] + ('' if default_value == '' else ' = ' + default_value)
fargs.append((farg, arg[2] != ''))
if self.first_named_only_argument is not None:
fargs.insert(self.first_named_only_argument, ("'", fargs[self.first_named_only_argument][1]))
if len(self.function_arguments) and self.function_arguments[0][0] == 'self' and type(self.parent) == ASTClassDefinition:
fargs.pop(0)
fargs_str = ''
if len(fargs):
fargs_str = fargs[0][0]
prev_type = fargs[0][1]
for farg in fargs[1:]:
fargs_str += ('; ' if prev_type and not farg[1] else ', ') + farg[0]
prev_type = farg[1]
if self.virtual_category == self.VirtualCategory.ABSTRACT:
return ' ' * (indent*3) + 'F.virtual.abstract ' + self.function_name + '(' + fargs_str + ') -> ' + trans_type(self.function_return_type, self.scope, tokens[self.tokeni]) + "\n"
return self.children_to_str(indent, ('F', 'F.virtual.new', 'F.virtual.override', '', 'F.virtual.assign')[self.virtual_category] + '.const'*self.is_const + ' ' +
{'__init__':'', '__call__':'()', '__and__':'[&]', '__lt__':'<', '__eq__':'==', '__add__':'+', '__sub__':'-', '__mul__':'*', '__str__':'String'}.get(self.function_name, self.function_name)
+ '(' + fargs_str + ')'
+ ('' if self.function_return_type == '' else ' -> ' + trans_type(self.function_return_type, self.scope, tokens[self.tokeni])))
class ASTIf(ASTNodeWithChildren, ASTNodeWithExpression):
else_or_elif : ASTNode = None
def walk_expressions(self, f):
super().walk_expressions(f)
if self.else_or_elif is not None and isinstance(self.else_or_elif, ASTElseIf):
self.else_or_elif.walk_expressions(f)
def walk_children(self, f):
super().walk_children(f)
if self.else_or_elif is not None:
self.else_or_elif.walk_children(f)
def to_str(self, indent):
return self.children_to_str(indent, 'I ' + self.expression.to_str()) + (self.else_or_elif.to_str(indent) if self.else_or_elif is not None else '')
class ASTElse(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str(indent, 'E')
class ASTElseIf(ASTNodeWithChildren, ASTNodeWithExpression):
else_or_elif : ASTNode = None
def walk_expressions(self, f):
super().walk_expressions(f)
if self.else_or_elif is not None and isinstance(self.else_or_elif, ASTElseIf):
self.else_or_elif.walk_expressions(f)
def walk_children(self, f):
super().walk_children(f)
if self.else_or_elif is not None:
self.else_or_elif.walk_children(f)
def to_str(self, indent):
return self.children_to_str(indent, 'E I ' + self.expression.to_str()) + (self.else_or_elif.to_str(indent) if self.else_or_elif is not None else '')
class ASTSwitch(ASTNodeWithExpression):
class Case(ASTNodeWithChildren, ASTNodeWithExpression):
def __init__(self):
super().__init__()
self.tokeni = 0
cases : List[Case]
def __init__(self):
self.cases = []
def walk_children(self, f):
for case in self.cases:
f(case)
def to_str(self, indent):
r = ' ' * (indent*3) + 'S ' + self.expression.to_str() + "\n"
for case in self.cases:
r += case.children_to_str(indent + 1, 'E' if case.expression.token_str() == 'E' else case.expression.to_str())
return r
class ASTWhile(ASTNodeWithChildren, ASTNodeWithExpression):
def to_str(self, indent):
return self.children_to_str(indent, 'L' if self.expression.token.category == Token.Category.CONSTANT and self.expression.token.value(source) == 'True' else 'L ' + self.expression.to_str())
class ASTFor(ASTNodeWithChildren, ASTNodeWithExpression):
was_no_break : ASTNodeWithChildren = None
loop_variables : List[str]
os_walk = False
dir_filter = None
def walk_children(self, f):
super().walk_children(f)
if self.was_no_break is not None:
self.was_no_break.walk_children(f)
def to_str(self, indent):
if self.os_walk:
dir_filter = ''
if self.dir_filter is not None:
dir_filter = ", dir_filter' " + self.dir_filter # (
return self.children_to_str(indent, 'L(_fname) ' + self.expression.to_str()[:-1] + dir_filter + ", files_only' 0B)\n"
+ ' ' * ((indent+1)*3) + 'V ' + self.loop_variables[0] + " = fs:path:dir_name(_fname)\n"
+ ' ' * ((indent+1)*3) + '[String] ' + self.loop_variables[1] + ', ' + self.loop_variables[2] + "\n"
+ ' ' * ((indent+1)*3) + 'I fs:is_dir(_fname) {' + self.loop_variables[1] + ' [+]= fs:path:base_name(_fname)} E ' + self.loop_variables[2] + ' [+]= fs:path:base_name(_fname)')
if len(self.loop_variables) == 1:
r = 'L(' + self.loop_variables[0] + ') ' + (self.expression.children[1].to_str()
if self.expression.function_call and self.expression.children[0].token_str() == 'range' and # `L(i) 100` instead of `L(i) 0.<100`
len(self.expression.children) == 3 and self.expression.children[1].token.category == Token.Category.NUMERIC_LITERAL else self.expression.to_str())
if self.expression.token.category == Token.Category.NAME:
sid = self.expression.scope.find(self.expression.token_str())
if sid.type in ('Dict', 'DefaultDict'):
r += '.keys()'
elif self.expression.symbol.id == '(' and len(self.expression.children) == 1 and self.expression.children[0].symbol.id == '.' and len(self.expression.children[0].children) == 2 and self.expression.children[0].children[1].token_str() == 'items': # )
r = 'L(' + ', '.join(self.loop_variables) + ') ' + self.expression.children[0].children[0].to_str()
else:
r = 'L(' + ', '.join(self.loop_variables) + ') ' + self.expression.to_str()
# r = 'L(' + ''.join(self.loop_variables) + ') ' + self.expression.to_str()
# for index, loop_var in enumerate(self.loop_variables):
# r += "\n" + ' ' * ((indent+1)*3) + 'V ' + loop_var + ' = ' + ''.join(self.loop_variables) + '[' + str(index) + ']'
r = self.children_to_str(indent, r)
if self.was_no_break is not None:
r += self.was_no_break.children_to_str(indent, 'L.was_no_break')
return r
class ASTContinue(ASTNode):
def to_str(self, indent):
return ' ' * (indent*3) + "L.continue\n"
class ASTBreak(ASTNode):
def to_str(self, indent):
return ' ' * (indent*3) + "L.break\n"
class ASTReturn(ASTNodeWithExpression):
def to_str(self, indent):
return ' ' * (indent*3) + 'R' + (' ' + self.expression.to_str() if self.expression is not None else '') + "\n"
def walk_expressions(self, f):
if self.expression is not None: f(self.expression)
class ASTException(ASTNodeWithExpression):
def to_str(self, indent):
return ' ' * (indent*3) + 'X ' + self.expression.to_str() + "\n"
class ASTExceptionTry(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str(indent, 'X.try')
class ASTExceptionCatch(ASTNodeWithChildren):
exception_object_type : str
exception_object_name : str = ''
def to_str(self, indent):
return self.children_to_str(indent, 'X.catch' + (' ' + self.exception_object_type if self.exception_object_type != '' else '')
+ (' ' + self.exception_object_name if self.exception_object_name != '' else ''))
class ASTDel(ASTNodeWithExpression):
def to_str(self, indent):
assert(self.expression.slicing and len(self.expression.children) == 3)
return ' ' * (indent*3) + self.expression.children[0].to_str() + '.del(' + self.expression.children[1].to_str() + ' .< ' + self.expression.children[2].to_str() + ")\n"
class ASTClassDefinition(ASTNodeWithChildren):
base_class_name : str = None
base_class_node : 'ASTClassDefinition' = None
class_name : str
is_inout = False
def find_member_including_base_classes(self, name):
for child in self.children:
if isinstance(child, ASTTypeHint) and child.var == name:
return True
if self.base_class_node is not None:
return self.base_class_node.find_member_including_base_classes(name)
return False
def to_str(self, indent):
if self.base_class_name == 'IntEnum':
r = ' ' * (indent*3) + 'T.enum ' + self.class_name + "\n"
current_index = 0
for c in self.children:
assert(type(c) == ASTExprAssignment and c.expression.token.category == Token.Category.NUMERIC_LITERAL)
r += ' ' * ((indent+1)*3) + c.dest_expression.to_str()
if current_index != int(c.expression.token_str()):
current_index = int(c.expression.token_str())
r += ' = ' + c.expression.token_str()
current_index += 1
r += "\n"
return r
return self.children_to_str(indent, 'T ' + self.class_name + ('(' + self.base_class_name + ')' if self.base_class_name and self.base_class_name != 'Exception' else ''))
class ASTPass(ASTNode):
def to_str(self, indent):
return ' ' * ((indent-1)*3) + "{\n"\
+ ' ' * ((indent-1)*3) + "}\n"
class ASTStart(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str(indent-1, ':start:')
class Error(Exception):
def __init__(self, message, token):
self.message = message
self.pos = token.start
self.end = token.end
def next_token(): # why ‘next_token’: >[https://youtu.be/Nlqv6NtBXcA?t=1203]:‘we'll have an advance method which will fetch the next token’
global token, tokeni, tokensn
if token is None and tokeni != -1:
raise Error('no more tokens', Token(len(source), len(source), Token.Category.STATEMENT_SEPARATOR))
tokeni += 1
if tokeni == len(tokens):
token = None
tokensn = None
else:
token = tokens[tokeni]
tokensn = SymbolNode(token)
if token.category != Token.Category.INDENT:
if token.category != Token.Category.KEYWORD or token.value(source) in allowed_keywords_in_expressions:
key : str
if token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL):
key = '(literal)'
elif token.category == Token.Category.NAME:
key = '(name)'
if token.value(source) in ('V', 'C', 'I', 'E', 'F', 'L', 'N', 'R', 'S', 'T', 'X', 'var', 'fn', 'loop', 'null', 'switch', 'type', 'exception', 'sign'):
tokensn.token_str_override = '_' + token.value(source).lower() + '_'
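# Python names that collide with 11l keywords are renamed, e.g. a variable named `sign` is emitted as `_sign_`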
elif token.category == Token.Category.CONSTANT:
key = '(constant)'
elif token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT):
key = ';'
else:
key = token.value(source)
tokensn.symbol = symbol_table[key]
def advance(value):
if token.value(source) != value:
raise Error('expected `' + value + '`', token)
next_token()
def peek_token(how_much = 1):
return tokens[tokeni+how_much] if tokeni+how_much < len(tokens) else Token()
# This implementation is based on [http://svn.effbot.org/public/stuff/sandbox/topdown/tdop-4.py]
def expression(rbp = 0):
def check_tokensn():
if tokensn.symbol is None:
raise Error('no symbol corresponding to token `' + token.value(source) + '` (belonging to ' + str(token.category) +') found while parsing expression', token)
check_tokensn()
t = tokensn
next_token()
check_tokensn()
left = t.symbol.nud(t)
while rbp < tokensn.symbol.lbp:
t = tokensn
next_token()
left = t.symbol.led(t, left)
check_tokensn()
return left
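# Classic top-down operator precedence parsing: `nud` handles a token in prefix/standalone position,
# and `led` extends `left` for as long as the upcoming token binds more tightly than rbp.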
def infix(id, bp):
def led(self, left):
self.append_child(left)
self.append_child(expression(self.symbol.led_bp))
return self
symbol(id, bp).set_led_bp(bp, led)
def infix_r(id, bp):
def led(self, left):
self.append_child(left)
self.append_child(expression(self.symbol.led_bp - 1))
return self
symbol(id, bp).set_led_bp(bp, led)
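# infix_r is right-associative: recursing with `led_bp - 1` lets an operator of the same precedence
# on the right bind first, so `a ** b ** c` parses as `a ** (b ** c)`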
def prefix(id, bp):
def nud(self):
self.append_child(expression(self.symbol.nud_bp))
return self
symbol(id).set_nud_bp(bp, nud)
symbol("lambda", 20)
symbol("if", 20); symbol("else") # ternary form
infix_r("or", 30); infix_r("and", 40); prefix("not", 50)
infix("in", 60); infix("not", 60) # not in
infix("is", 60);
infix("<", 60); infix("<=", 60)
infix(">", 60); infix(">=", 60)
infix("<>", 60); infix("!=", 60); infix("==", 60)
infix("|", 70); infix("^", 80); infix("&", 90)
infix("<<", 100); infix(">>", 100)
infix("+", 110); infix("-", 110)
infix("*", 120); infix("/", 120); infix("//", 120)
infix("%", 120)
prefix("-", 130); prefix("+", 130); prefix("~", 130)
infix_r("**", 140)
symbol(".", 150); symbol("[", 150); symbol("(", 150); symbol(")"); symbol("]")
infix_r('+=', 10); infix_r('-=', 10); infix_r('*=', 10); infix_r('/=', 10); infix_r('//=', 10); infix_r('%=', 10); infix_r('>>=', 10); infix_r('<<=', 10); infix_r('**=', 10); infix_r('|=', 10); infix_r('^=', 10); infix_r('&=', 10)
symbol("(name)").nud = lambda self: self
symbol("(literal)").nud = lambda self: self
symbol('(constant)').nud = lambda self: self
#symbol("(end)")
symbol(';')
symbol(',')
def led(self, left):
if token.category != Token.Category.NAME:
raise Error('expected an attribute name', token)
self.append_child(left)
self.append_child(tokensn)
next_token()
return self
symbol('.').led = led
def led(self, left):
self.function_call = True
self.append_child(left) # (
if token.value(source) != ')':
while True:
if token.value(source) == '*': # >[https://stackoverflow.com/a/19525681/2692494 <- google:‘python iterable unpacking precedence’]:‘The unpacking `*` is not an operator; it's part of the call syntax.’
if len(self.children) != 1:
                    raise Error('iterable unpacking is supported only in the first argument', token)
if not (left.token.category == Token.Category.NAME and left.token_str() == 'print'):
raise Error('iterable unpacking is supported only for `print()` function', token)
self.iterable_unpacking = True
next_token()
self.append_child(expression())
if token.value(source) == '=':
next_token()
self.append_child(expression())
else:
self.children.append(None)
if token.value(source) != ',':
break
advance(',') # (
advance(')')
return self
symbol('(').led = led
def nud(self):
comma = False # ((
if token.value(source) != ')':
while True:
if token.value(source) == ')':
break
self.append_child(expression())
if token.value(source) != ',':
break
comma = True
advance(',')
advance(')')
if len(self.children) == 0 or comma:
self.tuple = True
return self
symbol('(').nud = nud # )
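# Note: the `comma` flag above is what tells a parenthesized expression apart
# from a tuple, mirroring Python's own syntax: `(x)` stays a plain expression,
# while `(x,)` and `()` set `self.tuple = True`.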
def led(self, left):
self.append_child(left)
if token.value(source) == ':':
self.slicing = True
self.children.append(None)
next_token() # [
if token.value(source) != ']': # for `arr[:]`
if token.value(source) == ':':
self.children.append(None)
next_token()
self.append_child(expression())
else:
self.append_child(expression())
if token.value(source) == ':':
next_token()
self.append_child(expression())
else:
self.append_child(expression())
if token.value(source) == ':':
self.slicing = True
next_token() # [[
if token.value(source) != ']':
if token.value(source) == ':':
self.children.append(None)
next_token()
self.append_child(expression())
else:
self.append_child(expression())
if token.value(source) == ':':
next_token()
self.append_child(expression())
else:
self.children.append(None)
advance(']')
return self
symbol('[').led = led
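# Note on the children layout produced by the subscripting/slicing `led` above
# (derived by tracing the code, so treat it as a sketch): omitted leading slice
# parts are stored as `None`, omitted trailing parts are simply absent, and the
# `slicing` flag tells `arr[x:]` apart from plain indexing `arr[x]`:
#   arr[i]     -> [arr, i]              (slicing == False)
#   arr[1:]    -> [arr, 1]              (slicing == True)
#   arr[:5]    -> [arr, None, 5]
#   arr[1:5:2] -> [arr, 1, 5, 2]
#   arr[::2]   -> [arr, None, None, 2]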
def nud(self):
self.is_list = True
while True: # [
if token.value(source) == ']':
break
self.append_child(expression())
if token.value(source) != ',':
break
advance(',')
advance(']')
return self
symbol('[').nud = nud # ]
def nud(self): # {{{{
if token.value(source) != '}':
while True:
if token.value(source) == '}':
break
self.append_child(expression())
if token.value(source) != ':':
self.is_set = True
while True:
if token.value(source) != ',':
break
advance(',')
if token.value(source) == '}':
break
self.append_child(expression())
break
advance(':')
self.append_child(expression())
if self.children[-1].symbol.id == 'for':
for_scope = self.children[-1].children[0].scope
def set_scope_recursive(sn):
assert(sn.scope == scope)
sn.scope = for_scope
for child in sn.children:
if child is not None:
set_scope_recursive(child)
set_scope_recursive(self.children[0])
break
if token.value(source) != ',':
break
advance(',')
advance('}')
return self
symbol('{').nud = nud
symbol('}')
def led(self, left):
self.append_child(left)
self.append_child(expression())
advance('else')
self.append_child(expression())
return self
symbol('if').led = led
symbol(':'); symbol('='); symbol('->')
def nud(self):
global scope
prev_scope = scope
scope = Scope([])
scope.is_lambda_or_for = True
scope.parent = prev_scope
if token.value(source) != ':':
while True:
if token.category != Token.Category.NAME:
raise Error('expected an argument name', token)
tokensn.scope = scope
scope.add_var(tokensn.token_str())
self.append_child(tokensn)
next_token()
if token.value(source) == '=':
next_token()
self.append_child(expression())
else:
self.children.append(None)
if token.value(source) != ',':
break
advance(',')
advance(':')
self.append_child(expression())
scope = prev_scope
return self
symbol('lambda').nud = nud
def led(self, left):
global scope
prev_scope = scope
scope = for_scope = Scope([])
scope.is_lambda_or_for = True
scope.parent = prev_scope
def set_scope_recursive(sn):
if sn.scope == prev_scope:
sn.scope = scope
elif sn.scope.parent == prev_scope: # for nested list comprehensions
sn.scope.parent = scope
else: # this `sn.scope` was already processed
assert(sn.scope.parent == scope)
for child in sn.children:
if child is not None:
set_scope_recursive(child)
set_scope_recursive(left)
tokensn.scope = scope
scope.add_var(tokensn.token_str())
self.append_child(left)
self.append_child(tokensn)
next_token()
if token.value(source) == ',':
sn = SymbolNode(Token(token.start, token.start, Token.Category.OPERATOR_OR_DELIMITER))
sn.symbol = symbol_table['('] # )
sn.tuple = True
sn.append_child(self.children.pop())
self.append_child(sn)
next_token()
scope.add_var(tokensn.token_str())
sn.append_child(tokensn)
next_token()
if token.value(source) == ',':
next_token()
scope.add_var(tokensn.token_str())
sn.append_child(tokensn)
next_token()
scope = prev_scope
advance('in')
if_lbp = symbol('if').lbp
symbol('if').lbp = 0
self.append_child(expression())
symbol('if').lbp = if_lbp
if token.value(source) == 'if':
scope = for_scope
next_token()
self.append_child(expression())
scope = prev_scope
if self.children[2].token_str() == 'for': # this is a multiloop
for_scope.add_var(self.children[2].children[1].token_str())
def set_scope_recursive(sn):
sn.scope = scope
for child in sn.children:
if child is not None:
set_scope_recursive(child)
set_scope_recursive(self.children[2].children[0])
def set_for_scope_recursive(sn):
sn.scope = for_scope
for child in sn.children:
if child is not None:
set_for_scope_recursive(child)
if self.children[2].children[2].token_str() == 'for': # this is a multiloop3
for_scope.add_var(self.children[2].children[2].children[1].token_str())
if len(self.children[2].children[2].children) == 4:
set_for_scope_recursive(self.children[2].children[2].children[3])
else:
if len(self.children[2].children) == 4:
set_for_scope_recursive(self.children[2].children[3])
return self
symbol('for', 20).led = led
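# Note: for a comprehension like `[f(x) for x in xs if p(x)]` the `for` node
# built above holds children [f(x), x, xs, p(x)] (the last one only when an
# `if` part is present), and a fresh Scope marked `is_lambda_or_for` is
# spliced in so that the loop variable resolves inside the expression to the
# left of `for`.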
# multitoken operators
def led(self, left):
if token.value(source) != 'in':
raise Error('invalid syntax', token)
next_token()
self.append_child(left)
self.append_child(expression(60))
return self
symbol('not').led = led
def led(self, left):
if token.value(source) == 'not':
next_token()
self.is_not = True
self.append_child(left)
self.append_child(expression(60))
return self
symbol('is').led = led
def parse_internal(this_node, one_line_scope = False):
global token
def new_scope(node, func_args = None):
if token.value(source) != ':':
raise Error('expected `:`', Token(tokens[tokeni-1].end, tokens[tokeni-1].end, tokens[tokeni-1].category))
next_token()
global scope
prev_scope = scope
scope = Scope(func_args)
scope.parent = prev_scope
if token.category != Token.Category.INDENT: # handling of `if ...: break`, `def ...(...): return ...`, etc.
if one_line_scope:
raise Error('unexpected `:` (only one `:` in one line is allowed)', tokens[tokeni-1])
            tokensn.scope = scope # for `if ...: new_var = ...` (though such code has no real application, this line is needed for correct error message output)
parse_internal(node, True)
else:
next_token()
parse_internal(node)
scope = prev_scope
if token is not None:
tokensn.scope = scope
def expected(ch):
if token.value(source) != ch:
raise Error('expected `'+ch+'`', token)
next_token()
def expected_name(what_name):
next_token()
if token.category != Token.Category.NAME:
raise Error('expected ' + what_name, token)
token_value = tokensn.token_str()
next_token()
return token_value
def check_vars_defined(sn : SymbolNode):
if sn.token.category == Token.Category.NAME:
if sn.parent is None or sn.parent.symbol.id != '.' or sn is sn.parent.children[0]: # in `a.b` only `a` [first child] is checked
if not sn.skip_find_and_get_prefix:
sn.scope_prefix = sn.scope.find_and_get_prefix(sn.token_str(), sn.token)
else:
if sn.function_call:
check_vars_defined(sn.children[0])
for i in range(1, len(sn.children), 2):
if sn.children[i+1] is None:
check_vars_defined(sn.children[i])
else:
check_vars_defined(sn.children[i+1]) # checking of named arguments (sn.children[i]) is skipped
else:
for child in sn.children:
if child is not None:
check_vars_defined(child)
while token is not None:
if token.category == Token.Category.KEYWORD:
global scope
if token.value(source) == 'import':
if type(this_node) != ASTProgram:
raise Error('only global import statements are supported', token)
node = ASTImport()
next_token()
while True:
if token.category != Token.Category.NAME:
raise Error('expected module name', token)
module_name = token.value(source)
while peek_token().value(source) == '.':
next_token()
next_token()
if token.category != Token.Category.NAME:
raise Error('expected module name', token)
module_name += '.' + token.value(source)
node.modules.append(module_name)
# Process module [transpile it if necessary]
if module_name not in ('sys', 'tempfile', 'os', 'time', 'datetime', 'math', 'cmath', 're', 'random', 'collections', 'heapq', 'itertools', 'eldf'):
if this_node.imported_modules is not None:
this_node.imported_modules.append(module_name)
module_file_name = os.path.join(os.path.dirname(file_name), module_name.replace('.', '/')).replace('\\', '/') # `os.path.join()` is needed for case when `os.path.dirname(file_name)` is empty string, `replace('\\', '/')` is needed for passing 'tests/parser/errors.txt'
try:
modulefstat = os.stat(module_file_name + '.py')
except FileNotFoundError:
raise Error('can not import module `' + module_name + "`: file '" + module_file_name + ".py' is not found", token)
_11l_file_mtime = 0
if os.path.isfile(module_file_name + '.11l'):
_11l_file_mtime = os.stat(module_file_name + '.11l').st_mtime
modified = _11l_file_mtime == 0 \
or modulefstat.st_mtime > _11l_file_mtime \
or os.stat(__file__).st_mtime > _11l_file_mtime \
or os.stat(os.path.dirname(__file__) + '/tokenizer.py').st_mtime > _11l_file_mtime \
or not os.path.isfile(module_file_name + '.py_global_scope')
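                        # Note: the module is considered stale (and is re-transpiled below)
                        # when any of the following is newer than the cached `.11l` output:
                        # the module's `.py` source, this transpiler itself, its tokenizer;
                        # or when the cached `.py_global_scope` file is missing. Dependent
                        # modules are checked right after.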
if not modified: # check for dependent modules modifications
py_global_scope = eldf.parse(open(module_file_name + '.py_global_scope', encoding = 'utf-8-sig').read())
py_imported_modules = py_global_scope['Imported modules']
for m in py_imported_modules:
if os.stat(os.path.join(os.path.dirname(module_file_name), m.replace('.', '/') + '.py')).st_mtime > _11l_file_mtime:
modified = True
break
if modified:
module_source = open(module_file_name + '.py', encoding = 'utf-8-sig').read()
imported_modules = []
prev_scope = scope
s = parse_and_to_str(tokenizer.tokenize(module_source), module_source, module_file_name + '.py', imported_modules)
modules[module_name] = Module(scope)
open(module_file_name + '.11l', 'w', encoding = 'utf-8', newline = "\n").write(s)
open(module_file_name + '.py_global_scope', 'w', encoding = 'utf-8', newline = "\n").write(eldf.to_eldf(scope.serialize_to_dict(imported_modules)))
scope = prev_scope
if this_node.imported_modules is not None:
this_node.imported_modules.extend(imported_modules)
else:
module_scope = Scope(None)
module_scope.deserialize_from_dict(py_global_scope)
modules[module_name] = Module(module_scope)
if this_node.imported_modules is not None:
this_node.imported_modules.extend(py_imported_modules)
if '.' in module_name:
scope.add_var(module_name.split('.')[0], True, '(Module)')
scope.add_var(module_name, True, '(Module)')
next_token()
if token.value(source) != ',':
break
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'from':
next_token()
assert(token.value(source) in ('typing', 'functools', 'itertools', 'enum', 'copy'))
next_token()
advance('import')
while True:
if token.category != Token.Category.NAME:
raise Error('expected name', token)
next_token()
if token.value(source) != ',':
break
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
continue
elif token.value(source) == 'def':
node = ASTFunctionDefinition()
node.function_name = expected_name('function name')
scope.add_var(node.function_name, True, node = node)
if token.value(source) != '(': # )
raise Error('expected `(` after function name', token) # )(
next_token()
was_default_argument = False
def advance_type():
type_ = token.value(source)
next_token()
if token.value(source) == '[': # ]
nesting_level = 0
while True:
type_ += token.value(source)
if token.value(source) == '[':
next_token()
nesting_level += 1
elif token.value(source) == ']':
next_token()
nesting_level -= 1
if nesting_level == 0:
break
elif token.value(source) == ',':
type_ += ' '
next_token()
else:
if token.category != Token.Category.NAME:
raise Error('expected subtype name', token)
next_token()
return type_
while token.value(source) != ')':
if token.value(source) == '*':
assert(node.first_named_only_argument is None)
node.first_named_only_argument = len(node.function_arguments)
next_token()
advance(',')
continue
if token.category != Token.Category.NAME:
raise Error('expected function\'s argument name', token)
func_arg_name = tokensn.token_str()
next_token()
type_ = ''
qualifier = ''
if token.value(source) == ':': # this is a type hint
next_token()
if token.category == Token.Category.STRING_LITERAL:
type_ = token.value(source)[1:-1]
if token.value(source)[0] == '"': # `def insert(i, n : "Node"):` -> `F insert(i, Node &n)`
qualifier = '&'
next_token()
else:
type_ = advance_type()
if type_ == 'list':
type_ = ''
qualifier = '&'
if token.value(source) == '=':
next_token()
expr = expression()
check_vars_defined(expr)
default = expr.to_str()
was_default_argument = True
else:
if was_default_argument and node.first_named_only_argument is None:
raise Error('non-default argument follows default argument', tokens[tokeni-1])
default = ''
node.function_arguments.append((func_arg_name, default, type_, qualifier)) # ((
if token.value(source) not in ',)':
raise Error('expected `,` or `)` in function\'s arguments list', token)
if token.value(source) == ',':
next_token()
next_token()
if token.value(source) == '->':
next_token()
if token.value(source) == 'None':
node.function_return_type = 'None'
next_token()
else:
node.function_return_type = advance_type()
if source[token.end:token.end+7] == ' # -> &':
node.function_return_type += '&'
elif source[token.end:token.end+8] == ' # const':
node.is_const = True
node.parent = this_node
new_scope(node, map(lambda arg: (arg[0], arg[2]), node.function_arguments))
if len(node.children) == 0: # needed for:
n = ASTPass() # class FileToStringProxy:
n.parent = node # def __init__(self):
node.children.append(n) # self.result = []
# Detect virtual functions and assign `virtual_category`
if type(this_node) == ASTClassDefinition and node.function_name != '__init__':
if this_node.base_class_node is not None:
for child in this_node.base_class_node.children:
if type(child) == ASTFunctionDefinition and child.function_name == node.function_name:
if child.virtual_category == ASTFunctionDefinition.VirtualCategory.NO:
if child.function_return_type == '':
raise Error('please specify return type of virtual function', tokens[child.tokeni])
if len(child.children) and type(child.children[0]) == ASTException and child.children[0].expression.symbol.id == '(' and child.children[0].expression.children[0].token.value(source) == 'NotImplementedError': # )
child.virtual_category = ASTFunctionDefinition.VirtualCategory.ABSTRACT
else:
child.virtual_category = ASTFunctionDefinition.VirtualCategory.NEW
node.virtual_category = ASTFunctionDefinition.VirtualCategory.ASSIGN if child.virtual_category == ASTFunctionDefinition.VirtualCategory.ABSTRACT else ASTFunctionDefinition.VirtualCategory.OVERRIDE
                                if node.function_return_type == '': # specifying the return type of overridden virtual functions is not necessary: it can be taken from the original virtual function definition
node.function_return_type = child.function_return_type
break
elif token.value(source) == 'class':
node = ASTClassDefinition()
node.class_name = expected_name('class name')
scope.add_var(node.class_name, True, '(Class)', node = node)
if token.value(source) == '(':
node.base_class_name = expected_name('base class name')
if node.base_class_name != 'Exception':
base_class = scope.find(node.base_class_name)
if base_class is None:
raise Error('class `' + node.base_class_name + '` is not defined', tokens[tokeni-1])
if base_class.type != '(Class)':
raise Error('expected a class name', tokens[tokeni-1])
assert(type(base_class.node) == ASTClassDefinition)
node.base_class_node = base_class.node
expected(')')
if source[token.end:token.end+4] == ' # &':
node.is_inout = True
new_scope(node)
elif token.value(source) == 'pass':
node = ASTPass()
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'if':
if peek_token().value(source) == '__name__':
node = ASTStart()
next_token()
next_token()
assert(token.value(source) == '==')
next_token()
assert(token.value(source) in ("'__main__'", '"__main__"'))
next_token()
new_scope(node)
else:
node = ASTIf()
next_token()
node.set_expression(expression())
new_scope(node)
n = node
while token is not None and token.value(source) in ('elif', 'else'):
if token.value(source) == 'elif':
n.else_or_elif = ASTElseIf()
n.else_or_elif.parent = n
n = n.else_or_elif
next_token()
n.set_expression(expression())
new_scope(n)
if token is not None and token.value(source) == 'else':
n.else_or_elif = ASTElse()
n.else_or_elif.parent = n
next_token()
new_scope(n.else_or_elif)
break
elif token.value(source) == 'while':
node = ASTWhile()
next_token()
node.set_expression(expression())
if node.expression.token.category in (Token.Category.CONSTANT, Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL) and node.expression.token.value(source) != 'True':
raise Error('do you mean `while True`?', node.expression.token) # forbid `while 1:`
new_scope(node)
elif token.value(source) == 'for':
node = ASTFor()
next_token()
prev_scope = scope
scope = Scope(None)
scope.parent = prev_scope
node.loop_variables = [tokensn.token_str()]
scope.add_var(node.loop_variables[0], True)
next_token()
while token.value(source) == ',':
next_token()
node.loop_variables.append(tokensn.token_str())
scope.add_var(tokensn.token_str(), True)
next_token()
advance('in')
node.set_expression(expression())
new_scope(node)
scope = prev_scope
if token is not None and token.value(source) == 'else':
node.was_no_break = ASTNodeWithChildren()
node.was_no_break.parent = node
next_token()
new_scope(node.was_no_break)
elif token.value(source) == 'continue':
node = ASTContinue()
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'break':
node = ASTBreak()
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'return':
node = ASTReturn()
next_token()
if token.category in (Token.Category.DEDENT, Token.Category.STATEMENT_SEPARATOR):
node.expression = None
else:
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('nonlocal', 'global'):
nonlocal_or_global = token.value(source)
next_token()
while True:
if token.category != Token.Category.NAME:
raise Error('expected ' + nonlocal_or_global + ' variable name', token)
if nonlocal_or_global == 'nonlocal':
if source[token.end + 1 : token.end + 5] == "# =\n":
scope.nonlocals_copy.add(token.value(source))
else:
scope.nonlocals.add(token.value(source))
else:
scope.globals.add(token.value(source))
next_token()
if token.value(source) == ',':
next_token()
else:
break
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
continue
elif token.value(source) == 'assert':
node = ASTAssert()
next_token()
node.set_expression(expression())
if token.value(source) == ',':
next_token()
node.set_expression2(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'raise':
node = ASTException()
next_token()
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) == 'try':
node = ASTExceptionTry()
next_token()
new_scope(node)
elif token.value(source) == 'except':
node = ASTExceptionCatch()
prev_scope = scope
scope = Scope(None)
scope.parent = prev_scope
if peek_token().value(source) != ':':
node.exception_object_type = expected_name('exception object type name')
while token.value(source) == '.':
node.exception_object_type += ':' + expected_name('type name')
if node.exception_object_type.startswith('self:'):
node.exception_object_type = '.' + node.exception_object_type[5:]
if token.value(source) != ':':
advance('as')
if token.category != Token.Category.NAME:
raise Error('expected exception object name', token)
node.exception_object_name = tokensn.token_str()
scope.add_var(node.exception_object_name, True)
next_token()
else:
next_token()
node.exception_object_type = ''
new_scope(node)
scope = prev_scope
elif token.value(source) == 'del':
node = ASTDel()
next_token()
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
else:
raise Error('unrecognized statement started with keyword', token)
elif token.category == Token.Category.NAME and peek_token().value(source) == '=':
name_token = token
name_token_str = tokensn.token_str()
node = ASTExprAssignment()
node.set_dest_expression(tokensn)
next_token()
next_token()
node.set_expression(expression())
if node.expression.symbol.id == '.' and len(node.expression.children) == 2 and node.expression.children[1].token_str().isupper(): # replace `category = Token.Category.NAME` with `category = NAME`
node.set_expression(node.expression.children[1])
node.expression.parent = None
                node.expression.skip_find_and_get_prefix = True # this cannot be replaced with an `isupper()` check before the `find_and_get_prefix()` call because that would conflict with uppercase [constant] variables like `WIDTH` or `HEIGHT` (such variables would then not be checked, but they should be)
type_name = ''
if node.expression.token.category == Token.Category.STRING_LITERAL or (node.expression.function_call and node.expression.children[0].token_str() == 'str') \
or (node.expression.symbol.id == '+' and len(node.expression.children) == 2 and (node.expression.children[0].token.category == Token.Category.STRING_LITERAL
or node.expression.children[1].token.category == Token.Category.STRING_LITERAL)):
type_name = 'str'
elif node.expression.var_type() == 'List':
type_name = 'List'
elif node.expression.is_dict():
type_name = 'Dict'
elif node.expression.function_call and node.expression.children[0].symbol.id == '.' and \
node.expression.children[0].children[0].token_str() == 'collections' and \
node.expression.children[0].children[1].token_str() == 'defaultdict':
type_name = 'DefaultDict'
node.add_vars = [scope.add_var(name_token_str, False, type_name, name_token)]
if node.expression.symbol.id == '[' and len(node.expression.children) == 0: # ]
if node.add_vars[0]:
raise Error('please specify type of empty list', Token(node.dest_expression.token.start, node.expression.token.end + 1, Token.Category.NAME))
node.drop_list = True
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)): # `poss_nbors = (x-1,y),(x-1,y+1)`
raise Error('expected end of statement', token) # ^
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
if ((node.dest_expression.token_str() == 'Char' and node.expression.token_str() == 'str') # skip `Char = str` statement
or (node.dest_expression.token_str() == 'Byte' and node.expression.token_str() == 'int') # skip `Byte = int` statement
or (node.dest_expression.token_str() == 'Int64' and node.expression.token_str() == 'int') # skip `Int64 = int` statement
or (node.dest_expression.token_str() == 'UInt64' and node.expression.token_str() == 'int') # skip `UInt64 = int` statement
or (node.dest_expression.token_str() == 'UInt32' and node.expression.token_str() == 'int')): # skip `UInt32 = int` statement
continue
elif token.category == Token.Category.NAME and (peek_token().value(source) == ':' # this is type hint
or (token.value(source) == 'self' and peek_token().value(source) == '.' and peek_token(2).category == Token.Category.NAME)
and peek_token(3).value(source) == ':'):
is_self = peek_token().value(source) == '.'
if is_self:
if not (type(this_node) == ASTFunctionDefinition and this_node.function_name == '__init__'):
raise Error('type annotation for `self.*` is permitted only inside `__init__`', token)
next_token()
next_token()
name_token = token
var = tokensn.token_str()
next_token()
advance(':')
if token.category not in (Token.Category.NAME, Token.Category.STRING_LITERAL):
raise Error('expected type name', token)
type_ = token.value(source) if token.category == Token.Category.NAME else token.value(source)[1:-1]
type_token = token
next_token()
while token.value(source) == '.': # for `category : Token.Category`
type_ += '.' + expected_name('type name')
if is_self:
scope.parent.add_var(var, True, type_, name_token)
else:
scope.add_var(var, True, type_, name_token)
type_args = []
if token.value(source) == '[':
next_token()
while token.value(source) != ']':
if token.value(source) == '[': # for `Callable[[str, int], str]`
next_token()
if token.value(source) == ']': # for `Callable[[], str]`
type_arg = ''
else:
type_arg = token.value(source)
next_token()
while token.value(source) == ',':
next_token()
type_arg += ',' + token.value(source)
next_token() # [
advance(']')
type_args.append(type_arg)
elif peek_token().value(source) == '[': # ] # for `table : List[List[List[str]]] = []` and `empty_list : List[List[str]] = []`
type_arg = token.value(source)
next_token()
nesting_level = 0
while True:
type_arg += token.value(source)
if token.value(source) == '[':
next_token()
nesting_level += 1
elif token.value(source) == ']':
next_token()
nesting_level -= 1
if nesting_level == 0:
break
elif token.value(source) == ',':
type_arg += ' '
next_token()
else:
assert(token.category == Token.Category.NAME)
next_token()
type_args.append(type_arg)
else:
type_args.append(token.value(source))
next_token()
while token.value(source) == '.': # for `datetime.date` in `dates : List[datetime.date] = []`
type_args[-1] += '.' + expected_name('subtype name') # [[
if token.value(source) not in ',]':
raise Error('expected `,` or `]` in type\'s arguments list', token)
if token.value(source) == ',':
next_token()
next_token()
if token is not None and token.value(source) == '=':
node = ASTAssignmentWithTypeHint()
next_token()
node.set_expression(expression())
else:
node = ASTTypeHint()
if source[tokens[tokeni-1].end:tokens[tokeni-1].end+4] == ' # &':
node.is_reference = True
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)):
raise Error('expected end of statement', token)
node.type_token = type_token
node.var = var
node.type = type_
node.type_args = type_args
assert(token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)) # [-replace with `raise Error` with meaningful error message after first precedent of triggering this assert-]
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
if is_self:
node.parent = this_node.parent
this_node.parent.children.append(node)
node.walk_expressions(check_vars_defined)
continue
elif token.category == Token.Category.DEDENT:
next_token()
if token.category == Token.Category.STATEMENT_SEPARATOR: # Token.Category.EOF
next_token()
assert(token is None)
return
else:
node_expression = expression()
if token is not None and token.value(source) == '=':
node = ASTExprAssignment()
if node_expression.token.category == Token.Category.NAME:
assert(False) #node.add_var = scope.add_var(node_expression.token.value(source))
if node_expression.tuple:
node.add_vars = []
for v in node_expression.children:
if v.token.category != Token.Category.NAME:
node.is_tuple_assign_expression = True
break
node.add_vars.append(scope.add_var(v.token_str()))
else:
node.add_vars = [False]
node.set_dest_expression(node_expression)
next_token()
while True:
expr = expression()
if token is not None and token.value(source) == '=':
expr.ast_parent = node
node.additional_dest_expressions.append(expr)
next_token()
else:
node.set_expression(expr)
break
else:
node = ASTExpression()
node.set_expression(node_expression)
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)):
raise Error('expected end of statement', token)
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.DEDENT)): # `(w, h) = int(w1), int(h1)`
raise Error('expected end of statement', token) # ^
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
if (type(node) == ASTExprAssignment and node_expression.token_str() == '.' and node_expression.children[0].token_str() == 'self'
and type(this_node) == ASTFunctionDefinition and this_node.function_name == '__init__'): # only in constructors
assert(type(this_node.parent) == ASTClassDefinition)
found_in_base_class = False
if this_node.parent.base_class_node is not None:
found_in_base_class = this_node.parent.base_class_node.find_member_including_base_classes(node_expression.children[1].token_str())
if not found_in_base_class and scope.parent.add_var(node_expression.children[1].token_str()):
if node.expression.symbol.id == '[' and len(node.expression.children) == 0: # ]
raise Error('please specify type of empty list', Token(node.dest_expression.leftmost(), node.expression.rightmost(), Token.Category.NAME))
node.add_vars = [True]
node.set_dest_expression(node_expression.children[1])
node.parent = this_node.parent
this_node.parent.children.append(node)
node.walk_expressions(check_vars_defined)
continue
elif ((node.expression.symbol.id == '[' and len(node.expression.children) == 0) # ] # skip `self.* = []` because `create_array({})` is meaningless
or (node.expression.symbol.id == '(' and len(node.expression.children) == 1 and node.expression.children[0].token_str() == 'set')): # ) # skip `self.* = set()`
continue
node.walk_expressions(check_vars_defined)
node.parent = this_node
this_node.children.append(node)
if one_line_scope and tokens[tokeni-1].value(source) != ';':
return
return
tokens = []
source = ''
tokeni = -1
token = Token(0, 0, Token.Category.STATEMENT_SEPARATOR)
scope = Scope(None)
tokensn = SymbolNode(token)
file_name = ''
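# Note: parsing is driven through module-level state (`tokens`, `source`,
# `token`, `scope`, ...), so `parse_and_to_str()` below saves and restores all
# of it around the recursive call that transpiles imported modules (see the
# `import` handling in `parse_internal()` above); this is what makes the
# recursion safe despite the globals.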
def parse_and_to_str(tokens_, source_, file_name_, imported_modules = None):
if len(tokens_) == 0: return ASTProgram().to_str()
global tokens, source, tokeni, token, scope, tokensn, file_name
prev_tokens = tokens
prev_source = source
prev_tokeni = tokeni
prev_token = token
# prev_scope = scope
prev_tokensn = tokensn
prev_file_name = file_name
tokens = tokens_ + [Token(len(source_), len(source_), Token.Category.STATEMENT_SEPARATOR)]
source = source_
tokeni = -1
token = None
scope = Scope(None)
for pytype in python_types_to_11l:
scope.add_var(pytype)
scope.add_var('IntEnum', True, '(Class)', node = ASTClassDefinition())
file_name = file_name_
next_token()
p = ASTProgram()
p.imported_modules = imported_modules
parse_internal(p)
def check_for_and_or(node):
def f(e : SymbolNode):
if e.symbol.id == 'or' and \
(e.children[0].symbol.id == 'and' or e.children[1].symbol.id == 'and'):
if e.children[0].symbol.id == 'and':
start = e.children[0].children[0].leftmost()
end = e.children[1].rightmost()
midend = e.children[0].children[1].rightmost()
midstart = e.children[0].children[1].leftmost()
else:
start = e.children[0].leftmost()
end = e.children[1].children[1].rightmost()
midend = e.children[1].children[0].rightmost()
midstart = e.children[1].children[0].leftmost()
raise Error("relative precedence of operators `and` and `or` is undetermined; please add parentheses this way:\n`("
+ source[start:midend ] + ')' + source[midend :end] + "`\nor this way:\n`"
+ source[start:midstart] + '(' + source[midstart:end] + ')`', Token(start, end, Token.Category.OPERATOR_OR_DELIMITER))
for child in e.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(check_for_and_or)
check_for_and_or(p)
def transformations(node):
if isinstance(node, ASTNodeWithChildren):
index = 0
while index < len(node.children):
child = node.children[index]
if index < len(node.children) - 1 and type(child) == ASTExprAssignment and child.dest_expression.token.category == Token.Category.NAME and type(node.children[index+1]) == ASTIf and type(node.children[index+1].else_or_elif) == ASTElseIf: # transform if-elif-else chain into switch
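                    # Note: the pattern matched here is, roughly, an assignment
                    # `c = <expr>` immediately followed by a chain
                    # `if c == <literal>: ... elif c == <literal>: ... [else: ...]`
                    # where every branch compares the same variable against a string or
                    # numeric literal; such a chain is rebuilt as an ASTSwitch below, and
                    # the temporary variable is dropped when nothing else references it.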
if_node = node.children[index+1]
var_name = child.dest_expression.token.value(source)
transformation_possible = True
while True:
if not (if_node.expression.symbol.id == '==' and if_node.expression.children[0].token.category == Token.Category.NAME and if_node.expression.children[0].token.value(source) == var_name
and if_node.expression.children[1].token.category in (Token.Category.STRING_LITERAL, Token.Category.NUMERIC_LITERAL)):
transformation_possible = False
break
if_node = if_node.else_or_elif
if if_node is None or type(if_node) == ASTElse:
break
if transformation_possible:
tid = child.dest_expression.scope.find(var_name)
assert(tid is not None)
found_reference_to_var_name = False
def find_reference_to_var_name(node):
def f(e : SymbolNode):
if e.token.category == Token.Category.NAME and e.token_str() == var_name and id(e.scope.find(var_name)) == id(tid):
nonlocal found_reference_to_var_name
found_reference_to_var_name = True
return
for child in e.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(find_reference_to_var_name)
if_node = node.children[index+1]
while True:
if_node.walk_children(find_reference_to_var_name) # looking for switch variable inside switch statements
if found_reference_to_var_name:
break
if type(if_node) == ASTElse:
break
if_node = if_node.else_or_elif
if if_node is None:
break
if not found_reference_to_var_name:
i = index + 2
while i < len(node.children):
find_reference_to_var_name(node.children[i]) # looking for switch variable after switch
if found_reference_to_var_name:
break
i += 1
switch_node = ASTSwitch()
switch_node.set_expression(child.dest_expression if found_reference_to_var_name else child.expression)
if_node = node.children[index+1]
while True:
case = ASTSwitch.Case()
case.parent = switch_node
case.set_expression(SymbolNode(Token(0, 0, Token.Category.KEYWORD), 'E') if type(if_node) == ASTElse else if_node.expression.children[1])
case.children = if_node.children
for child in case.children:
child.parent = case
switch_node.cases.append(case)
if type(if_node) == ASTElse:
break
if_node = if_node.else_or_elif
if if_node is None:
break
if found_reference_to_var_name:
index += 1
else:
node.children.pop(index)
node.children.pop(index)
node.children.insert(index, switch_node)
switch_node.parent = node
continue # to update child = node.children[index]
if index < len(node.children) - 1 and type(child) == ASTExpression and child.expression.symbol.id == '-=' and child.expression.children[1].token.value(source) == '1' \
and type(node.children[index+1]) == ASTIf and len(node.children[index+1].expression.children) == 2 \
and node.children[index+1].expression.children[0].token.value(source) == child.expression.children[0].token.value(source): # transform `nesting_level -= 1 \n if nesting_level == 0:` into `if --nesting_level == 0`
child.expression.parent = node.children[index+1].expression#.children[0].parent
node.children[index+1].expression.children[0] = child.expression
node.children.pop(index)
continue
if type(child) == ASTFor:
                    if len(child.loop_variables): # detect modification of the loop variables, and add the `=` qualifier to the modified ones
lvars = child.loop_variables
found = set()
def detect_lvars_modification(node):
if type(node) == ASTExprAssignment:
nonlocal found
if node.dest_expression.token_str() in lvars:
found.add(node.dest_expression.token_str())
if len(lvars) == 1:
return
elif node.dest_expression.tuple:
for t in node.dest_expression.children:
if t.token_str() in lvars:
found.add(t.token_str())
if len(lvars) == 1:
return
def f(e : SymbolNode):
if e.symbol.id[-1] == '=' and e.symbol.id not in ('==', '!=', '<=', '>=') and e.children[0].token_str() in lvars: # +=, -=, *=, /=, etc.
nonlocal found
found.add(e.children[0].token_str())
node.walk_expressions(f)
node.walk_children(detect_lvars_modification)
detect_lvars_modification(child)
for lvar in found:
lvari = lvars.index(lvar)
child.loop_variables[lvari] = '=' + child.loop_variables[lvari]
if child.expression.symbol.id == '(' and child.expression.children[0].symbol.id == '.' \
and child.expression.children[0].children[0].token_str() == 'os' \
and child.expression.children[0].children[1].token_str() == 'walk': # ) # detect `for ... in os.walk(...)` and remove `dirs[:] = ...` statement
child.os_walk = True
assert(len(child.loop_variables) == 3)
c0 = child.children[0]
if (type(c0) == ASTExprAssignment and c0.dest_expression.symbol.id == '[' # ]
and len(c0.dest_expression.children) == 2
and c0.dest_expression.children[1] is None
and c0.dest_expression.children[0].token_str() == child.loop_variables[1]
and c0.expression.symbol.id == '[' # ]
and len(c0.expression.children) == 1
and c0.expression.children[0].symbol.id == 'for'
and len(c0.expression.children[0].children) == 4
and c0.expression.children[0].children[1].to_str()
== c0.expression.children[0].children[0].to_str()):
child.dir_filter = c0.expression.children[0].children[1].to_str() + ' -> ' + c0.expression.children[0].children[3].to_str()
child.children.pop(0)
elif child.expression.symbol.id == '(' and child.expression.children[0].token_str() == 'enumerate': # )
assert(len(child.loop_variables) == 2)
set_index_node = ASTExprAssignment()
set_index_node.set_dest_expression(SymbolNode(Token(0, 0, Token.Category.NAME), child.loop_variables[0].lstrip('=')))
child.loop_variables.pop(0)
start = ''
if len(child.expression.children) >= 5:
if child.expression.children[4] is not None:
assert(child.expression.children[3].to_str() == 'start')
start = child.expression.children[4].to_str()
else:
start = child.expression.children[3].to_str()
set_index_node.set_expression(SymbolNode(Token(0, 0, Token.Category.NAME), 'L.index' + (' + ' + start if start != '' else '')))
set_index_node.add_vars = [True]
set_index_node.parent = child
child.children.insert(0, set_index_node)
child.expression.children[0].parent = child.expression.parent
child.expression.children[0].ast_parent = child.expression.ast_parent
child.expression = child.expression.children[1]
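                        # Note: the rewrite above turns `for i, x in enumerate(xs[, start])`
                        # into a loop over `xs` alone, with an `i = L.index [+ start]`
                        # assignment (11l's built-in loop index) inserted as the first
                        # statement of the loop body.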
                elif type(child) == ASTFunctionDefinition: # detect modification of the function's arguments inside this function, and add the `=` qualifier to the modified ones
if len(child.function_arguments):
fargs = [farg[0] for farg in child.function_arguments]
found = set()
def detect_arguments_modification(node):
if type(node) == ASTExprAssignment:
nonlocal found
if node.dest_expression.token_str() in fargs:
found.add(node.dest_expression.token_str())
if len(fargs) == 1:
return
elif node.dest_expression.tuple:
for t in node.dest_expression.children:
if t.token_str() in fargs:
found.add(t.token_str())
if len(fargs) == 1:
return
def f(e : SymbolNode):
if e.symbol.id[-1] == '=' and e.symbol.id not in ('==', '!=', '<=', '>=') and e.children[0].token_str() in fargs: # +=, -=, *=, /=, etc.
nonlocal found
found.add(e.children[0].token_str())
node.walk_expressions(f)
node.walk_children(detect_arguments_modification)
detect_arguments_modification(child)
for farg in found:
fargi = fargs.index(farg)
if child.function_arguments[fargi][3] != '&': # if argument already has `&` qualifier, then qualifier `=` is not needed
child.function_arguments[fargi] = ('=' + child.function_arguments[fargi][0], child.function_arguments[fargi][1], child.function_arguments[fargi][2], child.function_arguments[fargi][3])
index += 1
node.walk_children(transformations)
transformations(p)
s = p.to_str() # call `to_str()` moved here [from outside] because it accesses global variables `source` (via `token.value(source)`) and `tokens` (via `tokens[ti]`)
tokens = prev_tokens
source = prev_source
tokeni = prev_tokeni
token = prev_token
# scope = prev_scope
tokensn = prev_tokensn
file_name = prev_file_name
    return s

# ======== end of python_to_11l/parse.py; python_to_11l/tokenizer.py follows ========
from typing import List, Tuple
Char = str
from enum import IntEnum
keywords = [ # https://docs.python.org/3/reference/lexical_analysis.html#keywords
'False', 'await', 'else', 'import', 'pass',
'None', 'break', 'except', 'in', 'raise',
'True', 'class', 'finally', 'is', 'return',
'and', 'continue', 'for', 'lambda', 'try',
'as', 'def', 'from', 'nonlocal', 'while',
'assert', 'del', 'global', 'not', 'with',
'async', 'elif', 'if', 'or', 'yield',]
operators = [ # https://docs.python.org/3/reference/lexical_analysis.html#operators
'+', '-', '*', '**', '/', '//', '%', '@',
'<<', '>>', '&', '|', '^', '~',
'<', '>', '<=', '>=', '==', '!=',]
#operators.sort(key = lambda x: len(x), reverse = True)
delimiters = [ # https://docs.python.org/3/reference/lexical_analysis.html#delimiters
'(', ')', '[', ']', '{', '}',
',', ':', '.', ';', '@', '=', '->',
'+=', '-=', '*=', '/=', '//=', '%=', '@=',
'&=', '|=', '^=', '>>=', '<<=', '**=',]
#delimiters.sort(key = lambda x: len(x), reverse = True)
operators_and_delimiters = sorted(operators + delimiters, key = lambda x: len(x), reverse = True)
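# Note: sorting longest-first gives the usual maximal-munch behaviour in the
# tokenizer loop below: `**=` is tried before `**`, which is tried before `*`,
# so `a **= b` yields a single `**=` token rather than `*`, `*`, `=`.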
class Error(Exception):
message : str
pos : int
end : int
def __init__(self, message, pos):
self.message = message
self.pos = pos
self.end = pos
class Token:
class Category(IntEnum): # why ‘Category’: >[https://docs.python.org/3/reference/lexical_analysis.html#other-tokens]:‘the following categories of tokens exist’
NAME = 0 # or IDENTIFIER
KEYWORD = 1
CONSTANT = 2
OPERATOR_OR_DELIMITER = 3
NUMERIC_LITERAL = 4
STRING_LITERAL = 5
INDENT = 6 # [https://docs.python.org/3/reference/lexical_analysis.html#indentation][-1]
DEDENT = 7
STATEMENT_SEPARATOR = 8
start : int
end : int
category : Category
def __init__(self, start, end, category):
self.start = start
self.end = end
self.category = category
def __repr__(self):
return str(self.start)
def value(self, source):
return source[self.start:self.end]
def to_str(self, source):
return 'Token('+str(self.category)+', "'+self.value(source)+'")'
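# Note: a rough sketch of what tokenize() produces (derived by tracing the
# code below) for the source:
#     if x:
#         f(x)
#     y = 1
# KEYWORD(if) NAME(x) OP(:) INDENT NAME(f) OP(() NAME(x) OP()) DEDENT
# NAME(y) OP(=) NUMERIC_LITERAL(1)
# i.e. INDENT/DEDENT tokens encode the block structure, much like CPython's
# own tokenizer.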
def tokenize(source, newline_chars : List[int] = None, comments : List[Tuple[int, int]] = None):
tokens : List[Token] = []
indentation_levels : List[int] = []
nesting_elements : List[Tuple[Char, int]] = [] # parentheses, square brackets or curly braces
begin_of_line = True
expected_an_indented_block = False
i = 0
while i < len(source):
if begin_of_line: # at the beginning of each line, the line's indentation level is compared to the last of the indentation_levels [:1]
begin_of_line = False
linestart = i
indentation_level = 0
while i < len(source):
if source[i] == ' ':
indentation_level += 1
elif source[i] == "\t":
                    indentation_level += 8 # consider a tab to be just 8 spaces (I know that Python 3 uses different rules ([-1]:‘Tabs are replaced (from left to right) by one to eight spaces’), but I disagree with the Python 3 approach, so I decided to use this simpler solution)
else:
break
i += 1
if i == len(source): # end of source
break
if source[i] in "\r\n#": # lines with only whitespace and/or comments do not affect the indentation
continue
prev_indentation_level = indentation_levels[-1] if len(indentation_levels) else 0
if expected_an_indented_block:
if not indentation_level > prev_indentation_level:
raise Error('expected an indented block', i)
if indentation_level == prev_indentation_level: # [1:] [-1]:‘If it is equal, nothing happens.’ [:2]
if len(tokens):
tokens.append(Token(linestart-1, linestart, Token.Category.STATEMENT_SEPARATOR))
elif indentation_level > prev_indentation_level: # [2:] [-1]:‘If it is larger, it is pushed on the stack, and one INDENT token is generated.’ [:3]
if not expected_an_indented_block:
raise Error('unexpected indent', i)
expected_an_indented_block = False
indentation_levels.append(indentation_level)
tokens.append(Token(linestart, i, Token.Category.INDENT))
else: # [3:] [-1]:‘If it is smaller, it ~‘must’ be one of the numbers occurring on the stack; all numbers on the stack that are larger are popped off, and for each number popped off a DEDENT token is generated.’ [:4]
while True:
indentation_levels.pop()
tokens.append(Token(i, i, Token.Category.DEDENT))
level = indentation_levels[-1] if len(indentation_levels) else 0
if level == indentation_level:
break
if level < indentation_level:
raise Error('unindent does not match any outer indentation level', i)
ch = source[i]
if ch in " \t":
i += 1 # just skip whitespace characters
elif ch in "\r\n":
if newline_chars is not None:
newline_chars.append(i)
i += 1
if ch == "\r" and source[i:i+1] == "\n":
i += 1
if len(nesting_elements) == 0: # [https://docs.python.org/3/reference/lexical_analysis.html#implicit-line-joining ‘Implicit line joining’]:‘Expressions in parentheses, square brackets or curly braces can be split over more than one physical line without using backslashes.’
begin_of_line = True
elif ch == '#':
comment_start = i
i += 1
while i < len(source) and source[i] not in "\r\n":
i += 1
if comments is not None:
comments.append((comment_start, i))
else:
expected_an_indented_block = ch == ':'
operator_or_delimiter = ''
for op in operators_and_delimiters:
if source[i:i+len(op)] == op:
operator_or_delimiter = op
break
lexem_start = i
i += 1
category : Token.Category
if operator_or_delimiter != '':
i = lexem_start + len(operator_or_delimiter)
category = Token.Category.OPERATOR_OR_DELIMITER
if ch in '([{':
nesting_elements.append((ch, lexem_start))
elif ch in ')]}': # ([{
if len(nesting_elements) == 0 or nesting_elements[-1][0] != {')':'(', ']':'[', '}':'{'}[ch]: # }])
raise Error('there is no corresponding opening parenthesis/bracket/brace for `' + ch + '`', lexem_start)
nesting_elements.pop()
elif ch == ';':
category = Token.Category.STATEMENT_SEPARATOR
elif ch in ('"', "'") or (ch in 'rRbB' and source[i:i+1] in ('"', "'")):
ends : str
if ch in 'rRbB':
ends = source[i:i+3] if source[i:i+3] in ('"""', "'''") else source[i]
else:
i -= 1
ends = source[i:i+3] if source[i:i+3] in ('"""', "'''") else ch
i += len(ends)
while True:
if i == len(source):
raise Error('unclosed string literal', lexem_start)
if source[i] == '\\':
i += 1
if i == len(source):
continue
elif source[i:i+len(ends)] == ends:
i += len(ends)
break
i += 1
category = Token.Category.STRING_LITERAL
elif ch.isalpha() or ch == '_': # this is NAME/IDENTIFIER or KEYWORD
while i < len(source):
ch = source[i]
if not (ch.isalpha() or ch == '_' or '0' <= ch <= '9' or ch == '?'):
break
i += 1
if source[lexem_start:i] in keywords:
if source[lexem_start:i] in ('None', 'False', 'True'):
category = Token.Category.CONSTANT
else:
category = Token.Category.KEYWORD
else:
category = Token.Category.NAME
elif (ch in '-+' and '0' <= source[i:i+1] <= '9') or '0' <= ch <= '9': # this is NUMERIC_LITERAL
if ch in '-+':
                    assert(False) # considering the sign a part of the numeric literal is a bad idea: expressions like `j-3` would cease to parse correctly
#sign = ch
ch = source[i+1]
else:
i -= 1
is_hex = ch == '0' and source[i+1:i+2] in ('x', 'X')
is_oct = ch == '0' and source[i+1:i+2] in ('o', 'O')
is_bin = ch == '0' and source[i+1:i+2] in ('b', 'B')
if is_hex or is_oct or is_bin:
i += 2
# if not '0' <= source[i:i+1] <= '9':
# raise Error('expected digit', i)
start = i
i += 1
if is_hex:
while i < len(source) and ('0' <= source[i] <= '9' or 'a' <= source[i] <= 'z' or 'A' <= source[i] <= 'Z' or source[i] == '_'):
i += 1
elif is_oct:
while i < len(source) and ('0' <= source[i] <= '7' or source[i] == '_'):
i += 1
elif is_bin:
while i < len(source) and source[i] in '01_':
i += 1
else:
while i < len(source) and ('0' <= source[i] <= '9' or source[i] in '_.eE'):
if source[i] in 'eE':
if source[i+1:i+2] in '-+':
i += 1
i += 1
if source[i:i+1] in ('j', 'J'):
i += 1
                if '_' in source[start:i] and not '.' in source[start:i]: # float numbers are not checked for now
number = source[start:i].replace('_', '')
number_with_separators = ''
j = len(number)
while j > 3:
number_with_separators = '_' + number[j-3:j] + number_with_separators
j -= 3
number_with_separators = number[0:j] + number_with_separators
if source[start:i] != number_with_separators:
raise Error('digit separator in this number is located in the wrong place (should be: '+ number_with_separators +')', start)
category = Token.Category.NUMERIC_LITERAL
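                # Note: so for integer (non-float) literals the `_` separators must group
                # digits in threes from the right: `1_000_000` is accepted, while `10_00`
                # is rejected with a suggestion of the correct placement (`1_000`).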
elif ch == '\\':
if source[i] not in "\r\n":
raise Error('only new line character allowed after backslash', i)
if source[i] == "\r":
i += 1
if source[i] == "\n":
i += 1
continue
else:
raise Error('unexpected character ' + ch, lexem_start)
tokens.append(Token(lexem_start, i, category))
if len(nesting_elements):
raise Error('there is no corresponding closing parenthesis/bracket/brace for `' + nesting_elements[-1][0] + '`', nesting_elements[-1][1])
if expected_an_indented_block:
raise Error('expected an indented block', i)
while len(indentation_levels): # [4:] [-1]:‘At the end of the file, a DEDENT token is generated for each number remaining on the stack that is larger than zero.’
tokens.append(Token(i, i, Token.Category.DEDENT))
indentation_levels.pop()
    return tokens

# ======== end of python_to_11l/tokenizer.py ========
try:
from tokenizer import Token
import tokenizer
except ImportError:
from .tokenizer import Token
from . import tokenizer
from typing import List, Tuple, Dict, Callable, Set
from enum import IntEnum
import os, eldf
class Error(Exception):
def __init__(self, message, token):
self.message = message
self.pos = token.start
self.end = token.end
class Scope:
parent : 'Scope'
node : 'ASTNode' = None
class Id:
type : str
type_node : 'ASTTypeDefinition' = None
ast_nodes : List['ASTNodeWithChildren']
last_occurrence : 'SymbolNode' = None
def __init__(self, type, ast_node = None):
assert(type is not None)
self.type = type
self.ast_nodes = []
if ast_node is not None:
self.ast_nodes.append(ast_node)
def init_type_node(self, scope):
if self.type != '':
tid = scope.find(self.type)
if tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition:
self.type_node = tid.ast_nodes[0]
def serialize_to_dict(self):
ast_nodes = []
for ast_node in self.ast_nodes:
if type(ast_node) in (ASTFunctionDefinition, ASTTypeDefinition):
ast_nodes.append(ast_node.serialize_to_dict())
return {'type': self.type, 'ast_nodes': ast_nodes}
def deserialize_from_dict(self, d):
#self.type = d['type']
for ast_node_dict in d['ast_nodes']:
ast_node = ASTFunctionDefinition() if ast_node_dict['node_type'] == 'function' else ASTTypeDefinition()
ast_node.deserialize_from_dict(ast_node_dict)
self.ast_nodes.append(ast_node)
ids : Dict[str, Id]
is_function : bool
is_lambda = False
def __init__(self, func_args):
self.parent = None
if func_args is not None:
self.is_function = True
self.ids = dict(map(lambda x: (x[0], Scope.Id(x[1])), func_args))
else:
self.is_function = False
self.ids = {}
def init_ids_type_node(self):
for id in self.ids.values():
id.init_type_node(self.parent)
def serialize_to_dict(self):
ids_dict = {}
for name, id in self.ids.items():
ids_dict[name] = id.serialize_to_dict()
return ids_dict
def deserialize_from_dict(self, d):
for name, id_dict in d.items():
id = Scope.Id(id_dict['type'])
id.deserialize_from_dict(id_dict)
self.ids[name] = id
def find_in_current_function(self, name):
s = self
while True:
if name in s.ids:
return True
if s.is_function:
return False
s = s.parent
if s is None:
return False
def find_in_current_type_function(self, name):
s = self
while True:
if name in s.ids:
return True
if s.is_function and type(s.node) == ASTFunctionDefinition and type(s.node.parent) == ASTTypeDefinition:
return False
s = s.parent
if s is None:
return False
def find(self, name):
s = self
while True:
id = s.ids.get(name)
if id is not None:
return id
s = s.parent
if s is None:
return None
def find_and_return_scope(self, name):
s = self
if type(s.node) == ASTTypeDefinition:
id = s.ids.get(name)
if id is not None:
return id, s
while True:
if type(s.node) != ASTTypeDefinition:
id = s.ids.get(name)
if id is not None:
return id, s
s = s.parent
if s is None:
return None, None
def add_function(self, name, ast_node):
if name in self.ids: # V &id = .ids.set_if_not_present(name, Id(N)) // [[[or `put_if_absent` as in Java, or `add_if_absent`]]] note that this is an error: `V id = .ids.set_if_not_present(...)`, but you can do this: `V id = copy(.ids.set_if_not_present(...))`
assert(type(self.ids[name].ast_nodes[0]) == ASTFunctionDefinition) # assert(id.ast_nodes.empty | T(id.ast_nodes[0]) == ASTFunctionDefinition)
self.ids[name].ast_nodes.append(ast_node) # id.ast_nodes [+]= ast_node
else:
self.ids[name] = Scope.Id('', ast_node)
def add_name(self, name, ast_node):
if name in self.ids: # I !.ids.set(name, Id(N, ast_node))
if isinstance(ast_node, ASTVariableDeclaration):
t = ast_node.type_token
elif isinstance(ast_node, ASTNodeWithChildren):
t = tokens[ast_node.tokeni + 1]
else:
t = token
raise Error('redefinition of already defined identifier is not allowed', t) # X Error(‘redefinition ...’, ...)
self.ids[name] = Scope.Id('', ast_node)
scope : Scope
class SymbolBase:
id : str
lbp : int
nud_bp : int
led_bp : int
nud : Callable[['SymbolNode'], 'SymbolNode']
led : Callable[['SymbolNode', 'SymbolNode'], 'SymbolNode']
def set_nud_bp(self, nud_bp, nud):
self.nud_bp = nud_bp
self.nud = nud
def set_led_bp(self, led_bp, led):
self.led_bp = led_bp
self.led = led
def __init__(self):
def nud(s): raise Error('unknown unary operator', s.token)
self.nud = nud
def led(s, l): raise Error('unknown binary operator', s.token)
self.led = led
int_is_int64 = False
class SymbolNode:
token : Token
symbol : SymbolBase = None
children : List['SymbolNode']# = []
parent : 'SymbolNode' = None
ast_parent : 'ASTNode'
function_call : bool = False
tuple : bool = False
is_list : bool = False
is_dict : bool = False
is_type : bool = False
postfix : bool = False
scope : Scope
token_str_override : str
def __init__(self, token, token_str_override = None, symbol = None):
self.token = token
self.children = []
self.scope = scope
self.token_str_override = token_str_override
self.symbol = symbol
def append_child(self, child):
child.parent = self
self.children.append(child)
def leftmost(self):
if self.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL, Token.Category.NAME, Token.Category.CONSTANT):
return self.token.start
if self.symbol.id == '(': # )
if self.function_call:
return self.children[0].token.start
else:
return self.token.start
elif self.symbol.id == '[': # ]
if self.is_list or self.is_dict:
return self.token.start
else:
return self.children[0].token.start
if len(self.children) in (2, 3):
return self.children[0].leftmost()
return self.token.start
def rightmost(self):
if self.token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL, Token.Category.NAME, Token.Category.CONSTANT):
return self.token.end
if self.symbol.id in '([': # ])
if len(self.children) == 0:
return self.token.end + 1
return (self.children[-1] or self.children[-2]).rightmost() + 1
return self.children[-1].rightmost()
def left_to_right_token(self):
return Token(self.leftmost(), self.rightmost(), Token.Category.NAME)
def token_str(self):
return self.token.value(source) if not self.token_str_override else self.token_str_override
def to_type_str(self):
if self.symbol.id == '[': # ]
if self.is_list:
assert(len(self.children) == 1)
return 'Array[' + self.children[0].to_type_str() + ']'
elif self.is_dict:
assert(len(self.children) == 1 and self.children[0].symbol.id == '=')
return 'Dict[' + self.children[0].children[0].to_type_str() + ', ' \
+ self.children[0].children[1].to_type_str() + ']'
else:
assert(self.is_type)
r = self.children[0].token.value(source) + '['
for i in range(1, len(self.children)):
r += self.children[i].to_type_str()
if i < len(self.children) - 1:
r += ', '
return r + ']'
elif self.symbol.id == '(': # )
if len(self.children) == 1 and self.children[0].symbol.id == '->':
r = 'Callable['
c0 = self.children[0]
if c0.children[0].symbol.id == '(': # )
for child in c0.children[0].children:
r += child.to_type_str() + ', '
else:
r += c0.children[0].to_type_str() + ', '
return r + c0.children[1].to_type_str() + ']'
else:
assert(self.tuple)
r = '('
for i in range(len(self.children)):
assert(self.children[i].symbol.id != '->')
r += self.children[i].to_type_str()
if i < len(self.children) - 1:
r += ', '
return r + ')'
assert(self.token.category == Token.Category.NAME)
return self.token_str()
def to_str(self):
if self.token.category == Token.Category.NAME:
if self.token_str() in ('L.index', 'Ц.индекс', 'loop.index', 'цикл.индекс'):
parent = self
while parent.parent:
parent = parent.parent
ast_parent = parent.ast_parent
while True:
if type(ast_parent) == ASTLoop:
ast_parent.has_L_index = True
break
ast_parent = ast_parent.parent
return 'Lindex'
if self.token_str() == '(.)':
# if self.parent is not None and self.parent.symbol.id == '=' and self is self.parent.children[1]: # `... = (.)` -> `... = this;`
# return 'this'
return '*this'
tid = self.scope.find(self.token_str())
if tid is not None and ((len(tid.ast_nodes) and isinstance(tid.ast_nodes[0], ASTVariableDeclaration) and tid.ast_nodes[0].is_ptr and not tid.ast_nodes[0].nullable) # `animals [+]= animal` -> `animals.append(std::move(animal));`
or (tid.type_node is not None and (tid.type_node.has_virtual_functions or tid.type_node.has_pointers_to_the_same_type))) \
and (self.parent is None or self.parent.symbol.id not in ('.', ':')):
if tid.last_occurrence is None:
last_reference = None
var_name = self.token_str()
def find_last_reference_to_identifier(node):
def f(e : SymbolNode):
if e.token.category == Token.Category.NAME and e.token_str() == var_name and id(e.scope.find(var_name)) == id(tid):
nonlocal last_reference
last_reference = e
for child in e.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(find_last_reference_to_identifier)
if tid.type_node is not None:
find_last_reference_to_identifier(self.scope.node)
tid.last_occurrence = last_reference
else:
for index in range(len(tid.ast_nodes[0].parent.children)):
if id(tid.ast_nodes[0].parent.children[index]) == id(tid.ast_nodes[0]):
for index in range(index + 1, len(tid.ast_nodes[0].parent.children)):
find_last_reference_to_identifier(tid.ast_nodes[0].parent.children[index])
tid.last_occurrence = last_reference
break
if id(tid.last_occurrence) == id(self):
return 'std::move(' + self.token_str() + ')'
if tid is not None and len(tid.ast_nodes) and isinstance(tid.ast_nodes[0], ASTVariableDeclaration) and tid.ast_nodes[0].is_ptr and tid.ast_nodes[0].nullable:
if self.parent is None or (not (self.parent.symbol.id in ('==', '!=') and self.parent.children[1].token_str() in ('N', 'Н', 'null', 'нуль'))
and not (self.parent.symbol.id == '.')
and not (self.parent.symbol.id == '?')
and not (self.parent.symbol.id == '=' and self is self.parent.children[0])):
return '*' + self.token_str()
return self.token_str().lstrip('@=').replace(':', '::')
if self.token.category == Token.Category.KEYWORD and self.token_str() in ('L.last_iteration', 'Ц.последняя_итерация', 'loop.last_iteration', 'цикл.последняя_итерация'):
parent = self
while parent.parent:
parent = parent.parent
ast_parent = parent.ast_parent
while True:
if type(ast_parent) == ASTLoop:
ast_parent.has_L_last_iteration = True
break
ast_parent = ast_parent.parent
return '(__begin == __end)'
if self.token.category == Token.Category.NUMERIC_LITERAL:
n = self.token_str()
if n[-1] in 'oо':
return '0' + n[:-1] + 'LL'*int_is_int64
if n[-1] in 'bд':
return '0b' + n[:-1] + 'LL'*int_is_int64
if n[-1] == 's':
return n[:-1] + 'f'
if n[4:5] == "'" or n[-3:-2] == "'" or n[-2:-1] == "'":
nn = ''
for c in n:
nn += {'А':'A','Б':'B','С':'C','Д':'D','Е':'E','Ф':'F'}.get(c, c)
if n[-2:-1] == "'":
nn = nn.replace("'", '')
return '0x' + nn
if '.' in n or 'e' in n:
return n
return n + 'LL'*int_is_int64
if self.token.category == Token.Category.STRING_LITERAL:
s = self.token_str()
if s[0] == '"':
return 'u' + s + '_S'
eat_left = 0
while s[eat_left] == "'":
eat_left += 1
eat_right = 0
while s[-1-eat_right] == "'":
eat_right += 1
s = s[1+eat_left*2:-1-eat_right*2]
if '\\' in s or "\n" in s:
delimiter = '' # (
while ')' + delimiter + '"' in s:
delimiter += "'"
return 'uR"' + delimiter + '(' + s + ')' + delimiter + '"_S'
return 'u"' + repr(s)[1:-1].replace('"', R'\"').replace(R"\'", "'") + '"_S'
if self.token.category == Token.Category.CONSTANT:
return {'N': 'nullptr', 'Н': 'nullptr', 'null': 'nullptr', 'нуль': 'nullptr', '0B': 'false', '0В': 'false', '1B': 'true', '1В': 'true'}[self.token_str()]
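        # The two helpers below detect single-character string literals
        # (`"x"`, or a two-character escape such as `"\n"`) and, where a
        # character is expected, emit a C++ char literal like `u'x'_C`
        # instead of a string literal.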
def is_char(child):
ts = child.token_str()
return child.token.category == Token.Category.STRING_LITERAL and (len(ts) == 3 or (ts[:2] == '"\\' and len(ts) == 4))
def char_or_str(child, is_char):
if is_char:
if child.token_str()[1:-1] == "\\":
return R"u'\\'_C"
return "u'" + child.token_str()[1:-1].replace("'", R"\'") + "'_C"
return child.to_str()
if self.symbol.id == '(': # )
if self.function_call:
func_name = self.children[0].to_str()
f_node = None
if self.children[0].symbol.id == '.':
if len(self.children[0].children) == 1:
s = self.scope
while True:
if s.is_function:
if type(s.node) != ASTFunctionDefinition:
assert(s.is_lambda)
raise Error('probably `@` is missing (before this dot)', self.children[0].token)
if type(s.node.parent) == ASTTypeDefinition:
assert(s.node.parent.scope == s.parent)
fid = s.node.parent.find_id_including_base_types(self.children[0].children[0].to_str())
if fid is None:
raise Error('call of undefined method `' + func_name + '`', self.children[0].children[0].token)
if len(fid.ast_nodes) > 1:
                                        raise Error('method overloading is not supported for now', self.children[0].children[0].token)
f_node = fid.ast_nodes[0]
if type(f_node) == ASTTypeDefinition:
if len(f_node.constructors) == 0:
f_node = ASTFunctionDefinition()
else:
if len(f_node.constructors) > 1:
                                                raise Error('constructor overloading is not supported for now (see type `' + f_node.type_name + '`)', self.children[0].left_to_right_token())
f_node = f_node.constructors[0]
break
s = s.parent
assert(s)
elif func_name.endswith('.map') and self.children[2].token.category == Token.Category.NAME and self.children[2].token_str()[0].isupper():
c2 = self.children[2].to_str()
return func_name + '([](const auto &x){return ' + {'Int':'to_int', 'Int64':'to_int64', 'UInt64':'to_uint64', 'UInt32':'to_uint32', 'Float':'to_float'}.get(c2, c2) + '(x);})'
elif func_name.endswith('.split'):
f_node = type_of(self.children[0])
                    if f_node is None: # assume this is a String method
f_node = builtins_scope.find('String').ast_nodes[0].scope.ids.get('split').ast_nodes[0]
elif self.children[0].children[1].token.value(source) == 'union':
func_name = self.children[0].children[0].to_str() + '.set_union'
else:
f_node = type_of(self.children[0])
elif func_name == 'Int':
if self.children[1] is not None and self.children[1].token_str() == "bytes'":
return 'int_from_bytes(' + self.children[2].to_str() + ')'
func_name = 'to_int'
f_node = builtins_scope.find('Int').ast_nodes[0].constructors[0]
elif func_name == 'Int64':
func_name = 'to_int64'
f_node = builtins_scope.find('Int').ast_nodes[0].constructors[0]
elif func_name == 'UInt64':
func_name = 'to_uint64'
f_node = builtins_scope.find('Int').ast_nodes[0].constructors[0]
elif func_name == 'UInt32':
func_name = 'to_uint32'
f_node = builtins_scope.find('Int').ast_nodes[0].constructors[0]
elif func_name == 'Float':
func_name = 'to_float'
elif func_name == 'Char' and self.children[2].token.category == Token.Category.STRING_LITERAL:
assert(self.children[1] is None) # [-TODO: write a good error message-]
if not is_char(self.children[2]):
                        raise Error('Char can be constructed only from single-character string literals', self.children[2].token)
return char_or_str(self.children[2], True)
elif func_name.startswith('Array['): # ]
func_name = 'Array<' + func_name[6:-1] + '>'
elif func_name == 'Array': # `list(range(1,10))` -> `Array(1.<10)` -> `create_array(range_el(1, 10))`
func_name = 'create_array'
elif self.children[0].symbol.id == '[' and self.children[0].is_list: # ] # `[Type]()` -> `Array<Type>()`
func_name = trans_type(self.children[0].to_type_str(), self.children[0].scope, self.children[0].token)
elif func_name == 'Dict':
func_name = 'create_dict'
elif func_name.startswith('DefaultDict['): # ]
func_name = 'DefaultDict<' + ', '.join(trans_type(c.to_type_str(), c.scope, c.token) for c in self.children[0].children[1:]) + '>'
elif func_name in ('Set', 'Deque'):
func_name = 'create_set' if func_name == 'Set' else 'create_deque'
if self.children[2].is_list:
c = self.children[2].children
res = func_name + ('<' + trans_type(c[0].children[0].token_str(), self.scope, c[0].children[0].token)
+ '>' if len(c) > 1 and c[0].function_call and c[0].children[0].token_str()[0].isupper() else '') + '({'
for i in range(len(c)):
res += c[i].to_str()
if i < len(c)-1:
res += ', '
return res + '})'
elif func_name.startswith(('Set[', 'Deque[')): # ]]
c = self.children[0].children[1]
func_name = func_name[:func_name.find('[')] + '<' + trans_type(c.to_type_str(), c.scope, c.token) + '>' # ]
elif func_name == 'sum' and self.children[2].function_call and self.children[2].children[0].symbol.id == '.' and self.children[2].children[0].children[1].token_str() == 'map':
assert(len(self.children) == 3)
return 'sum_map(' + self.children[2].children[0].children[0].to_str() + ', ' + self.children[2].children[2].to_str() + ')'
elif func_name in ('min', 'max') and len(self.children) == 5 and self.children[3] is not None and self.children[3].token_str() == "key'":
return func_name + '_with_key(' + self.children[2].to_str() + ', ' + self.children[4].to_str() + ')'
elif func_name == 'copy':
s = self.scope
while True:
if s.is_function:
if type(s.node.parent) == ASTTypeDefinition:
fid = s.parent.ids.get('copy')
if fid is not None:
func_name = '::copy'
break
s = s.parent
assert(s)
elif func_name == 'move':
func_name = 'std::move'
elif func_name == '*this':
func_name = '(*this)' # function call has higher precedence than dereference in C++, so `*this(...)` is equivalent to `*(this(...))`
elif self.children[0].symbol.id == '[': # ]
pass
elif self.children[0].function_call: # for `enumFromTo(0)(1000)`
pass
else:
if self.children[0].symbol.id == ':':
fid, sc = find_module(self.children[0].children[0].to_str()).scope.find_and_return_scope(self.children[0].children[1].token_str())
else:
fid, sc = self.scope.find_and_return_scope(func_name)
if fid is None:
raise Error('call of undefined function `' + func_name + '`', self.children[0].left_to_right_token())
if len(fid.ast_nodes) > 1:
                        raise Error('function overloading is not supported for now', self.children[0].left_to_right_token())
if len(fid.ast_nodes) == 0:
                        if sc.is_function: # for calling arguments that are functions, e.g. `F amb(comp, ...)...comp(prev, opt)`
f_node = None
else:
raise Error('node of function `' + func_name + '` is not found', self.children[0].left_to_right_token())
else:
f_node = fid.ast_nodes[0]
if type(f_node) == ASTLoop: # for `L(justify) [(s, w) -> ...]...justify(...)`
f_node = None
else:
#assert(type(f_node) in (ASTFunctionDefinition, ASTTypeDefinition) or (type(f_node) in (ASTVariableInitialization, ASTVariableDeclaration) and f_node.function_pointer)
# or (type(f_node) == ASTVariableInitialization and f_node.expression.symbol.id == '->'))
if type(f_node) == ASTTypeDefinition:
if f_node.has_virtual_functions or f_node.has_pointers_to_the_same_type:
func_name = 'std::make_unique<' + func_name + '>'
# elif f_node.has_pointers_to_the_same_type:
# func_name = 'make_SharedPtr<' + func_name + '>'
if len(f_node.constructors) == 0:
f_node = ASTFunctionDefinition()
else:
if len(f_node.constructors) > 1:
                                raise Error('constructor overloading is not supported for now (see type `' + f_node.type_name + '`)', self.children[0].left_to_right_token())
f_node = f_node.constructors[0]
last_function_arg = 0
res = func_name + '('
for i in range(1, len(self.children), 2):
if self.children[i] is None:
cstr = self.children[i+1].to_str()
if f_node is not None and type(f_node) == ASTFunctionDefinition:
if last_function_arg >= len(f_node.function_arguments):
raise Error('too many arguments for function `' + func_name + '`', self.children[0].left_to_right_token())
if f_node.first_named_only_argument is not None and last_function_arg >= f_node.first_named_only_argument:
raise Error('argument `' + f_node.function_arguments[last_function_arg][0] + '` of function `' + func_name + '` is named-only', self.children[i+1].token)
if len(f_node.function_arguments[last_function_arg]) > 3 and '&' in f_node.function_arguments[last_function_arg][3] and not (self.children[i+1].symbol.id == '&' and len(self.children[i+1].children) == 1):
raise Error('argument `' + f_node.function_arguments[last_function_arg][0] + '` of function `' + func_name + '` is in-out, but there is no `&` prefix', self.children[i+1].token)
if f_node.function_arguments[last_function_arg][2] == 'File?':
tid = self.scope.find(self.children[i+1].token_str())
if tid is None or tid.type != 'File?':
res += '&'
elif f_node.function_arguments[last_function_arg][2].endswith('?') and f_node.function_arguments[last_function_arg][2] != 'Int?' and not cstr.startswith(('std::make_unique<', 'make_SharedPtr<')):
res += '&'
res += cstr
last_function_arg += 1
else:
if f_node is None or type(f_node) != ASTFunctionDefinition:
                        raise Error('function `' + func_name + '` is not found (you can remove named arguments in the function call to suppress this error)', self.children[0].left_to_right_token())
argument_name = self.children[i].token_str()[:-1]
while True:
if last_function_arg == len(f_node.function_arguments):
raise Error('argument `' + argument_name + '` is not found in function `' + func_name + '`', self.children[i].token)
if f_node.function_arguments[last_function_arg][0] == argument_name:
last_function_arg += 1
break
if f_node.function_arguments[last_function_arg][1] == '':
raise Error('argument `' + f_node.function_arguments[last_function_arg][0] + '` of function `' + func_name + '` has no default value, please specify its value here', self.children[i].token)
res += f_node.function_arguments[last_function_arg][1] + ', '
last_function_arg += 1
if f_node.function_arguments[last_function_arg-1][2].endswith('?') and not '->' in f_node.function_arguments[last_function_arg-1][2]:
res += '&'
res += self.children[i+1].to_str()
if i < len(self.children)-2:
res += ', '
if f_node is not None:
if type(f_node) == ASTFunctionDefinition:
while last_function_arg < len(f_node.function_arguments):
if f_node.function_arguments[last_function_arg][1] == '':
t = self.children[len(self.children)-1].token
raise Error('missing required argument `'+ f_node.function_arguments[last_function_arg][0] + '`', Token(t.end, t.end, Token.Category.DELIMITER))
last_function_arg += 1
elif f_node.function_pointer:
if last_function_arg != len(f_node.type_args):
raise Error('wrong number of arguments passed to function pointer', Token(self.children[0].token.end, self.children[0].token.end, Token.Category.DELIMITER))
return res + ')'
elif self.tuple:
res = 'make_tuple('
for i in range(len(self.children)):
res += self.children[i].to_str()
if i < len(self.children)-1:
res += ', '
return res + ')'
else:
assert(len(self.children) == 1)
                if self.children[0].symbol.id in ('..', '.<', '.+', '<.', '<.<'): # so that `range_el(0, seq.len())` is emitted instead of `(range_el(0, seq.len()))`
return self.children[0].to_str()
return '(' + self.children[0].to_str() + ')'
elif self.symbol.id == '[': # ]
if self.is_list:
if len(self.children) == 0:
raise Error('empty array is not supported', self.left_to_right_token())
type_of_values_is_char = True
for child in self.children:
if not is_char(child):
type_of_values_is_char = False
break
res = 'create_array' + ('<' + trans_type(self.children[0].children[0].token_str(), self.scope, self.children[0].children[0].token)
+ '>' if len(self.children) > 1 and self.children[0].function_call and self.children[0].children[0].token_str()[0].isupper() and self.children[0].children[0].token_str() not in ('Array', 'Set') else '') + '({'
for i in range(len(self.children)):
res += char_or_str(self.children[i], type_of_values_is_char)
if i < len(self.children)-1:
res += ', '
return res + '})'
elif self.is_dict:
char_key = True
char_val = True
for child in self.children:
assert(child.symbol.id == '=')
if not is_char(child.children[0]):
char_key = False
if not is_char(child.children[1]):
char_val = False
res = 'create_dict(dict_of'
for child in self.children:
c0 = child.children[0]
if c0.symbol.id == '.' and len(c0.children) == 2 and c0.children[1].token_str().isupper():
c0str = c0.to_str().replace('.', '::') # replace `python_to_11l:tokenizer:Token.Category.NAME` with `python_to_11l::tokenizer::Token::Category::NAME`
else:
c0str = char_or_str(c0, char_key)
res += '(' + c0str + ', ' + char_or_str(child.children[1], char_val) + ')'
return res + ')'
elif self.children[1].token.category == Token.Category.NUMERIC_LITERAL:
                return '_get<' + self.children[1].to_str() + '>(' + self.children[0].to_str() + ')' # to support tuples (e.g. `(1, 2)[0]` -> `_get<0>(make_tuple(1, 2))`)
else:
c1 = self.children[1].to_str()
if c1.startswith('(len)'):
return self.children[0].to_str() + '.at_plus_len(' + c1[len('(len)'):] + ')'
return self.children[0].to_str() + '[' + c1 + ']'
elif self.symbol.id in ('S', 'В', 'switch', 'выбрать'):
char_val = True
for i in range(1, len(self.children), 2):
if not is_char(self.children[i+1]):
char_val = False
res = '[&](const auto &a){return ' # `[&]` is for `cc = {'а':'A','б':'B','с':'C','д':'D','е':'E','ф':'F'}.get(c.lower(), c)` -> `[&](const auto &a){return a == u'а'_C ? u"A"_S : ... : c;}(c.lower())`
was_break = False
for i in range(1, len(self.children), 2):
if self.children[i].token.value(source) in ('E', 'И', 'else', 'иначе'):
res += char_or_str(self.children[i+1], char_val)
was_break = True
break
res += ('a == ' + (char_or_str(self.children[i], is_char(self.children[i]))[:-2] if self.children[i].token.category == Token.Category.STRING_LITERAL else self.children[i].to_str()) if self.children[i].symbol.id not in ('..', '.<', '.+', '<.', '<.<')
else 'in(a, ' + self.children[i].to_str() + ')') + ' ? ' + char_or_str(self.children[i+1], char_val) + ' : '
# L.was_no_break
# res ‘’= ‘throw KeyError(a)’
return res + ('throw KeyError(a)' if not was_break else '') + ';}(' + self.children[0].to_str() + ')'
if len(self.children) == 1:
#return '(' + self.symbol.id + self.children[0].to_str() + ')'
if self.postfix:
return self.children[0].to_str() + self.symbol.id
elif self.symbol.id == ':':
c0 = self.children[0].to_str()
if c0 in ('stdin', 'stdout', 'stderr'):
return '_' + c0
if importing_module:
return os.path.basename(file_name)[:-4] + '::' + c0
return '::' + c0
elif self.symbol.id == '.':
c0 = self.children[0].to_str()
sn = self
while True:
if sn.symbol.id == '.' and len(sn.children) == 3:
                        return 'T.' + c0 + '()'*(c0 in ('len', 'last', 'empty')) # T means ‘t’emporary [variable]; it can be safely used here because `T` is a keyletter
if sn.parent is None:
n = sn.ast_parent
while n is not None:
if type(n) == ASTWith:
return 'T.' + c0
n = n.parent
break
sn = sn.parent
if self.scope.find_in_current_function(c0):
return 'this->' + c0
else:
return c0
elif self.symbol.id == '..':
c0 = self.children[0].to_str()
if c0.startswith('(len)'):
return 'range_elen_i(' + c0[len('(len)'):] + ')'
else:
return 'range_ei(' + c0 + ')'
elif self.symbol.id == '&':
assert(self.parent.function_call)
return self.children[0].to_str()
else:
return {'(-)':'~'}.get(self.symbol.id, self.symbol.id) + self.children[0].to_str()
elif len(self.children) == 2:
#return '(' + self.children[0].to_str() + ' ' + self.symbol.id + ' ' + self.children[1].to_str() + ')'
def char_if_len_1(child):
return char_or_str(child, is_char(child))
if self.symbol.id == '.':
cts0 = self.children[0].token_str()
c1 = self.children[1].to_str()
if cts0 == '@':
if self.scope.find_in_current_type_function(c1):
return 'this->' + c1
else:
return c1
if cts0 == '.' and len(self.children[0].children) == 1: # `.left.tree_indent()` -> `left->tree_indent()`
c00 = self.children[0].children[0].token_str()
id_ = self.scope.find(c00)
if id_ is None and type(self.scope.node) == ASTFunctionDefinition and type(self.scope.node.parent) == ASTTypeDefinition:
id_ = self.scope.node.parent.find_id_including_base_types(c00)
if id_ is not None and len(id_.ast_nodes) and type(id_.ast_nodes[0]) in (ASTVariableInitialization, ASTVariableDeclaration):
if id_.ast_nodes[0].is_reference:
return c00 + '->' + c1
tid = self.scope.find(id_.ast_nodes[0].type.rstrip('?'))
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_pointers_to_the_same_type:
return c00 + '->' + c1
if cts0 == ':' and len(self.children[0].children) == 1: # `:token_node.symbol` -> `::token_node->symbol`
id_ = global_scope.find(self.children[0].children[0].token_str())
if id_ is not None and len(id_.ast_nodes) and type(id_.ast_nodes[0]) in (ASTVariableInitialization, ASTVariableDeclaration):
tid = self.scope.find(id_.ast_nodes[0].type)#.rstrip('?')
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_pointers_to_the_same_type:
return '::' + self.children[0].children[0].token_str() + '->' + c1
if cts0 == '.' and len(self.children[0].children) == 2: # // for `ASTNode token_node; token_node.symbol.id = sid` -> `... token_node->symbol->id = sid`
t_node = type_of(self.children[0]) # \\ and `ASTNode token_node; ... :token_node.symbol.id = sid` -> `... ::token_node->symbol->id = sid`
if t_node is not None and type(t_node) in (ASTVariableDeclaration, ASTVariableInitialization) and (t_node.is_reference or t_node.is_ptr): # ( # t_node.is_shared_ptr):
return self.children[0].to_str() + '->' + c1
if cts0 == '(': # ) # `parse(expr_str).eval()` -> `parse(expr_str)->eval()`
fid, sc = self.scope.find_and_return_scope(self.children[0].children[0].token_str())
if fid is not None and len(fid.ast_nodes) == 1:
f_node = fid.ast_nodes[0]
if type(f_node) == ASTFunctionDefinition and f_node.function_return_type != '':
frtid = sc.find(f_node.function_return_type)
if frtid is not None and len(frtid.ast_nodes) == 1 and type(frtid.ast_nodes[0]) == ASTTypeDefinition and frtid.ast_nodes[0].has_pointers_to_the_same_type:
return self.children[0].to_str() + '->' + c1
if cts0 in ('Float', 'Float32') and c1 == 'infinity':
return 'std::numeric_limits<' + cpp_type_from_11l[cts0] + '>::infinity()'
id_, s = self.scope.find_and_return_scope(cts0.lstrip('@='))
if id_ is not None:
if id_.type != '' and id_.type.endswith('?'):
return cts0.lstrip('@=') + '->' + c1
if len(id_.ast_nodes) and type(id_.ast_nodes[0]) == ASTLoop and id_.ast_nodes[0].is_loop_variable_a_ptr and cts0 == id_.ast_nodes[0].loop_variable:
return cts0 + '->' + c1
if len(id_.ast_nodes) and type(id_.ast_nodes[0]) == ASTVariableInitialization and (id_.ast_nodes[0].is_ptr): # ( # or id_.ast_nodes[0].is_shared_ptr):
return self.children[0].to_str() + '->' + c1 + '()'*(c1 in ('len', 'last', 'empty')) # `to_str()` is needed for such case: `animal.say(); animals [+]= animal; animal.say()` -> `animal->say(); animals.append(animal); std::move(animal)->say();`
if len(id_.ast_nodes) and type(id_.ast_nodes[0]) in (ASTVariableInitialization, ASTVariableDeclaration): # `Node tree = ...; tree.tree_indent()` -> `... tree->tree_indent()` # (
tid = self.scope.find(id_.ast_nodes[0].type)#.rstrip('?'))
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_pointers_to_the_same_type:
return cts0 + '->' + c1
if id_.type != '' and s.is_function:
tid = s.find(id_.type)
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_pointers_to_the_same_type:
return cts0 + '->' + c1
if c1.isupper():
c0 = self.children[0].to_str()
#assert(c0[0].isupper())
return c0.replace('.', '::') + '::' + c1 # replace `Token.Category.STATEMENT_SEPARATOR` with `Token::Category::STATEMENT_SEPARATOR`
                return char_if_len_1(self.children[0]) + '.' + c1 + '()'*(c1 in ('len', 'last', 'empty', 'real', 'imag') and not (self.parent is not None and self.parent.function_call and self is self.parent.children[0])) # char_if_len_1 is needed here because `u"0"_S.code` (obtained from #(11l)‘‘0’.code’) is illegal [correct: `u'0'_C.code`]
elif self.symbol.id == ':':
c0 = self.children[0].to_str()
c0 = {'time':'timens', # 'time': a symbol with this name already exists and therefore this name cannot be used as a namespace name
'random':'randomns'}.get(c0, c0) # GCC: .../11l-lang/_11l_to_cpp/11l_hpp/random.hpp:1:11: error: ‘namespace random { }’ redeclared as different kind of symbol
c1 = self.children[1].to_str()
return c0 + '::' + (c1 if c1 != '' else '_')
elif self.symbol.id == '->':
captured_variables = set()
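                # Build the C++ lambda capture list: names prefixed with `@`
                # are captured by reference (`&name`), `@=` means capture by
                # value, and a bare `@` or `(.)` captures `this`.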
def gather_captured_variables(sn):
if sn.token.category == Token.Category.NAME:
if sn.token_str().startswith('@'):
by_ref = True # sn.parent.children[0] is sn and ((sn.parent.symbol.id[-1] == '=' and sn.parent.symbol.id not in ('==', '!='))
# or (sn.parent.symbol.id == '.' and sn.parent.children[1].token_str() == 'append'))
t = sn.token_str()[1:]
if t.startswith('='):
t = t[1:]
by_ref = False
captured_variables.add('this' if t == '' else '&'*by_ref + t)
elif sn.token.value(source) == '(.)':
captured_variables.add('this')
else:
for child in sn.children:
if child is not None and child.symbol.id != '->':
gather_captured_variables(child)
gather_captured_variables(self.children[1])
return '[' + ', '.join(sorted(captured_variables)) + '](' + ', '.join(map(lambda c: 'const ' + ('auto &' if c.symbol.id != '=' else 'decltype(' + c.children[1].to_str() + ') &') + c.to_str(),
self.children[0].children if self.children[0].symbol.id == '(' else [self.children[0]])) + '){return ' + self.children[1].to_str() + ';}' # )
elif self.symbol.id in ('..', '.<', '.+', '<.', '<.<'):
s = {'..':'ee', '.<':'el', '.+':'ep', '<.':'le', '<.<':'ll'}[self.symbol.id]
c0 = char_if_len_1(self.children[0])
c1 = char_if_len_1(self.children[1])
b = s[0]
if c0.startswith('(len)'):
b += 'len'
c0 = c0[len('(len)'):]
e = s[1]
if c1.startswith('(len)'):
e += 'len'
c1 = c1[len('(len)'):]
return 'range_' + b + '_'*(len(b) > 1 or len(e) > 1) + e + '(' + c0 + ', ' + c1 + ')'
elif self.symbol.id in ('C', 'С', 'in'):
return 'in(' + char_if_len_1(self.children[0]) + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id in ('!C', '!С', '!in'):
return '!in(' + char_if_len_1(self.children[0]) + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id in ('I/', 'Ц/'):
return 'idiv(' + self.children[0].to_str() + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id in ('I/=', 'Ц/='):
return self.children[0].to_str() + ' = idiv(' + self.children[0].to_str() + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id in ('==', '!=') and self.children[1].token.category == Token.Category.STRING_LITERAL:
return self.children[0].to_str() + ' ' + self.symbol.id + ' ' + char_if_len_1(self.children[1])[:-2]
elif self.symbol.id in ('==', '!=', '=') and self.children[1].token.category == Token.Category.NAME and self.children[1].token_str().isupper(): # `token.category == NAME` -> `token.category == decltype(token.category)::NAME` and `category = NAME` -> `category = decltype(category)::NAME`
return self.children[0].to_str() + ' ' + self.symbol.id + ' decltype(' + self.children[0].to_str() + ')::' + self.children[1].token_str()
elif self.symbol.id in ('==', '!=') and self.children[0].symbol.id == '&' and len(self.children[0].children) == 1 and self.children[1].symbol.id == '&' and len(self.children[1].children) == 1: # `&a == &b` -> `&a == &b`
id_, s = self.scope.find_and_return_scope(self.children[0].children[0].token_str())
if id_ is not None and len(id_.ast_nodes) and type(id_.ast_nodes[0]) == ASTLoop and id_.ast_nodes[0].is_loop_variable_a_ptr and self.children[0].children[0].token_str() == id_.ast_nodes[0].loop_variable: # `L(obj)...&obj != &objChoque` -> `...&*obj != objChoque`
return '&*' + self.children[0].children[0].token_str() + ' ' + self.symbol.id + ' ' + self.children[1].children[0].token_str()
return '&' + self.children[0].children[0].token_str() + ' ' + self.symbol.id + ' &' + self.children[1].children[0].token_str()
elif self.symbol.id == '==' and self.children[0].symbol.id == '==': # replace `a == b == c` with `equal(a, b, c)`
def f(child):
if child.symbol.id == '==':
return f(child.children[0]) + ', ' + child.children[1].to_str()
return child.to_str()
return 'equal(' + f(self) + ')'
elif self.symbol.id == '=' and self.children[0].symbol.id == '[': # ] # replace `a[k] = v` with `a.set(k, v)`
if self.children[0].children[1].token.category == Token.Category.NUMERIC_LITERAL: # replace `a[0] = v` with `_set<0>(a, v)` to support tuples
return '_set<' + self.children[0].children[1].to_str() + '>(' + self.children[0].children[0].to_str() + ', ' + char_if_len_1(self.children[1]) + ')'
else:
c01 = self.children[0].children[1].to_str()
if c01.startswith('(len)'):
return self.children[0].children[0].to_str() + '.set_plus_len(' + c01[len('(len)'):] + ', ' + char_if_len_1(self.children[1]) + ')'
else:
return self.children[0].children[0].to_str() + '.set(' + c01 + ', ' + char_if_len_1(self.children[1]) + ')'
elif self.symbol.id == '[+]=': # replace `a [+]= v` with `a.append(v)`
return self.children[0].to_str() + '.append(' + self.children[1].to_str() + ')'
elif self.symbol.id == '=' and self.children[0].tuple:
#assert(False)
return 'assign_from_tuple(' + ', '.join(c.to_str() for c in self.children[0].children) + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id == '?':
return '[&]{auto R = ' + self.children[0].to_str() + '; return R != nullptr ? *R : ' + self.children[1].to_str() + ';}()'
elif self.symbol.id == '^':
c1 = self.children[1].to_str()
if c1 == '2':
return 'square(' + self.children[0].to_str() + ')'
if c1 == '3':
return 'cube(' + self.children[0].to_str() + ')'
return 'pow(' + self.children[0].to_str() + ', ' + c1 + ')'
elif self.symbol.id == '%':
return 'mod(' + self.children[0].to_str() + ', ' + self.children[1].to_str() + ')'
elif self.symbol.id == '[&]' and self.parent is not None and self.parent.symbol.id in ('==', '!='): # there is a difference in precedence of operators `&` and `==`/`!=` in Python/11l and C++
return '(' + self.children[0].to_str() + ' & ' + self.children[1].to_str() + ')'
elif self.symbol.id == '(concat)' and self.parent is not None and self.parent.symbol.id in ('+', '-', '==', '!='): # `print(‘id = ’id+1)` -> `print((‘id = ’id)+1)`, `a & b != u"1x"` -> `(a & b) != u"1x"` [[[`'-'` is needed because `print(‘id = ’id-1)` also should generate a compile-time error]]]
return '(' + self.children[0].to_str() + ' & ' + self.children[1].to_str() + ')'
else:
def is_integer(t):
return t.category == Token.Category.NUMERIC_LITERAL and ('.' not in t.value(source)) and ('e' not in t.value(source))
if self.symbol.id == '/' and (is_integer(self.children[0].token) or is_integer(self.children[1].token)):
if is_integer(self.children[0].token):
return self.children[0].token_str() + '.0 / ' + self.children[1].to_str()
else:
return self.children[0].to_str() + ' / ' + self.children[1].token_str() + '.0'
if self.symbol.id == '=' and self.children[0].symbol.id == '.' and len(self.children[0].children) == 2: # `:token_node.symbol = :symbol_table[...]` -> `::token_node->symbol = &::symbol_table[...]`
t_node = type_of(self.children[0])
if t_node is not None and type(t_node) in (ASTVariableDeclaration, ASTVariableInitialization) and t_node.is_reference:
c1s = self.children[1].to_str()
return self.children[0].to_str() + ' = ' + '&'*(c1s != 'nullptr') + c1s
return self.children[0].to_str() + ' ' + {'&':'&&', '|':'||', '[&]':'&', '[&]=':'&=', '[|]':'|', '[|]=':'|=', '(concat)':'&', '[+]':'+', '‘’=':'&=', '(+)':'^', '(+)=':'^='}.get(self.symbol.id, self.symbol.id) + ' ' + self.children[1].to_str()
elif len(self.children) == 3:
if self.children[1].token.category == Token.Category.SCOPE_BEGIN:
assert(self.symbol.id == '.')
if self.children[2].symbol.id == '?': # not necessary, just to beautify generated C++
return '[&](auto &&T){auto X = ' + self.children[2].children[0].to_str() + '; return X != nullptr ? *X : ' + self.children[2].children[1].to_str() + ';}(' + self.children[0].to_str() + ')'
                return '[&](auto &&T){return ' + self.children[2].to_str() + ';}(' + self.children[0].to_str() + ')' # why `auto &&T` rather than `auto&& T`: the ampersand relates to the variable, not to the type; for example, in `int &i, j`, `j` is not a reference but just an integer
assert(self.symbol.id in ('I', 'Е', 'if', 'если'))
return self.children[0].to_str() + ' ? ' + self.children[1].to_str() + ' : ' + self.children[2].to_str()
return ''
symbol_table : Dict[str, SymbolBase] = {}
allowed_keywords_in_expressions : List[str] = []
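# symbol() registers a token id in the global symbol_table, creating its
# SymbolBase on first use and raising its left binding power to `bp` on later
# registrations. Alphabetic ids (except operator-like ones such as `I/`, `C`,
# `in`) are also recorded as keywords allowed inside expressions.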
def symbol(id, bp = 0):
try:
s = symbol_table[id]
except KeyError:
s = SymbolBase()
s.id = id
s.lbp = bp
symbol_table[id] = s
        if id[0].isalpha() and id not in ('I/', 'Ц/', 'I/=', 'Ц/=', 'C', 'С', 'in'): # this is a keyword-in-expression
assert(id.isalpha() or id in ('L.last_iteration', 'Ц.последняя_итерация', 'loop.last_iteration', 'цикл.последняя_итерация'))
allowed_keywords_in_expressions.append(id)
else:
s.lbp = max(bp, s.lbp)
return s
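# The AST node classes below each represent one kind of 11l statement.
# walk_expressions()/walk_children() are traversal hooks used by analyses
# such as captured-variable gathering and last-reference detection.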
class ASTNode:
parent : 'ASTNode' = None
access_specifier_public = 1
def walk_expressions(self, f):
pass
def walk_children(self, f):
pass
class ASTNodeWithChildren(ASTNode):
    # children : List['ASTNode'] = [] # Beware: in Python this declares a class-level variable (shared by all ASTNode objects), not a default value of a member variable; unexpected if you come from C++11, where such an initializer is per-instance
children : List['ASTNode']
tokeni : int
#scope : Scope
def __init__(self):
self.children = []
self.tokeni = tokeni
def walk_children(self, f):
for child in self.children:
f(child)
def children_to_str(self, indent, t, place_opening_curly_bracket_on_its_own_line = True, add_at_beginning = ''):
r = ''
if self.tokeni > 0:
ti = self.tokeni - 1
while ti > 0 and tokens[ti].category in (Token.Category.SCOPE_END, Token.Category.STATEMENT_SEPARATOR):
ti -= 1
r = (min(source[tokens[ti].end:tokens[self.tokeni].start].count("\n"), 2) - 1) * "\n"
r += ' ' * (indent*4) + t + (("\n" + ' ' * (indent*4) + "{\n") if place_opening_curly_bracket_on_its_own_line else " {\n") # }
r += add_at_beginning
for c in self.children:
r += c.to_str(indent+1)
return r + ' ' * (indent*4) + "}\n"
def children_to_str_detect_single_stmt(self, indent, r, check_for_if = False):
def has_if(node):
while True:
if not isinstance(node, ASTNodeWithChildren) or len(node.children) != 1:
return False
if type(node) == ASTIf:
return True
node = node.children[0]
if (len(self.children) != 1
or (check_for_if and (type(self.children[0]) == ASTIf or has_if(self.children[0]))) # for correctly handling of dangling-else
            or type(self.children[0]) == ASTLoopRemoveCurrentElementAndContinue): # `L.remove_current_element_and_continue` is translated into 2 statements
return self.children_to_str(indent, r, False)
assert(len(self.children) == 1)
c0str = self.children[0].to_str(indent+1)
if c0str.startswith(' ' * ((indent+1)*4) + "was_break = true;\n"):
return self.children_to_str(indent, r, False)
return ' ' * (indent*4) + r + "\n" + c0str
class ASTNodeWithExpression(ASTNode):
expression : SymbolNode
def set_expression(self, expression):
self.expression = expression
self.expression.ast_parent = self
def walk_expressions(self, f):
f(self.expression)
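# ASTProgram wraps each run of non-declaration top-level statements into a
# `struct CodeBlockN { CodeBlockN() { ... } } code_block_N;` object, so that
# executable top-level 11l code runs during static initialization of the
# generated C++ program.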
class ASTProgram(ASTNodeWithChildren):
beginning_extra = ''
def to_str(self):
r = self.beginning_extra
prev_global_statement = True
code_block_id = 1
for c in self.children:
global_statement = type(c) in (ASTVariableDeclaration, ASTVariableInitialization, ASTTupleInitialization, ASTFunctionDefinition, ASTTypeDefinition, ASTTypeAlias, ASTTypeEnum, ASTMain)
if global_statement != prev_global_statement:
prev_global_statement = global_statement
if not global_statement:
sname = 'CodeBlock' + str(code_block_id)
r += "\n"*(c is not self.children[0]) + 'struct ' + sname + "\n{\n " + sname + "()\n {\n"
else:
r += " }\n} code_block_" + str(code_block_id) + ";\n"
code_block_id += 1
r += c.to_str(2*(not global_statement))
        if not prev_global_statement: # {{
r += " }\n} code_block_" + str(code_block_id) + ";\n"
return r
class ASTExpression(ASTNodeWithExpression):
def to_str(self, indent):
if self.expression.symbol.id == '=' and type(self.parent) == ASTTypeDefinition:
return ' ' * (indent*4) + 'decltype(' + self.expression.children[1].to_str() + ') ' + self.expression.to_str() + ";\n"
return ' ' * (indent*4) + self.expression.to_str() + ";\n"
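# Mapping from 11l built-in type names (including the Russian aliases and the
# `var`/`V` keywords, which become `auto`) to the corresponding C++ types.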
cpp_type_from_11l = {'auto&':'auto&', 'V':'auto', 'П':'auto', 'var':'auto', 'перем':'auto',
'Int':'int', 'Int64':'Int64', 'UInt64':'UInt64', 'UInt32':'uint32_t', 'Float':'double', 'Float32':'float', 'Complex':'Complex', 'String':'String', 'Bool':'bool', 'Byte':'Byte',
'N':'void', 'Н':'void', 'null':'void', 'нуль':'void',
'Array':'Array', 'Tuple':'Tuple', 'Dict':'Dict', 'DefaultDict':'DefaultDict', 'Set':'Set', 'Deque':'Deque'}
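# trans_type() recursively translates an 11l type string into a C++ type.
# Illustrative translations (with int_is_int64 == False):
#   '[Int]'         -> 'Array<int>'
#   '[String=Int]'  -> 'Dict<String, int>'
#   '(Int, Int)'    -> 'ivec2'
#   'Callable[...]' -> 'std::function<...>'
# A nullable type `T?` is translated with the `?` stripped, and a class with
# virtual functions or pointers to its own type becomes `std::unique_ptr<T>`.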
def trans_type(ty, scope, type_token, ast_type_node = None, is_reference = False):
if ty[-1] == '?':
ty = ty[:-1]
t = cpp_type_from_11l.get(ty)
if t is not None:
if t == 'int' and int_is_int64:
return 'Int64'
return t
else:
if '.' in ty: # for `Token.Category category`
return ty.replace('.', '::') # [-TODO: generalize-]
if ty.startswith('('):
assert(ty[-1] == ')')
i = 1
s = i
nesting_level = 0
types = ''
while True:
if ty[i] in ('(', '['):
nesting_level += 1
elif ty[i] in (')', ']'):
if nesting_level == 0:
assert(i == len(ty)-1)
types += trans_type(ty[s:i], scope, type_token, ast_type_node)
break
nesting_level -= 1
elif ty[i] == ',':
if nesting_level == 0: # ignore inner commas
types += trans_type(ty[s:i], scope, type_token, ast_type_node) + ', '
i += 1
while ty[i] == ' ':
i += 1
s = i
continue
i += 1
tuple_types = types.split(', ')
if tuple_types[0] in ('int', 'float', 'double') and tuple_types.count(tuple_types[0]) == len(tuple_types) and len(tuple_types) in range(2, 5):
return {'int':'i', 'float':'', 'double':'d'}[tuple_types[0]] + 'vec' + str(len(tuple_types))
return 'Tuple<' + types + '>'
p = ty.find('[') # ]
if p != -1:
if '=' in ty:
assert(p == 0 and ty[0] == '[' and ty[-1] == ']')
tylist = ty[1:-1].split('=')
assert(len(tylist) == 2)
return 'Dict<' + trans_type(tylist[0], scope, type_token, ast_type_node) + ', ' \
+ trans_type(tylist[1], scope, type_token, ast_type_node) + '>'
if ty.startswith('Callable['): # ]
tylist = ty[p+1:-1].split(', ')
def trans_ty(ty):
tt = trans_type(ty, scope, type_token, ast_type_node)
return tt if tt.startswith('std::unique_ptr<') else 'const ' + tt + ('&'*(ty not in ('Int', 'Float')))
return 'std::function<' + trans_type(tylist[-1], scope, type_token, ast_type_node) + '(' + ', '.join(trans_ty(t) for t in tylist[:-1]) + ')>'
return (trans_type(ty[:p], scope, type_token, ast_type_node) if p != 0 else 'Array') + '<' + trans_type(ty[p+1:-1], scope, type_token, ast_type_node) + '>'
p = ty.find(',')
if p != -1:
return trans_type(ty[:p], scope, type_token, ast_type_node) + ', ' + trans_type(ty[p+1:].lstrip(' '), scope, type_token, ast_type_node)
id = scope.find(ty)
if id is None or len(id.ast_nodes) == 0:
raise Error('type `' + ty + '` is not defined', type_token)
if type(id.ast_nodes[0]) in (ASTTypeAlias, ASTTypeEnum):
return ty
if type(id.ast_nodes[0]) != ASTTypeDefinition:
raise Error('`' + ty + '`: expected a type name', type_token)
if id.ast_nodes[0].has_virtual_functions or id.ast_nodes[0].has_pointers_to_the_same_type:
if ast_type_node is not None and tokens[id.ast_nodes[0].tokeni].start > type_token.start: # if type `ty` was declared after this variable, insert a forward declaration of type `ty`
ast_type_node.forward_declared_types.add(ty)
return ty if is_reference else 'std::unique_ptr<' + ty + '>'# if id.ast_nodes[0].has_virtual_functions else 'SharedPtr<' + ty + '>'
return ty
class ASTVariableDeclaration(ASTNode):
vars : List[str]
type : str
type_args : List[str]
is_const = False
function_pointer = False
is_reference = False
scope : Scope
type_token : Token
is_ptr = False
nullable = False
#is_shared_ptr = False
def __init__(self):
self.scope = scope
def trans_type(self, ty, is_reference = False):
if ty.endswith('&'):
assert(trans_type(ty[:-1], self.scope, self.type_token, self.parent if type(self.parent) == ASTTypeDefinition else None, is_reference) == 'auto')
return 'auto&'
return trans_type(ty, self.scope, self.type_token, self.parent if type(self.parent) == ASTTypeDefinition else None, is_reference)
def to_str(self, indent):
if self.function_pointer:
def trans_type(ty):
tt = self.trans_type(ty)
return tt if tt.startswith('std::unique_ptr<') else 'const ' + tt + ('&'*(ty not in ('Int', 'Float')))
return ' ' * (indent*4) + 'std::function<' + self.trans_type(self.type) + '(' + ', '.join(trans_type(ty) for ty in self.type_args) + ')> ' + ', '.join(self.vars) + ";\n"
return ' ' * (indent*4) + 'const '*self.is_const + self.trans_type(self.type, self.is_reference) + ('<' + ', '.join(self.trans_type(ty) for ty in self.type_args) + '>' if len(self.type_args) else '') + ' ' + '*'*self.is_reference + ', '.join(self.vars) + ";\n"
class ASTVariableInitialization(ASTVariableDeclaration, ASTNodeWithExpression):
def to_str(self, indent):
return super().to_str(indent)[:-2] + ' = ' + self.expression.to_str() + ";\n"
class ASTTupleInitialization(ASTNodeWithExpression):
dest_vars : List[str]
is_const = False
bind_array = False
def __init__(self):
self.dest_vars = []
def to_str(self, indent):
e = self.expression.to_str()
if self.bind_array:
e = 'bind_array<' + str(len(self.dest_vars)) + '>(' + e + ')'
return ' ' * (indent*4) + 'const '*self.is_const + 'auto [' + ', '.join(self.dest_vars) + '] = ' + e + ";\n"
class ASTTupleAssignment(ASTNodeWithExpression):
dest_vars : List[Tuple[str, bool]]
def __init__(self):
self.dest_vars = []
def to_str(self, indent):
r = ''
for i, dv in enumerate(self.dest_vars):
if dv[1]:
r += ' ' * (indent*4) + 'TUPLE_ELEMENT_T(' + str(i) + ', ' + self.expression.to_str() + ') ' + dv[0] + ";\n"
return r + ' ' * (indent*4) + 'assign_from_tuple(' + ', '.join(dv[0] for dv in self.dest_vars) + ', ' + self.expression.to_str() + ')' + ";\n"
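# The with-like statement is translated into an immediately invoked lambda
# `[&](auto &&T){ ... }(expression)`: inside its body, leading-dot member
# accesses resolve to the temporary `T` (see SymbolNode.to_str above).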
class ASTWith(ASTNodeWithChildren, ASTNodeWithExpression):
def to_str(self, indent):
return self.children_to_str(indent, '[&](auto &&T)', False)[:-1] + '(' + self.expression.to_str() + ");\n"
class ASTFunctionDefinition(ASTNodeWithChildren):
function_name : str = ''
function_return_type : str = ''
is_const = False
function_arguments : List[Tuple[str, str, str, str]]# = [] # (arg_name, default_value, type_, qualifier)
first_named_only_argument = None
last_non_default_argument : int
class VirtualCategory(IntEnum):
NO = 0
NEW = 1
OVERRIDE = 2
ABSTRACT = 3
ASSIGN = 4
FINAL = 5
virtual_category = VirtualCategory.NO
scope : Scope
member_initializer_list = ''
def __init__(self, function_arguments = None, function_return_type = ''):
super().__init__()
self.function_arguments = function_arguments or []
self.function_return_type = function_return_type
self.scope = scope
def serialize_to_dict(self, node_type = True):
r = {}
if node_type: # 'node_type' is inserted in dict before 'function_arguments' as this looks more logical in .11l_global_scope
r['node_type'] = 'function'
r['function_arguments'] = ['; '.join(arg) for arg in self.function_arguments]
return r
def deserialize_from_dict(self, d):
self.function_arguments = [arg.split('; ') for arg in d['function_arguments']]
def to_str(self, indent):
is_const = False
if type(self.parent) == ASTTypeDefinition:
            if self.function_name == '': # this is a constructor
s = self.parent.type_name
elif self.function_name == '(destructor)':
s = '~' + self.parent.type_name
elif self.function_name == 'String':
s = 'operator String'
is_const = True
else:
s = ('auto' if self.function_return_type == '' else trans_type(self.function_return_type, self.scope, tokens[self.tokeni])) + ' ' + \
{'()':'operator()', '[&]':'operator&', '<':'operator<', '==':'operator==', '+':'operator+', '-':'operator-', '*':'operator*'}.get(self.function_name, self.function_name)
if self.virtual_category != self.VirtualCategory.NO:
arguments = []
for index, arg in enumerate(self.function_arguments):
if arg[2] == '': # if there is no type specified
                    raise Error('type should be specified for argument `' + arg[0] + '` [all arguments of a virtual function must have explicit types]', tokens[self.tokeni])
else:
arguments.append(
('' if '=' in arg[3] or '&' in arg[3] else 'const ')
+ trans_type(arg[2].rstrip('?'), self.scope, tokens[self.tokeni]) + '* '*0 + ' '
+ ('&' if '&' in arg[3] or '=' not in arg[3] else '')
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
s = 'virtual ' + s + '(' + ', '.join(arguments) + ')' + ('', ' override', ' = 0', ' override', ' final')[self.virtual_category - 1]
return ' ' * (indent*4) + s + ";\n" if self.virtual_category == self.VirtualCategory.ABSTRACT else self.children_to_str(indent, s)
elif type(self.parent) != ASTProgram: # local functions [i.e. functions inside functions] are represented as C++ lambdas
captured_variables = set()
def gather_captured_variables(node):
def f(sn : SymbolNode):
if sn.token.category == Token.Category.NAME:
if sn.token.value(source)[0] == '@':
by_ref = True # sn.parent and sn.parent.children[0] is sn and sn.parent.symbol.id[-1] == '=' and sn.parent.symbol.id not in ('==', '!=')
t = sn.token.value(source)[1:]
if t.startswith('='):
t = t[1:]
by_ref = False
captured_variables.add('this' if t == '' else '&'*by_ref + t)
elif sn.token.value(source) == '(.)':
captured_variables.add('this')
else:
for child in sn.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(gather_captured_variables)
gather_captured_variables(self)
arguments = []
for arg in self.function_arguments:
if arg[2] == '': # if there is no type specified
arguments.append(('auto ' if '=' in arg[3] else 'const auto &') + arg[0] if arg[1] == '' else
('' if '=' in arg[3] else 'const ') + 'decltype(' + arg[1] + ') ' + arg[0] + ' = ' + arg[1])
else:
tid = self.scope.parent.find(arg[2].rstrip('?'))
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and (tid.ast_nodes[0].has_virtual_functions or tid.ast_nodes[0].has_pointers_to_the_same_type):
arguments.append('std::unique_ptr<' + arg[2].rstrip('?') + '> ' + arg[0] + ('' if arg[1] == '' else ' = ' + arg[1]))
else:
arguments.append(('' if '=' in arg[3] else 'const ') + trans_type(arg[2], self.scope, tokens[self.tokeni]) + ' ' + ('&'*((arg[2] not in ('Int', 'Float')) and ('=' not in arg[3]))) + arg[0] + ('' if arg[1] == '' else ' = ' + arg[1]))
return self.children_to_str(indent, ('auto' if self.function_return_type == '' else 'std::function<' + trans_type(self.function_return_type, self.scope, tokens[self.tokeni]) + '(' + ', '.join(trans_type(arg[2], self.scope, tokens[self.tokeni]) for arg in self.function_arguments) + ')>') + ' ' + self.function_name
+ ' = [' + ', '.join(sorted(filter(lambda v: not '&'+v in captured_variables, captured_variables))) + ']('
+ ', '.join(arguments) + ')')[:-1] + ";\n"
else:
s = ('auto' if self.function_return_type == '' else trans_type(self.function_return_type, self.scope, tokens[self.tokeni])) + ' ' + self.function_name
if len(self.function_arguments) == 0:
return self.children_to_str(indent, s + '()' + ' const'*(self.is_const or is_const))
templates = []
arguments = []
for index, arg in enumerate(self.function_arguments):
if arg[2] == '': # if there is no type specified
templates.append('typename T' + str(index + 1) + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = decltype(' + arg[1] + ')'))
arguments.append(('T' + str(index + 1) + ' ' if '=' in arg[3] else 'const '*(arg[3] != '&') + 'T' + str(index + 1) + ' &')
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
else:
tid = self.scope.parent.find(arg[2].rstrip('?'))
if tid is not None and len(tid.ast_nodes) and type(tid.ast_nodes[0]) == ASTTypeDefinition and (tid.ast_nodes[0].has_virtual_functions or tid.ast_nodes[0].has_pointers_to_the_same_type):
arguments.append('std::unique_ptr<' + arg[2].rstrip('?') + '> '
#+ ('' if '=' in arg[3] else 'const ')
+ arg[3] # add `&` if needed
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
elif arg[2].endswith('?'):
arguments.append(trans_type(arg[2].rstrip('?'), self.scope, tokens[self.tokeni]) + '* '
+ ('' if '=' in arg[3] else 'const ')
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
else:
ty = trans_type(arg[2], self.scope, tokens[self.tokeni])
arguments.append(
(('' if arg[3] == '=' else 'const ') + ty + ' ' + '&'*(arg[2] not in ('Int', 'Float') and arg[3] != '=') if arg[3] != '&' else ty + ' &')
+ arg[0] + ('' if arg[1] == '' or index < self.last_non_default_argument else ' = ' + arg[1]))
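            # For constructors, leading `.field = arg` assignments whose
            # right-hand side is a constructor parameter are lifted out of the
            # body into a C++ member initializer list (wrapping unique_ptr
            # parameters in std::move); fields defined in a base class stay in
            # the body.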
if self.member_initializer_list == '' and self.function_name == '' and type(self.parent) == ASTTypeDefinition:
i = 0
while i < len(self.children):
c = self.children[i]
if isinstance(c, ASTExpression) and c.expression.symbol.id == '=' \
and c.expression.children[0].symbol.id == '.' \
and len(c.expression.children[0].children) == 1 \
and c.expression.children[0].children[0].token.category == Token.Category.NAME \
and c.expression.children[1].token.category == Token.Category.NAME \
and c.expression.children[1].token_str() in (arg[0] for arg in self.function_arguments):
if self.scope.parent.ids.get(c.expression.children[0].children[0].token_str()) is None: # this member variable is defined in the base type/class
i += 1
continue
if self.member_initializer_list == '':
self.member_initializer_list = " :\n"
else:
self.member_initializer_list += ",\n"
ec1 = c.expression.children[1].token_str()
for index, arg in enumerate(self.function_arguments):
if arg[0] == ec1:
if arguments[index].startswith('std::unique_ptr<'):
ec1 = 'std::move(' + ec1 + ')'
break
self.member_initializer_list += ' ' * ((indent+1)*4) + c.expression.children[0].children[0].token_str() + '(' + ec1 + ')'
self.children.pop(i)
continue
i += 1
r = self.children_to_str(indent, ('template <' + ', '.join(templates) + '> ')*(len(templates) != 0) + s + '(' + ', '.join(arguments) + ')' + ' const'*(self.is_const or self.function_name in tokenizer.sorted_operators) + self.member_initializer_list)
if isinstance(self.parent, ASTTypeDefinition) and self.function_name in ('+', '-', '*', '/') and self.function_name + '=' not in self.parent.scope.ids:
r += ' ' * (indent*4) + 'template <typename Ty> auto &operator' + self.function_name + "=(const Ty &t)\n"
r += ' ' * (indent*4) + "{\n"
r += ' ' * ((indent+1)*4) + '*this = *this ' + self.function_name + " t;\n"
r += ' ' * ((indent+1)*4) + "return *this;\n"
r += ' ' * (indent*4) + "}\n"
return r
class ASTIf(ASTNodeWithChildren, ASTNodeWithExpression):
else_or_elif : ASTNode = None
likely = 0
def walk_children(self, f):
super().walk_children(f)
if self.else_or_elif is not None:
self.else_or_elif.walk_children(f)
def to_str(self, indent):
if self.likely == 0:
s = 'if (' + self.expression.to_str() + ')'
elif self.likely == 1:
s = 'if (likely(' + self.expression.to_str() + '))'
else:
assert(self.likely == -1)
s = 'if (unlikely(' + self.expression.to_str() + '))'
return self.children_to_str_detect_single_stmt(indent, s, check_for_if = True) + (self.else_or_elif.to_str(indent) if self.else_or_elif is not None else '')
class ASTElseIf(ASTNodeWithChildren, ASTNodeWithExpression):
else_or_elif : ASTNode = None
def walk_children(self, f):
super().walk_children(f)
if self.else_or_elif is not None:
self.else_or_elif.walk_children(f)
def to_str(self, indent):
return self.children_to_str_detect_single_stmt(indent, 'else if (' + self.expression.to_str() + ')', check_for_if = True) + (self.else_or_elif.to_str(indent) if self.else_or_elif is not None else '')
class ASTElse(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str_detect_single_stmt(indent, 'else')
class ASTSwitch(ASTNodeWithExpression):
class Case(ASTNodeWithChildren, ASTNodeWithExpression):
pass
cases : List[Case]
has_string_case = False
def __init__(self):
self.cases = []
def walk_children(self, f):
for case in self.cases:
for child in case.children:
f(child)
def to_str(self, indent):
def is_char(child):
ts = child.token_str()
return child.token.category == Token.Category.STRING_LITERAL and (len(ts) == 3 or (ts[:2] == '"\\' and len(ts) == 4))
def char_if_len_1(child):
if is_char(child):
if child.token_str()[1:-1] == "\\":
return R"u'\\'"
return "u'" + child.token_str()[1:-1].replace("'", R"\'") + "'"
return child.to_str()
        if self.has_string_case: # C++ does not support strings in case labels, so an if-else-if chain is emitted instead
r = ''
for case in self.cases:
if case.expression.token_str() in ('E', 'И', 'else', 'иначе'):
assert(id(case) == id(self.cases[-1]))
r += case.children_to_str_detect_single_stmt(indent, 'else')
else:
r += case.children_to_str_detect_single_stmt(indent, ('if' if id(case) == id(self.cases[0]) else 'else if') + ' (' + self.expression.to_str() + ' == ' + char_if_len_1(case.expression) + ')', check_for_if = True)
return r
r = ' ' * (indent*4) + 'switch (' + self.expression.to_str() + ")\n" + ' ' * (indent*4) + "{\n"
for case in self.cases:
r += ' ' * (indent*4) + ('default' if case.expression.token_str() in ('E', 'И', 'else', 'иначе') else 'case ' + char_if_len_1(case.expression)) + ":\n"
for c in case.children:
r += c.to_str(indent+1)
r += ' ' * ((indent+1)*4) + "break;\n"
return r + ' ' * (indent*4) + "}\n"
class ASTLoopWasNoBreak(ASTNodeWithChildren):
def to_str(self, indent):
return ''
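# ASTLoop translates the various forms of the 11l `L` loop:
#   numeric literal     -> `for (int Lindex = 0; Lindex < N; Lindex++)`
#   iterable expression -> a C++ range-based for
#   no expression       -> `while (true)`
# `L.last_iteration` forces an explicit begin/end iterator loop,
# `L.remove_current_element_and_continue` becomes an in-place compacting loop
# that erases removed elements afterwards, and `L.was_no_break` wraps the loop
# in a block with a `was_break` flag checked after it.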
class ASTLoop(ASTNodeWithChildren, ASTNodeWithExpression):
loop_variable : str = None
is_loop_variable_a_reference = False
copy_loop_variable = False
break_label_needed = -1
has_continue = False
has_L_index = False
has_L_last_iteration = False
has_L_remove_current_element_and_continue = False
is_loop_variable_a_ptr = False
was_no_break_node : ASTLoopWasNoBreak = None
def has_L_was_no_break(self):
return self.was_no_break_node is not None
def to_str(self, indent):
r = ''
if self.has_L_was_no_break():
r = ' ' * (indent*4) + "{bool was_break = false;\n"
loop_auto = False
if self.expression is not None and self.expression.token.category == Token.Category.NUMERIC_LITERAL:
lv = self.loop_variable if self.loop_variable is not None else 'Lindex'
tr = 'for (int ' + lv + ' = 0; ' + lv + ' < ' + self.expression.to_str() + '; ' + lv + '++)'
else:
if self.loop_variable is not None or (self.expression is not None and self.expression.symbol.id in ('..', '.<')):
if self.loop_variable is not None and ',' in self.loop_variable:
tr = 'for (auto ' + '&&'*(not self.copy_loop_variable) + '[' + self.loop_variable + '] : ' + self.expression.to_str() + ')'
else:
loop_auto = True
tr = 'for (auto ' + ('&' if self.is_loop_variable_a_reference else '&&'*(self.is_loop_variable_a_ptr or (not self.copy_loop_variable and not (
self.expression.symbol.id in ('..', '.<') or (self.expression.symbol.id == '(' and self.expression.children[0].symbol.id == '.' and self.expression.children[0].children[0].symbol.id == '(' and self.expression.children[0].children[0].children[0].symbol.id in ('..', '.<'))))) # ))
) + (self.loop_variable if self.loop_variable is not None else '__unused') + ' : ' + self.expression.to_str() + ')'
else:
if self.expression is not None and self.expression.token.category == Token.Category.NAME:
l = tokens[self.tokeni].value(source)
raise Error('please write `' + l + ' ' + self.expression.token_str() + ' != 0` or `'
+ l + ' 1..' + self.expression.token_str() + '` instead of `'
+ l + ' ' + self.expression.token_str() + '`', Token(tokens[self.tokeni].start, self.expression.token.end, Token.Category.NAME))
tr = 'while (' + (self.expression.to_str() if self.expression is not None else 'true') + ')'
rr = self.children_to_str_detect_single_stmt(indent, tr)
if self.has_L_remove_current_element_and_continue:
if not loop_auto:
raise Error('this kind of loop does not support `L.remove_current_element_and_continue`', tokens[self.tokeni])
if self.has_L_last_iteration:
raise Error('`L.last_iteration` can not be used with `L.remove_current_element_and_continue`', tokens[self.tokeni])
if self.has_L_index:
raise Error('`L.index` can not be used with `L.remove_current_element_and_continue`', tokens[self.tokeni]) # {
rr = ' ' * (indent*4) + '{auto &&__range = ' + self.expression.to_str() + ";\n" \
+ ' ' * (indent*4) + "auto __end = __range.end();\n" \
+ ' ' * (indent*4) + "auto __dst = __range.begin();\n" \
+ self.children_to_str(indent, 'for (auto __src = __range.begin(); __src != __end;)', False,
add_at_beginning = ' ' * ((indent+1)*4) + 'auto &&'+ self.loop_variable + " = *__src;\n")[:-indent*4-2] \
+ ' ' * ((indent+1)*4) + "if (__dst != __src)\n" \
+ ' ' * ((indent+1)*4) + " *__dst = std::move(*__src);\n" \
+ ' ' * ((indent+1)*4) + "++__dst;\n" \
+ ' ' * ((indent+1)*4) + "++__src;\n" \
+ ' ' * (indent*4) + "}\n" \
+ ' ' * (indent*4) + "__range.erase(__dst, __end);}\n"
if self.has_L_last_iteration:
if not loop_auto:
raise Error('this kind of loop does not support `L.last_iteration`', tokens[self.tokeni])
rr = ' ' * (indent*4) + '{auto &&__range = ' + self.expression.to_str() \
+ ";\n" + self.children_to_str(indent, 'for (auto __begin = __range.begin(), __end = __range.end(); __begin != __end;)', False,
add_at_beginning = ' ' * ((indent+1)*4) + 'auto &&'+ self.loop_variable + " = *__begin; ++__begin;\n")
elif self.has_L_index and not (self.loop_variable is None and self.expression is not None and self.expression.token.category == Token.Category.NUMERIC_LITERAL):
rr = self.children_to_str(indent, tr, False)
if self.has_L_index and not (self.loop_variable is None and self.expression is not None and self.expression.token.category == Token.Category.NUMERIC_LITERAL):
if self.has_continue:
brace_pos = int(rr[0] == "\n") + indent*4 + len(tr) + 1
rr = rr[:brace_pos+1] + rr[brace_pos:] # {
r += ' ' * (indent*4) + "{int Lindex = 0;\n" + rr[:-indent*4-2] + "} on_continue:\n"*self.has_continue + ' ' * ((indent+1)*4) + "Lindex++;\n" + ' ' * (indent*4) + "}}\n"
else:
r += rr
if self.has_L_last_iteration:
r = r[:-1] + "}\n"
if self.has_L_was_no_break(): # {
r += self.was_no_break_node.children_to_str_detect_single_stmt(indent, 'if (!was_break)') + ' ' * (indent*4) + "}\n"
if self.break_label_needed != -1:
r += ' ' * (indent*4) + 'break_' + ('' if self.break_label_needed == 0 else str(self.break_label_needed)) + ":;\n"
return r
def walk_expressions(self, f):
if self.expression is not None: f(self.expression)
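# A rough sketch of the C++ that ASTLoop.to_str() emits for common loop forms
# (illustrative only; the exact output also depends on the copy/reference/pointer flags):
#   `L 10 {...}`      -> `for (int Lindex = 0; Lindex < 10; Lindex++) {...}`
#   `L(i) 10 {...}`   -> `for (int i = 0; i < 10; i++) {...}`
#   `L(x) arr {...}`  -> `for (auto &&x : arr) {...}`
#   `L(=x) arr {...}` -> `for (auto x : arr) {...}`  (`=` requests a copy of the loop variable)
#   `L i < n {...}`   -> `while (i < n) {...}`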
class ASTContinue(ASTNode):
token : Token
def to_str(self, indent):
n = self.parent
while True:
if type(n) == ASTLoop:
n.has_continue = True
break
n = n.parent
if n is None:
raise Error('loop corresponding to this statement is not found', self.token)
return ' ' * (indent*4) + 'goto on_'*n.has_L_index + "continue;\n"
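# `L.continue` normally emits a plain `continue;`; but when the enclosing loop
# uses `L.index`, it emits `goto on_continue;` instead, jumping to the
# `on_continue:` label that ASTLoop.to_str() places before the `Lindex++;`
# statement, so the index is still incremented.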
break_label_index = -1
class ASTLoopBreak(ASTNode):
loop_variable : str = ''
loop_level = 0
token : Token
def to_str(self, indent):
r = ''
n = self.parent
loop_level = 0
while True:
if type(n) == ASTLoop:
                if (loop_level == self.loop_level) if self.loop_variable == '' else (self.loop_variable == n.loop_variable):
if n.has_L_was_no_break():
r = ' ' * (indent*4) + "was_break = true;\n"
if loop_level > 0:
if n.break_label_needed == -1:
global break_label_index
break_label_index += 1
n.break_label_needed = break_label_index
return r + ' ' * (indent*4) + 'goto break_' + ('' if n.break_label_needed == 0 else str(n.break_label_needed)) + ";\n"
break
loop_level += 1
n = n.parent
if n is None:
raise Error('loop corresponding to this `' + '^'*self.loop_level + 'L' + ('(' + self.loop_variable + ')')*(self.loop_variable != '') + '.break` statement is not found', self.token)
n = self.parent
while True:
if type(n) == ASTSwitch:
n = n.parent
while True:
if type(n) == ASTLoop:
if n.break_label_needed == -1:
break_label_index += 1
n.break_label_needed = break_label_index
return r + ' ' * (indent*4) + 'goto break_' + ('' if n.break_label_needed == 0 else str(n.break_label_needed)) + ";\n"
n = n.parent
if type(n) == ASTLoop:
break
n = n.parent
return r + ' ' * (indent*4) + "break;\n"
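# Rough examples of what ASTLoopBreak.to_str() emits:
#   `L.break` in a plain loop               -> `break;`
#   `^L.break` (break out of an outer loop) -> `goto break_;` (the first label is
#                                              spelled `break_`, then `break_1`, ...)
#   `L.break` inside a `S`/`switch`         -> also a `goto`, because a C++ `break`
#                                              would only leave the switch statement
# The matching `break_...:;` label is appended after the loop by ASTLoop.to_str().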
class ASTLoopRemoveCurrentElementAndContinue(ASTNode):
def to_str(self, indent):
n = self.parent
while True:
if type(n) == ASTLoop:
n.has_L_remove_current_element_and_continue = True
break
n = n.parent
return ' ' * (indent*4) + "++__src;\n" \
+ ' ' * (indent*4) + "continue;\n"
class ASTReturn(ASTNodeWithExpression):
def to_str(self, indent):
expr_str = ''
if self.expression is not None:
if self.expression.is_list and len(self.expression.children) == 0: # `R []`
n = self.parent
while type(n) != ASTFunctionDefinition:
n = n.parent
if n.function_return_type == '':
raise Error('Function returning an empty array should have return type specified', self.expression.left_to_right_token())
if not n.function_return_type.startswith('Array['): # ]
raise Error('Function returning an empty array should have an Array based return type', self.expression.left_to_right_token())
expr_str = trans_type(n.function_return_type, self.expression.scope, self.expression.token) + '()'
elif self.expression.function_call and self.expression.children[0].token_str() == 'Dict' and len(self.expression.children) == 1: # `R Dict()`
n = self.parent
while type(n) != ASTFunctionDefinition:
n = n.parent
if n.function_return_type == '':
raise Error('Function returning an empty dict should have return type specified', self.expression.left_to_right_token())
if not n.function_return_type.startswith('Dict['): # ]
raise Error('Function returning an empty dict should have a Dict based return type', self.expression.left_to_right_token())
expr_str = trans_type(n.function_return_type, self.expression.scope, self.expression.token) + '()'
else:
expr_str = self.expression.to_str()
return ' ' * (indent*4) + 'return' + (' ' + expr_str if expr_str != '' else '') + ";\n"
def walk_expressions(self, f):
if self.expression is not None: f(self.expression)
class ASTException(ASTNodeWithExpression):
def to_str(self, indent):
return ' ' * (indent*4) + 'throw ' + self.expression.to_str() + ";\n"
class ASTExceptionTry(ASTNodeWithChildren):
def to_str(self, indent):
return self.children_to_str(indent, 'try')
class ASTExceptionCatch(ASTNodeWithChildren):
exception_object_type : str
exception_object_name : str = ''
def to_str(self, indent):
if self.exception_object_type == '':
return self.children_to_str(indent, 'catch (...)')
return self.children_to_str(indent, 'catch (const ' + self.exception_object_type + '&' + (' ' + self.exception_object_name if self.exception_object_name != '' else '') + ')')
class ASTTypeDefinition(ASTNodeWithChildren):
base_types : List[str]
type_name : str
constructors : List[ASTFunctionDefinition]
has_virtual_functions = False
has_pointers_to_the_same_type = False
forward_declared_types : Set[str]
serializable = False
def __init__(self, constructors = None):
super().__init__()
self.base_types = []
self.constructors = constructors or []
self.scope = scope # needed for built-in types, e.g. `File(full_fname, ‘w’, encoding' ‘utf-8-sig’).write(...)`
self.forward_declared_types = set()
def serialize_to_dict(self):
return {'node_type': 'type', 'constructors': [c.serialize_to_dict(False) for c in self.constructors]}
def deserialize_from_dict(self, d):
for c_dict in d['constructors']:
c = ASTFunctionDefinition()
c.deserialize_from_dict(c_dict)
self.constructors.append(c)
def find_id_including_base_types(self, id):
tid = self.scope.ids.get(id)
if tid is None:
for base_type_name in self.base_types:
tid = self.scope.parent.find(base_type_name)
assert(tid is not None and len(tid.ast_nodes) == 1)
assert(isinstance(tid.ast_nodes[0], ASTTypeDefinition))
tid = tid.ast_nodes[0].find_id_including_base_types(id)
if tid is not None:
break
return tid
def set_serializable_to_children(self):
self.serializable = True
for c in self.children:
if type(c) == ASTTypeDefinition:
c.set_serializable_to_children()
def to_str(self, indent):
r = ''
if self.tokeni > 0:
ti = self.tokeni - 1
while ti > 0 and tokens[ti].category in (Token.Category.SCOPE_END, Token.Category.STATEMENT_SEPARATOR):
ti -= 1
r = (source[tokens[ti].end:tokens[self.tokeni].start].count("\n")-1) * "\n"
base_types = []
# if self.has_pointers_to_the_same_type:
# base_types += ['SharedObject']
base_types += self.base_types
r += ' ' * (indent*4) \
+ 'class ' + self.type_name + (' : ' + ', '.join(map(lambda c: 'public ' + c, base_types)) if len(base_types) else '') \
+ "\n" + ' ' * (indent*4) + "{\n"
access_specifier_public = -1
for c in self.children:
if c.access_specifier_public != access_specifier_public:
r += ' ' * (indent*4) + ['private', 'public'][c.access_specifier_public] + ":\n"
access_specifier_public = c.access_specifier_public
r += c.to_str(indent+1)
if len(self.forward_declared_types):
r = "\n".join(' ' * (indent*4) + 'class ' + t + ';' for t in self.forward_declared_types) + "\n\n" + r
if self.serializable:
r += "\n" + ' ' * ((indent+1)*4) + "void serialize(ldf::Serializer &s)\n" + ' ' * ((indent+1)*4) + "{\n"
for c in self.children:
if type(c) in (ASTVariableDeclaration, ASTVariableInitialization):
for var in c.vars:
r += ' ' * ((indent+2)*4) + 's(u"' + var + '", ' + (var if var != 's' else 'this->s') + ");\n"
r += ' ' * ((indent+1)*4) + "}\n"
return r + ' ' * (indent*4) + "};\n"
class ASTTypeAlias(ASTNode):
name : str
    defining_type : str # this term is taken from the C++ Standard (‘using identifier attribute-specifier-seq(opt) = defining-type-id ;’)
template_params : List[str]
def __init__(self):
self.template_params = []
def to_str(self, indent):
r = ' ' * (indent*4)
if len(self.template_params):
r += 'template <' + ', '.join(self.template_params) + '> '
return r + 'using ' + self.name + ' = ' + self.defining_type + ";\n"
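# Examples of emitted type aliases (roughly):
#   a plain alias            -> `using Name = <defining type>;`
#   with `[T Ty]` parameters -> `template <typename Ty> using Name = <defining type>;`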
class ASTTypeEnum(ASTNode):
enum_name : str
enumerators : List[str]
def __init__(self):
super().__init__()
self.enumerators = []
def to_str(self, indent):
r = ' ' * (indent*4) + 'enum class ' + self.enum_name + " {\n"
for i in range(len(self.enumerators)):
r += ' ' * ((indent+1)*4) + self.enumerators[i]
if i < len(self.enumerators) - 1:
r += ','
r += "\n"
return r + ' ' * (indent*4) + "};\n"
class ASTMain(ASTNodeWithChildren):
found_reference_to_argv = False
def to_str(self, indent):
if importing_module:
return ''
if not self.found_reference_to_argv:
return self.children_to_str(indent, 'int main()')
return self.children_to_str(indent, 'int MAIN_WITH_ARGV()', add_at_beginning = ' ' * ((indent+1)*4) + "INIT_ARGV();\n\n")
def type_of(sn):
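    # Resolves the AST node behind a `.`-expression such as `obj.method` or `obj.member`:
    # returns the matching ASTVariableDeclaration/ASTVariableInitialization/
    # ASTFunctionDefinition when the type of the left-hand side can be determined
    # statically, or None when it cannot (callers then fall back to generic handling).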
assert(sn.symbol.id == '.' and len(sn.children) == 2)
if sn.children[0].symbol.id == '.':
if len(sn.children[0].children) == 1:
return None
left = type_of(sn.children[0])
if left is None: # `Array[Array[Array[String]]] table... table.last.append([...])`
return None
elif sn.children[0].symbol.id == '[': # ]
return None
elif sn.children[0].symbol.id == '(': # )
if not sn.children[0].function_call:
return None
if sn.children[0].children[0].symbol.id == '.':
return None
tid = sn.scope.find(sn.children[0].children[0].token_str())
if tid is None:
return None
if type(tid.ast_nodes[0]) == ASTFunctionDefinition: # `input().split(...)`
if tid.ast_nodes[0].function_return_type == '':
return None
type_name = tid.ast_nodes[0].function_return_type
tid = tid.ast_nodes[0].scope.find(type_name)
else: # `Converter(habr_html, ohd).to_html(instr, outfilef)`
type_name = sn.children[0].children[0].token_str()
assert(tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition)
tid = tid.ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition)):
if type_name == 'auto&':
return None
raise Error('method `' + sn.children[1].token_str() + '` is not found in type `' + type_name + '`', sn.left_to_right_token())
return tid.ast_nodes[0]
elif sn.children[0].symbol.id == ':':
if len(sn.children[0].children) == 2:
return None # [-TODO-]
assert(len(sn.children[0].children) == 1)
tid = global_scope.find(sn.children[0].children[0].token_str())
if tid is None or len(tid.ast_nodes) != 1:
raise Error('`' + sn.children[0].children[0].token_str() + '` is not found in global scope', sn.left_to_right_token()) # this error occurs without this code: ` or (self.token_str()[0].isupper() and self.token_str() != self.token_str().upper())`
left = tid.ast_nodes[0]
elif sn.children[0].token_str() == '@':
s = sn.scope
while True:
if s.is_function:
if s.is_lambda:
assert(s.node is None)
snp = s.parent.node
else:
snp = s.node.parent
if type(snp) == ASTFunctionDefinition:
if type(snp.parent) == ASTTypeDefinition:
fid = snp.parent.find_id_including_base_types(sn.children[1].token_str())
if fid is None:
raise Error('call of undefined method `' + sn.children[1].token_str() + '`', sn.left_to_right_token())
if len(fid.ast_nodes) > 1:
                            raise Error('method overloading is not supported for now', sn.left_to_right_token())
f_node = fid.ast_nodes[0]
if type(f_node) == ASTFunctionDefinition:
return f_node
break
s = s.parent
assert(s)
return None
elif sn.children[0].token_str().startswith('@'):
return None # [-TODO-]
else:
if sn.children[0].token.category == Token.Category.STRING_LITERAL:
tid = builtins_scope.ids.get('String')
tid = tid.ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTFunctionDefinition):
raise Error('method `' + sn.children[1].token_str() + '` is not found in type `String`', sn.left_to_right_token())
return tid.ast_nodes[0]
tid, s = sn.scope.find_and_return_scope(sn.children[0].token_str())
if tid is None:
raise Error('identifier is not found', sn.children[0].token)
if len(tid.ast_nodes) != 1: # for `F f(active_window, s)... R s.find(‘.’) ? s.len`
if tid.type != '' and s.is_function: # for `F nud(ASTNode self)... self.symbol.nud_bp`
if '[' in tid.type: # ] # for `F decompress(Array[Int] &compressed)`
return None
tid = s.find(tid.type)
assert(tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition)
tid = tid.ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition, ASTExpression)): # `ASTExpression` is needed to fix an error ‘identifier `disInter` is not found in `r`’ in '9.yopyra.py' (when there is no `disInter : float`)
raise Error('identifier `' + sn.children[1].token_str() + '` is not found in `' + sn.children[0].token_str() + '`', sn.children[1].token)
if isinstance(tid.ast_nodes[0], ASTExpression):
return None
return tid.ast_nodes[0]
return None
left = tid.ast_nodes[0]
if type(left) == ASTLoop:
return None
if type(left) in (ASTTypeDefinition, ASTTupleInitialization, ASTTupleAssignment):
return None # [-TODO-]
if type(left) not in (ASTVariableDeclaration, ASTVariableInitialization):
raise Error('left type is `' + str(type(left)) + '`', sn.left_to_right_token())
if left.type in ('V', 'П', 'var', 'перем', 'V?', 'П?', 'var?', 'перем?', 'V&', 'П&', 'var&', 'перем&'): # for `V selection_strings = ... selection_strings.map(...)`
assert(type(left) == ASTVariableInitialization)
if left.expression.function_call and left.expression.children[0].token.category == Token.Category.NAME and left.expression.children[0].token_str()[0].isupper(): # for `V n = Node()`
tid = sn.scope.find(left.expression.children[0].token_str())
assert(tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition)
tid = tid.ast_nodes[0].find_id_including_base_types(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition, ASTExpression)): # `ASTExpression` is needed to fix an error ‘identifier `Vhor` is not found in type `Scene`’ in '9.yopyra.py' (when `Vhor = .look.pVectorial(.upCamara)`, i.e. when there is no `Vhor : Vector`)
raise Error('identifier `' + sn.children[1].token_str() + '` is not found in type `' + left.expression.children[0].token_str() + '`', sn.left_to_right_token()) # error message example: method `remove` is not found in type `Array`
if isinstance(tid.ast_nodes[0], ASTExpression):
return None
return tid.ast_nodes[0]
if ((left.expression.function_call and left.expression.children[0].symbol.id == '.' and len(left.expression.children[0].children) == 2 and left.expression.children[0].children[1].token_str() in ('map', 'filter')) # for `V a = ....map(Int); a.sort(reverse' 1B)`
or left.expression.is_list): # for `V employees = [...]; employees.sort(key' e -> e.name)`
tid = builtins_scope.find('Array').ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition)):
raise Error('member `' + sn.children[1].token_str() + '` is not found in type `Array`', sn.left_to_right_token())
return tid.ast_nodes[0]
return None
# if len(left.type_args): # `Array[String] ending_tags... ending_tags.append(‘</blockquote>’)`
# return None # [-TODO-]
if left.type == 'T':
return None
tid = left.scope.find(left.type.rstrip('?'))
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition):
if left.type.startswith('('): # )
return None
raise Error('type `' + left.type + '` is not found', sn.left_to_right_token())
tid = tid.ast_nodes[0].scope.ids.get(sn.children[1].token_str())
if not (tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) in (ASTVariableDeclaration, ASTVariableInitialization, ASTFunctionDefinition)):
raise Error('member `' + sn.children[1].token_str() + '` is not found in type `' + left.type.rstrip('?') + '`', sn.left_to_right_token())
return tid.ast_nodes[0]
# List of C++ keywords is taken from here[https://en.cppreference.com/w/cpp/keyword] (with a few extra reserved names appended: `main`, the legacy `pascal` calling-convention keyword, and `j0`/`j1`/`jn`/`y0`/`y1`/`yn`, which clash with the POSIX Bessel functions in <math.h>)
cpp_keywords = {'alignas', 'alignof', 'and', 'and_eq', 'asm', 'auto', 'bitand', 'bitor', 'bool', 'break', 'case', 'catch', 'char', 'char8_t', 'char16_t', 'char32_t', 'class', 'compl', 'concept', 'const',
'consteval', 'constexpr', 'constinit', 'const_cast', 'continue', 'co_await', 'co_return', 'co_yield', 'decltype', 'default', 'delete', 'do', 'double', 'dynamic_cast', 'else', 'enum', 'explicit',
'export', 'extern', 'false', 'float', 'for', 'friend', 'goto', 'if', 'inline', 'int', 'long', 'mutable', 'namespace', 'new', 'noexcept', 'not', 'not_eq', 'nullptr', 'operator', 'or', 'or_eq',
'private', 'protected', 'public', 'reflexpr', 'register', 'reinterpret_cast', 'requires', 'return', 'short', 'signed', 'sizeof', 'static', 'static_assert', 'static_cast', 'struct', 'switch',
'template', 'this', 'thread_local', 'throw', 'true', 'try', 'typedef', 'typeid', 'typename', 'union', 'unsigned', 'using', 'virtual', 'void', 'volatile', 'wchar_t', 'while', 'xor', 'xor_eq',
'j0', 'j1', 'jn', 'y0', 'y1', 'yn', 'pascal', 'main'}
def next_token(): # why ‘next_token’: >[https://youtu.be/Nlqv6NtBXcA?t=1203]:‘we'll have an advance method which will fetch the next token’
global token, tokeni, tokensn
if token is None and tokeni != -1:
raise Error('no more tokens', Token(len(source), len(source), Token.Category.STATEMENT_SEPARATOR))
tokeni += 1
if tokeni == len(tokens):
token = None
tokensn = None
else:
token = tokens[tokeni]
tokensn = SymbolNode(token)
if token.category != Token.Category.KEYWORD or token.value(source) in allowed_keywords_in_expressions:
key : str
if token.category in (Token.Category.NUMERIC_LITERAL, Token.Category.STRING_LITERAL):
key = '(literal)'
elif token.category == Token.Category.NAME:
key = '(name)'
if token.value(source)[0] == '@':
if token.value(source)[1:2] == '=':
if token.value(source)[2:] in cpp_keywords:
tokensn.token_str_override = '@=_' + token.value(source)[2:] + '_'
elif token.value(source)[1:] in cpp_keywords:
tokensn.token_str_override = '@_' + token.value(source)[1:] + '_'
elif token.value(source) in cpp_keywords:
tokensn.token_str_override = '_' + token.value(source) + '_'
elif token.category == Token.Category.CONSTANT:
key = '(constant)'
elif token.category == Token.Category.STRING_CONCATENATOR:
key = '(concat)'
elif token.category == Token.Category.SCOPE_BEGIN:
key = '{' # }
elif token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.SCOPE_END):
key = ';'
else:
key = token.value(source)
tokensn.symbol = symbol_table[key]
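# Identifiers that collide with C++ keywords are renamed via token_str_override,
# e.g. (roughly):
#   `class`   -> `_class_`
#   `@new`    -> `@_new_`    (instance member access)
#   `@=short` -> `@=_short_`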
def advance(value):
if token.value(source) != value:
raise Error('expected `' + value + '`', token)
next_token()
def peek_token(how_much = 1):
return tokens[tokeni+how_much] if tokeni+how_much < len(tokens) else Token()
# This implementation is based on [http://svn.effbot.org/public/stuff/sandbox/topdown/tdop-4.py]
def expression(rbp = 0):
def check_tokensn():
if tokensn is None:
raise Error('unexpected end of source', Token(len(source), len(source), Token.Category.STATEMENT_SEPARATOR))
if tokensn.symbol is None:
raise Error('no symbol corresponding to token `' + token.value(source) + '` (belonging to ' + str(token.category) +') found while parsing expression', token)
check_tokensn()
t = tokensn
next_token()
check_tokensn()
left = t.symbol.nud(t)
while rbp < tokensn.symbol.lbp:
t = tokensn
next_token()
left = t.symbol.led(t, left)
check_tokensn()
return left
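# Standard top-down operator-precedence (Pratt) parsing: in `a + b * c`, `+` calls
# expression(110) for its right operand; `*` (lbp == 120 > 110) then binds tighter,
# so the result is `a + (b * c)`. In `a * b + c` the inner loop stops at `+`
# (lbp == 110 < 120) and yields `(a * b) + c`.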
def infix(id, bp):
def led(self, left):
self.append_child(left)
self.append_child(expression(self.symbol.led_bp))
return self
symbol(id, bp).set_led_bp(bp, led)
def infix_r(id, bp):
def led(self, left):
self.append_child(left)
self.append_child(expression(self.symbol.led_bp - 1))
return self
symbol(id, bp).set_led_bp(bp, led)
def postfix(id, bp):
def led(self, left):
self.postfix = True
self.append_child(left)
return self
symbol(id, bp).led = led
def prefix(id, bp):
def nud(self):
self.append_child(expression(self.symbol.nud_bp))
return self
symbol(id).set_nud_bp(bp, nud)
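# Note on associativity: infix_r() parses the right operand with `led_bp - 1`,
# which makes the operator right-associative — e.g. `a ^ b ^ c` parses as
# `a ^ (b ^ c)` and `a = b = c` as `a = (b = c)` — while infix() operators
# stay left-associative: `a - b - c` == `(a - b) - c`.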
infix('[+]', 20); #infix('->', 15) # for `(0 .< h).map(_ -> [0] * @w [+] [1])`
infix('?', 25) # based on C# operator precedence ([http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-334.pdf])
infix('|', 30); infix('&', 40)
infix('==', 50); infix('!=', 50); infix('C', 50); infix('С', 50); infix('in', 50); infix('!C', 50); infix('!С', 50); infix('!in', 50)
#infix('(concat)', 52) # `instr[prevci - 1 .< prevci]‘’prevc C ("/\\", "\\/")` = `(instr[prevci - 1 .< prevci]‘’prevc) C ("/\\", "\\/")`
infix('..', 55); infix('.<', 55); infix('.+', 55); infix('<.', 55); infix('<.<', 55) # ch C ‘0’..‘9’ = ch C (‘0’..‘9’)
#postfix('..', 55)
infix('<', 60); infix('<=', 60)
infix('>', 60); infix('>=', 60)
infix('[|]', 70); infix('(+)', 80); infix('[&]', 90)
infix('<<', 100); infix('>>', 100)
infix('+', 110); infix('-', 110)
infix('(concat)', 115) # `print(‘id = ’id+1)` = `print((‘id = ’id)+1)`, `str(c) + str(1-c)*charstack[0]` -> `String(c)‘’String(1 - c) * charstack[0]` = `String(c)‘’(String(1 - c) * charstack[0])`
infix('*', 120); infix('/', 120); infix('I/', 120); infix('Ц/', 120)
infix('%', 120)
prefix('-', 130); prefix('+', 130); prefix('!', 130); prefix('(-)', 130); prefix('--', 130); prefix('++', 130); prefix('&', 130)
infix_r('^', 140)
symbol('.', 150); symbol(':', 150); symbol('[', 150); symbol('(', 150); symbol(')'); symbol(']'); postfix('--', 150); postfix('++', 150)
prefix('.', 150); prefix(':', 150)
infix_r('=', 10); infix_r('+=', 10); infix_r('-=', 10); infix_r('*=', 10); infix_r('/=', 10); infix_r('I/=', 10); infix_r('Ц/=', 10); infix_r('%=', 10); infix_r('>>=', 10); infix_r('<<=', 10); infix_r('^=', 10)
infix_r('[+]=', 10); infix_r('[&]=', 10); infix_r('[|]=', 10); infix_r('(+)=', 10); infix_r('‘’=', 10)
symbol('(name)').nud = lambda self: self
symbol('(literal)').nud = lambda self: self
symbol('(constant)').nud = lambda self: self
symbol('(.)').nud = lambda self: self
symbol('L.last_iteration').nud = lambda self: self
symbol('Ц.последняя_итерация').nud = lambda self: self
symbol('loop.last_iteration').nud = lambda self: self
symbol('цикл.последняя_итерация').nud = lambda self: self
symbol(';')
symbol(',')
symbol("',")
def led(self, left):
self.append_child(left)
global scope
prev_scope = scope
scope = Scope([])
scope.parent = prev_scope
scope.is_lambda = True
tokensn.scope = scope
for c in left.children if left.symbol.id == '(' else [left]: # )
if not c.token_str()[0].isupper(): # for `((ASTNode, ASTNode) -> ASTNode) led` and `[String = ((Float, Float) -> Float)] b` (fix error 'redefinition of already defined identifier is not allowed')
scope.add_name(c.token_str(), None)
self.append_child(expression(self.symbol.led_bp))
scope = prev_scope
return self
symbol('->', 15).set_led_bp(15, led)
def led(self, left):
self.append_child(left) # [(
if token.value(source) not in (']', ')') and token.category != Token.Category.SCOPE_BEGIN:
self.append_child(expression(self.symbol.led_bp))
return self
symbol('..', 55).set_led_bp(55, led)
def led(self, left):
if token.category == Token.Category.SCOPE_BEGIN:
self.append_child(left)
self.append_child(tokensn)
        if token.value(source) == '{': # } # if the current token is a `{` then this is a "with"-expression, not a "with"-statement
next_token()
self.append_child(expression())
advance('}')
return self
if token.category != Token.Category.NAME:
raise Error('expected an attribute name', token)
self.append_child(left)
self.append_child(tokensn)
next_token()
return self
symbol('.').led = led
class Module:
scope : Scope
def __init__(self, scope):
self.scope = scope
modules : Dict[str, Module] = {}
builtin_modules : Dict[str, Module] = {}
def find_module(name):
if name in modules:
return modules[name]
return builtin_modules[name]
def led(self, left):
if token.category != Token.Category.NAME and token.value(source) != '(' and token.category != Token.Category.STRING_LITERAL: # )
raise Error('expected an identifier name or string literal', token)
# Process module [transpile it if necessary and load it]
global scope
module_name = left.to_str()
if module_name not in modules and module_name not in builtin_modules:
module_file_name = os.path.join(os.path.dirname(file_name), module_name.replace('::', '/')).replace('\\', '/') # `os.path.join()` is needed for case when `os.path.dirname(file_name)` is empty string, `replace('\\', '/')` is needed for passing 'tests/parser/errors.txt'
try:
modulefstat = os.stat(module_file_name + '.11l')
except FileNotFoundError:
raise Error('can not import module `' + module_name + "`: file '" + module_file_name + ".11l' is not found", left.token)
hpp_file_mtime = 0
if os.path.isfile(module_file_name + '.hpp'):
hpp_file_mtime = os.stat(module_file_name + '.hpp').st_mtime
if hpp_file_mtime == 0 \
or modulefstat.st_mtime > hpp_file_mtime \
or os.stat(__file__).st_mtime > hpp_file_mtime \
or os.stat(os.path.dirname(__file__) + '/tokenizer.py').st_mtime > hpp_file_mtime \
or not os.path.isfile(module_file_name + '.11l_global_scope'):
module_source = open(module_file_name + '.11l', encoding = 'utf-8-sig').read()
prev_scope = scope
s = parse_and_to_str(tokenizer.tokenize(module_source), module_source, module_file_name + '.11l', True)
open(module_file_name + '.hpp', 'w', encoding = 'utf-8-sig', newline = "\n").write(s) # utf-8-sig is for MSVC (fix of error C2015: too many characters in constant [`u'‘'`]) # ’
modules[module_name] = Module(scope)
assert(scope.is_function == False) # serializing `is_function` member variable is not necessary because it is always equal to `False`
open(module_file_name + '.11l_global_scope', 'w', encoding = 'utf-8', newline = "\n").write(eldf.to_eldf(scope.serialize_to_dict()))
scope = prev_scope
else:
module_scope = Scope(None)
module_scope.deserialize_from_dict(eldf.parse(open(module_file_name + '.11l_global_scope', encoding = 'utf-8-sig').read()))
modules[module_name] = Module(module_scope)
self.append_child(left)
if token.category == Token.Category.STRING_LITERAL: # for `re:‘pattern’`
self.append_child(SymbolNode(Token(token.start, token.start, Token.Category.NAME), symbol = symbol_table['(name)']))
sn = SymbolNode(Token(token.start, token.start, Token.Category.DELIMITER))
sn.symbol = symbol_table['('] # )
sn.function_call = True
sn.append_child(self)
sn.children.append(None)
sn.append_child(tokensn)
next_token()
return sn
elif token.value(source) != '(': # )
self.append_child(tokensn)
next_token()
else: # for `os:(...)` and `time:(...)`
self.append_child(SymbolNode(Token(token.start, token.start, Token.Category.NAME), symbol = symbol_table['(name)']))
return self
symbol(':').led = led
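# Module caching in the `:` handler above (roughly): the imported module is
# retranspiled only when `<module>.hpp` is missing or older than the `.11l`
# source, this script, or tokenizer.py, or when the serialized
# `<module>.11l_global_scope` file is absent; otherwise the saved global scope
# is deserialized instead of reparsing the module source.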
def led(self, left):
self.function_call = True
self.append_child(left) # (
if token.value(source) != ')':
while True:
if token.category != Token.Category.STRING_LITERAL and token.value(source)[-1] == "'":
self.append_child(tokensn)
next_token()
self.append_child(expression())
else:
self.children.append(None)
self.append_child(expression())
if token.value(source) != ',':
break
advance(',') # (
advance(')')
return self
symbol('(').led = led
def nud(self):
comma = False # ((
if token.value(source) != ')':
while True:
if token.value(source) == ')':
break
self.append_child(expression())
if token.value(source) != ',':
break
comma = True
advance(',')
advance(')')
if len(self.children) == 0 or comma:
self.tuple = True
return self
symbol('(').nud = nud # )
def led(self, left):
self.append_child(left)
    if token.value(source)[0].isupper() or (token.value(source) == '(' and source[token.start+1].isupper()): # ) # a type name must start with an upper case letter
self.is_type = True
while True:
self.append_child(expression())
if token.value(source) != ',':
break
advance(',')
else:
self.append_child(expression()) # [
advance(']')
return self
symbol('[').led = led
def nud(self):
i = 1 # [[
if token.value(source) != ']': # for `R []`
if token.value(source) == '(': # for `V celltable = [(1, 2) = 1, (1, 3) = 1, (0, 3) = 1]`
while peek_token(i).value(source) != ')':
i += 1
while peek_token(i).value(source) not in ('=', ',', ']'): # for `V cat_to_class_python = [python_to_11l:tokenizer:Token.Category.NAME = ‘identifier’, ...]`
i += 1
if peek_token(i).value(source) == '=':
self.is_dict = True
while True: # [
self.append_child(expression())
if token.value(source) != ',':
break
advance(',')
advance(']')
else:
self.is_list = True
if token.value(source) != ']':
while True: # [[
# if token.value(source) == ']':
# break
self.append_child(expression())
if token.value(source) != ',':
break
advance(',')
advance(']')
return self
symbol('[').nud = nud # ]
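# Rough classification of a leading `[` by the nud above:
#   `[1, 2, 3]`          -> is_list
#   `[‘a’ = 1, ‘b’ = 2]` -> is_dict (an `=` is found before the first `,` or `]`)
#   `[(1, 2) = 1, ...]`  -> is_dict (the parenthesized key is skipped first)
# (`Array[Int]`-style subscripts after a name are handled by symbol('[').led instead.)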
def advance_scope_begin():
if token.category != Token.Category.SCOPE_BEGIN:
raise Error('expected a new scope (indented block or opening curly bracket)', token)
next_token()
def nud(self):
self.append_child(expression())
advance_scope_begin()
while token.category != Token.Category.SCOPE_END:
if token.value(source) in ('E', 'И', 'else', 'иначе'):
self.append_child(tokensn)
next_token()
if token.category == Token.Category.SCOPE_BEGIN:
next_token()
self.append_child(expression())
if token.category != Token.Category.SCOPE_END:
raise Error('expected end of scope (dedented block or closing curly bracket)', token)
next_token()
else:
self.append_child(expression())
else:
self.append_child(expression())
advance_scope_begin()
self.append_child(expression())
if token.category != Token.Category.SCOPE_END:
raise Error('expected end of scope (dedented block or closing curly bracket)', token)
next_token()
if token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
next_token()
return self
symbol('S').nud = nud
symbol('В').nud = nud
symbol('switch').nud = nud
symbol('выбрать').nud = nud
def nud(self):
self.append_child(expression())
advance_scope_begin()
self.append_child(expression())
if token.category != Token.Category.SCOPE_END:
raise Error('expected end of scope (dedented block or closing curly bracket)', token)
next_token()
if not token.value(source) in ('E', 'И', 'else', 'иначе'):
raise Error('expected else block', token)
next_token()
self.append_child(expression())
return self
symbol('I').nud = nud
symbol('Е').nud = nud
symbol('if').nud = nud
symbol('если').nud = nud
symbol('{') # }
def parse_internal(this_node):
global token, scope
def new_scope(node, func_args = None, call_advance_scope_begin = True):
if call_advance_scope_begin:
advance_scope_begin()
global scope
prev_scope = scope
scope = Scope(func_args)
scope.parent = prev_scope
scope.init_ids_type_node()
scope.node = node
        tokensn.scope = scope # this line could be removed if next_token() were not called in advance_scope_begin()
node.scope = scope
parse_internal(node)
scope = prev_scope
if token is not None:
tokensn.scope = scope
def expected_name(what_name):
next_token()
if token.category != Token.Category.NAME:
raise Error('expected ' + what_name, token)
token_value = tokensn.token_str()
next_token()
return token_value
def is_tuple_assignment():
if token.value(source) == '(':
ti = 1
while peek_token(ti).value(source) != ')':
if peek_token(ti).value(source) in ('[', '.'): # ] # `(u[i], u[j]) = (u[j], u[i])`, `(.x, .y, .z) = (vx, vy, vz)`
return False
ti += 1
return peek_token(ti + 1).value(source) == '='
return False
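    # Examples:
    #   `(a, b) = (b, a)`    -> True  (tuple assignment)
    #   `(a, V b) = f()`     -> True
    #   `(u[i], u[j]) = ...` -> False (indexing or member access on the left side
    #                                  is handled as an ordinary expression)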
access_specifier_private = False
while token is not None:
if token.value(source) == ':' and peek_token().value(source) in ('start', 'старт') and peek_token(2).value(source) == ':':
node = ASTMain()
next_token()
next_token()
advance(':')
assert(token.category == Token.Category.STATEMENT_SEPARATOR)
next_token()
new_scope(node, [], False)
elif token.value(source) == '.' and type(this_node) == ASTTypeDefinition:
access_specifier_private = True
next_token()
continue
elif token.category == Token.Category.KEYWORD:
if token.value(source).startswith(('F', 'Ф', 'fn', 'фн')):
node = ASTFunctionDefinition()
if '.virtual.' in token.value(source) or \
'.виртуал.' in token.value(source):
subkw = token.value(source)[token.value(source).rfind('.')+1:]
if subkw in ('new', 'новая' ): node.virtual_category = node.VirtualCategory.NEW
elif subkw in ('override', 'переопр' ): node.virtual_category = node.VirtualCategory.OVERRIDE
elif subkw in ('abstract', 'абстракт'): node.virtual_category = node.VirtualCategory.ABSTRACT
elif subkw in ('assign', 'опред' ): node.virtual_category = node.VirtualCategory.ASSIGN
elif subkw in ('final', 'финал' ): node.virtual_category = node.VirtualCategory.FINAL
elif token.value(source) in ('F.destructor', 'Ф.деструктор', 'fn.destructor', 'фн.деструктор'):
if type(this_node) != ASTTypeDefinition:
raise Error('destructor declaration allowed only inside types', token)
                    node.function_name = '(destructor)' # cannot use `~` here because `~` can be an operator overload
if '.const' in token.value(source) or \
'.конст' in token.value(source):
node.is_const = True
next_token()
if node.function_name != '(destructor)':
if token.category == Token.Category.NAME:
node.function_name = tokensn.token_str()
next_token()
elif token.value(source) == '(': # this is constructor [`F () {...}` or `F (...) {...}`] or operator() [`F ()(...) {...}`]
if peek_token().value(source) == ')' and peek_token(2).value(source) == '(': # ) # this is operator()
next_token()
next_token()
node.function_name = '()'
else:
node.function_name = ''
if type(this_node) == ASTTypeDefinition:
this_node.constructors.append(node)
elif token.category == Token.Category.OPERATOR:
node.function_name = token.value(source)
next_token()
else:
raise Error('incorrect function name', token)
if token.value(source) != '(': # )
raise Error('expected `(` after function name', token) # )(
next_token()
was_default_argument = False
prev_type_name = ''
while token.value(source) != ')':
if token.value(source) == "',":
assert(node.first_named_only_argument is None)
node.first_named_only_argument = len(node.function_arguments)
next_token()
continue
type_ = '' # (
if token.value(source)[0].isupper() and peek_token().value(source) not in (',', ')'): # this is a type name
type_ = token.value(source)
next_token()
if token.value(source) == '[': # ]
nesting_level = 0
while True:
type_ += token.value(source)
if token.value(source) == '[':
next_token()
nesting_level += 1
elif token.value(source) == ']':
next_token()
nesting_level -= 1
if nesting_level == 0:
break
elif token.value(source) == ',':
type_ += ' '
next_token()
elif token.value(source) == '=':
next_token()
else:
if token.category != Token.Category.NAME:
raise Error('expected subtype name', token)
next_token()
if token.value(source) == '(':
type_ += '('
next_token()
while token.value(source) != ')':
type_ += token.value(source)
if token.value(source) == ',':
type_ += ' '
next_token()
next_token()
type_ += ')'
if token.value(source) == '?':
type_ += '?'
next_token()
if token.value(source) == '(': # )
type_ = expression().to_type_str()
if type_ == '':
type_ = prev_type_name
qualifier = ''
if token.value(source) == '=':
qualifier = '='
next_token()
elif token.value(source) == '&':
qualifier = '&'
next_token()
if token.category != Token.Category.NAME:
raise Error('expected function\'s argument name', token)
func_arg_name = tokensn.token_str()
next_token()
if token.value(source) == '=':
next_token()
default = expression().to_str()
was_default_argument = True
else:
if was_default_argument and node.first_named_only_argument is None:
raise Error('non-default argument follows default argument', tokens[tokeni-1])
default = ''
node.function_arguments.append((func_arg_name, default, type_, qualifier)) # ((
if token.value(source) not in ',;)':
raise Error('expected `)`, `;` or `,` in function\'s arguments list', token)
if token.value(source) == ',':
next_token()
prev_type_name = type_
elif token.value(source) == ';':
next_token()
prev_type_name = ''
node.last_non_default_argument = len(node.function_arguments) - 1
while node.last_non_default_argument >= 0 and node.function_arguments[node.last_non_default_argument][1] != '':
node.last_non_default_argument -= 1
if node.function_name not in cpp_type_from_11l: # there was an error in line `String sitem` because of `F String()`
scope.add_function(node.function_name, node)
next_token()
if token.value(source) == '->':
next_token()
if token.value(source) in ('N', 'Н', 'null', 'нуль'):
node.function_return_type = token.value(source)
next_token()
elif token.value(source) == '&':
node.function_return_type = 'auto&'
next_token()
else:
node.function_return_type = expression().to_type_str()
if node.virtual_category != node.VirtualCategory.ABSTRACT:
new_scope(node, map(lambda arg: (arg[0], arg[2]), node.function_arguments))
else:
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('T', 'Т', 'type', 'тип', 'T.serializable', 'Т.сериализуемый', 'type.serializable', 'тип.сериализуемый'):
serializable = token.value(source) in ('T.serializable', 'Т.сериализуемый', 'type.serializable', 'тип.сериализуемый')
node = ASTTypeDefinition()
node.type_name = expected_name('type name')
if token.value(source) in ('[', '='): # ] # this is a type alias
n = ASTTypeAlias()
n.name = node.type_name
node = n
scope.add_name(node.name, node)
prev_scope = scope
scope = Scope(None)
scope.parent = prev_scope
if token.value(source) == '[':
next_token()
while True:
if token.category == Token.Category.KEYWORD and token.value(source) in ('T', 'Т', 'type', 'тип'):
next_token()
assert(token.category == Token.Category.NAME)
scope.add_name(token.value(source), ASTTypeDefinition())
node.template_params.append('typename ' + token.value(source))
else:
expr = expression()
type_name = trans_type(expr.to_type_str(), scope, expr.left_to_right_token())
assert(token.category == Token.Category.NAME)
scope.add_name(token.value(source), ASTTypeDefinition()) # :(hack):
node.template_params.append(type_name + ' ' + token.value(source))
next_token()
if token.value(source) == ']':
next_token()
break
advance(',')
advance('=')
expr = expression()
node.defining_type = trans_type(expr.to_type_str(), scope, expr.left_to_right_token())
scope = prev_scope
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
else:
scope.add_name(node.type_name, node)
if token.value(source) == '(':
while True:
node.base_types.append(expected_name('base type name'))
if token.value(source) != ',':
break
if token.value(source) != ')': # (
raise Error('expected `)`', token)
next_token()
new_scope(node)
if serializable:
node.set_serializable_to_children()
for child in node.children:
if type(child) == ASTFunctionDefinition and child.virtual_category != child.VirtualCategory.NO:
node.has_virtual_functions = True
break
elif token.value(source) in ('T.enum', 'Т.перечисл', 'type.enum', 'тип.перечисл'):
node = ASTTypeEnum()
node.enum_name = expected_name('enum name')
scope.add_name(node.enum_name, node)
advance_scope_begin()
while True:
if token.category != Token.Category.NAME:
raise Error('expected an enumerator name', token)
enumerator = token.value(source)
if not enumerator.isupper():
raise Error('enumerators must be uppercase', token)
next_token()
if token.value(source) == '=':
next_token()
enumerator += ' = ' + expression().to_str()
node.enumerators.append(enumerator)
if token.category == Token.Category.SCOPE_END:
next_token()
break
assert(token.category == Token.Category.STATEMENT_SEPARATOR)
next_token()
elif token.value(source).startswith(('I', 'Е', 'if', 'если')):
node = ASTIf()
if '.' in token.value(source):
subkw = token.value(source)[token.value(source).find('.')+1:]
if subkw in ('likely', 'часто'):
node.likely = 1
else:
assert(subkw in ('unlikely', 'редко'))
node.likely = -1
next_token()
node.set_expression(expression())
new_scope(node)
n = node
while token is not None and token.value(source) in ('E', 'И', 'else', 'иначе'):
if peek_token().value(source) in ('I', 'Е', 'if', 'если'):
n.else_or_elif = ASTElseIf()
n.else_or_elif.parent = n
n = n.else_or_elif
next_token()
next_token()
n.set_expression(expression())
new_scope(n)
if token is not None and token.value(source) in ('E', 'И', 'else', 'иначе') and not peek_token().value(source) in ('I', 'Е', 'if', 'если'):
n.else_or_elif = ASTElse()
n.else_or_elif.parent = n
next_token()
if token.category == Token.Category.SCOPE_BEGIN:
new_scope(n.else_or_elif)
                else: # to support `I fs:is_dir(_fname) {...} E ...` (without this `else` only `I fs:is_dir(_fname) {...} E {...}` is allowed)
expr_node = ASTExpression()
expr_node.set_expression(expression())
expr_node.parent = n.else_or_elif
n.else_or_elif.children.append(expr_node)
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.SCOPE_END)):
raise Error('expected end of statement', token)
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
break
elif token.value(source) in ('S', 'В', 'switch', 'выбрать'):
node = ASTSwitch()
next_token()
node.set_expression(expression())
advance_scope_begin()
while token.category != Token.Category.SCOPE_END:
case = ASTSwitch.Case()
case.parent = node
if token.value(source) in ('E', 'И', 'else', 'иначе'):
case.set_expression(tokensn)
next_token()
else:
case.set_expression(expression())
ts = case.expression.token_str()
if case.expression.token.category == Token.Category.STRING_LITERAL and not (len(ts) == 3 or (ts[:2] == '"\\' and len(ts) == 4)):
node.has_string_case = True
new_scope(case)
node.cases.append(case)
next_token()
elif token.value(source) in ('L', 'Ц', 'loop', 'цикл'):
if peek_token().value(source) == '(' and peek_token(4).value(source) == '.' and peek_token(4).start == peek_token(3).end:
assert(peek_token(5).value(source) in ('break', 'прервать'))
node = ASTLoopBreak()
node.token = token
next_token()
node.loop_variable = expected_name('loop variable')
advance(')')
advance('.')
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
else:
node = ASTLoop()
next_token()
prev_scope = scope
scope = Scope(None)
scope.parent = prev_scope
if token.category == Token.Category.SCOPE_BEGIN:
node.expression = None
else:
if token.value(source) == '(' and token.start == tokens[tokeni-1].end:
if peek_token().value(source) == '&':
node.is_loop_variable_a_reference = True
next_token()
elif peek_token().value(source) == '=':
node.copy_loop_variable = True
next_token()
node.loop_variable = expected_name('loop variable')
while token.value(source) == ',':
if peek_token().value(source) == '=':
node.copy_loop_variable = True
next_token()
node.loop_variable += ', ' + expected_name('loop variable')
advance(')')
node.set_expression(expression())
if node.loop_variable is not None: # check if loop variable is a [smart] pointer
lv_node = None
if node.expression.token.category == Token.Category.NAME:
id = scope.find(node.expression.token_str())
if id is not None and len(id.ast_nodes) == 1:
lv_node = id.ast_nodes[0]
elif node.expression.symbol.id == '.' and len(node.expression.children) == 2:
lv_node = type_of(node.expression)
if lv_node is not None and isinstance(lv_node, ASTVariableDeclaration) and lv_node.type == 'Array':
tid = scope.find(lv_node.type_args[0])
if tid is not None and len(tid.ast_nodes) == 1 and type(tid.ast_nodes[0]) == ASTTypeDefinition and tid.ast_nodes[0].has_virtual_functions:
node.is_loop_variable_a_ptr = True
scope.add_name(node.loop_variable, node)
new_scope(node)
scope = prev_scope
if token is not None and token.value(source) in ('L.was_no_break', 'Ц.не_был_прерван', 'loop.was_no_break', 'цикл.не_был_прерван'):
node.was_no_break_node = ASTLoopWasNoBreak()
node.was_no_break_node.parent = node
next_token()
new_scope(node.was_no_break_node)
elif token.value(source) in ('L.continue', 'Ц.продолжить', 'loop.continue', 'цикл.продолжить'):
node = ASTContinue()
node.token = token
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('L.break', 'Ц.прервать', 'loop.break', 'цикл.прервать'):
node = ASTLoopBreak()
node.token = token
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('L.remove_current_element_and_continue', 'Ц.удалить_текущий_элемент_и_продолжить', 'loop.remove_current_element_and_continue', 'цикл.удалить_текущий_элемент_и_продолжить'):
node = ASTLoopRemoveCurrentElementAndContinue()
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('R', 'Р', 'return', 'вернуть'):
node = ASTReturn()
next_token()
if token.category in (Token.Category.SCOPE_END, Token.Category.STATEMENT_SEPARATOR):
node.expression = None
else:
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('X', 'Х', 'exception', 'исключение'):
node = ASTException()
next_token()
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.value(source) in ('X.try', 'Х.контроль', 'exception.try', 'исключение.контроль'):
node = ASTExceptionTry()
next_token()
new_scope(node)
elif token.value(source) in ('X.catch', 'Х.перехват', 'exception.catch', 'исключение.перехват'):
node = ASTExceptionCatch()
if peek_token().category != Token.Category.SCOPE_BEGIN:
if peek_token().value(source) == '.':
next_token()
node.exception_object_type = expected_name('exception object type name').replace(':', '::')
if token.value(source) == ':':
next_token()
node.exception_object_type += '::' + token.value(source)
next_token()
if token.category == Token.Category.NAME:
node.exception_object_name = token.value(source)
next_token()
else:
next_token()
node.exception_object_type = ''
new_scope(node)
else:
raise Error('unrecognized statement started with keyword', token)
elif token.value(source) == '^':
node = ASTLoopBreak()
node.token = token
node.loop_level = 1
next_token()
while token.value(source) == '^':
node.loop_level += 1
next_token()
if token.value(source) not in ('L.break', 'Ц.прервать', 'loop.break', 'цикл.прервать'):
raise Error('expected `L.break`', token)
next_token()
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif token.category == Token.Category.SCOPE_END:
next_token()
if token.category == Token.Category.STATEMENT_SEPARATOR and token.end == len(source): # Token.Category.EOF
next_token()
assert(token is None)
return
elif token.category == Token.Category.STATEMENT_SEPARATOR: # this `if` was added in revision 105[‘Almost complete work on tests/python_to_cpp/pqmarkup.txt’] in order to support `hor_col_align = S instr[j .< j + 2] {‘<<’ {‘left’}; ‘>>’ {‘right’}; ‘><’ {‘center’}; ‘<>’ {‘justify’}}` [there was no STATEMENT_SEPARATOR after this line of code]
next_token()
if token is not None:
assert(token.category != Token.Category.STATEMENT_SEPARATOR)
continue
elif ((token.value(source) in ('V', 'П', 'var', 'перем') and peek_token().value(source) == '(') # ) # this is `V (a, b) = ...`
or (token.value(source) == '-' and
peek_token().value(source) in ('V', 'П', 'var', 'перем') and peek_token(2).value(source) == '(')): # this is `-V (a, b) = ...`
node = ASTTupleInitialization()
if token.value(source) == '-':
node.is_const = True
next_token()
next_token()
next_token()
while True:
assert(token.category == Token.Category.NAME)
name = tokensn.token_str()
node.dest_vars.append(name)
scope.add_name(name, node)
next_token()
if token.value(source) == ')':
break
advance(',')
next_token()
advance('=')
node.set_expression(expression())
if node.expression.function_call and node.expression.children[0].symbol.id == '.' \
and len(node.expression.children[0].children) == 2 \
                and (node.expression.children[0].children[1].token_str() in ('split', 'split_py') # `V (name, ...) = ....split(...)` ~> `(V name, V ...) = ....split(...)` -> `...assign_from_tuple(name, ...);` (because `auto [name, ...] = ....split(...);` does not work)
or (node.expression.children[0].children[1].token_str() == 'map' # for `V (w, h) = lines[1].split_py().map(i -> Int(i))`
and node.expression.children[0].children[0].function_call)
and node.expression.children[0].children[0].children[0].symbol.id == '.'
and len(node.expression.children[0].children[0].children[0].children) == 2
and node.expression.children[0].children[0].children[0].children[1].token_str() in ('split', 'split_py')):
# n = node
# node = ASTTupleAssignment()
# for dv in n.dest_vars:
# node.dest_vars.append((dv, True))
# node.set_expression(n.expression)
node.bind_array = True
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
elif is_tuple_assignment(): # this is `(a, b) = ...` or `(a, V b) = ...` or `(V a, b) = ...`
node = ASTTupleAssignment()
next_token()
while True:
if token.category != Token.Category.NAME:
raise Error('expected variable name', token)
add_var = False
if token.value(source) in ('V', 'П', 'var', 'перем'):
add_var = True
next_token()
assert(token.category == Token.Category.NAME)
name = tokensn.token_str()
node.dest_vars.append((name, add_var))
if add_var:
scope.add_name(name, node)
next_token() # (
if token.value(source) == ')':
break
advance(',')
next_token()
advance('=')
node.set_expression(expression())
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
else:
node_expression = expression()
if node_expression.symbol.id == '.' and node_expression.children[1].token.category == Token.Category.SCOPE_BEGIN: # this is a "with"-statement
node = ASTWith()
node.set_expression(node_expression.children[0])
new_scope(node)
else:
if node_expression.symbol.id == '&' and node_expression.children[0].token.category == Token.Category.NAME and node_expression.children[1].token.category == Token.Category.NAME: # this is a reference declaration (e.g. `Symbol& symbol`)
node = ASTVariableDeclaration()
node.is_reference = True
node.vars = [node_expression.children[1].token_str()]
node.type = node_expression.children[0].token_str()
node.type_token = node_expression.token
node.type_args = []
scope.add_name(node.vars[0], node)
elif token.category == Token.Category.NAME and tokens[tokeni-1].category != Token.Category.SCOPE_END:
var_name = tokensn.token_str()
next_token()
if token.value(source) == '=':
next_token()
node = ASTVariableInitialization()
node.set_expression(expression())
if node_expression.token.value(source) not in ('V', 'П', 'var', 'перем'):
if node_expression.token.value(source) in ('V?', 'П?', 'var?', 'перем?'):
node.is_ptr = True
node.nullable = True
else:
id = scope.find(node_expression.token_str())
if id is not None and len(id.ast_nodes) != 0:
if type(id.ast_nodes[0]) == ASTTypeDefinition and (id.ast_nodes[0].has_virtual_functions or id.ast_nodes[0].has_pointers_to_the_same_type):
node.is_ptr = True
elif node.expression.function_call and node.expression.children[0].token.category == Token.Category.NAME and node.expression.children[0].token_str()[0].isupper(): # for `V animal = Sheep(); animal.say()` -> `...; animal->say();`
id = scope.find(node.expression.children[0].token_str())
if not (id is not None and len(id.ast_nodes) != 0):
raise Error('identifier `' + node.expression.children[0].token_str() + '` is not found', node.expression.children[0].token)
if type(id.ast_nodes[0]) == ASTTypeDefinition: # support for functions beginning with an uppercase letter (e.g. Extract_Min)
if id.ast_nodes[0].has_virtual_functions or id.ast_nodes[0].has_pointers_to_the_same_type:
node.is_ptr = True
# elif id.ast_nodes[0].has_pointers_to_the_same_type:
# node.is_shared_ptr = True
node.vars = [var_name]
else:
node = ASTVariableDeclaration()
id = scope.find(node_expression.token_str().rstrip('?'))
if id is not None:
assert(len(id.ast_nodes) == 1)
if type(id.ast_nodes[0]) not in (ASTTypeDefinition, ASTTypeEnum):
raise Error('identifier is of type `' + type(id.ast_nodes[0]).__name__ + '` (should be ASTTypeDefinition or ASTTypeEnum)', node_expression.token) # this error was in line `String sitem` because of `F String()`
if type(id.ast_nodes[0]) == ASTTypeDefinition:
if id.ast_nodes[0].has_virtual_functions or id.ast_nodes[0].has_pointers_to_the_same_type:
node.is_ptr = True
# elif id.ast_nodes[0].has_pointers_to_the_same_type:
# node.is_shared_ptr = True
node.vars = [var_name]
while token.value(source) == ',':
node.vars.append(expected_name('variable name'))
node.type = node_expression.token.value(source)
if node.type == '-' and len(node_expression.children) == 1:
node.is_const = True
node_expression = node_expression.children[0]
node.type = node_expression.token.value(source)
node.type_token = node_expression.token
node.type_args = []
if node.type == '[': # ]
if node_expression.is_dict:
assert(len(node_expression.children) == 1)
node.type = 'Dict'
node.type_args = [node_expression.children[0].children[0].to_type_str(), node_expression.children[0].children[1].to_type_str()]
elif node_expression.is_list:
assert(len(node_expression.children) == 1)
node.type = 'Array'
node.type_args = [node_expression.children[0].to_type_str()]
else:
assert(node_expression.is_type)
node.type = node_expression.children[0].token.value(source)
for i in range(1, len(node_expression.children)):
node.type_args.append(node_expression.children[i].to_type_str())
elif node.type == '(': # )
if len(node_expression.children) == 1 and node_expression.children[0].symbol.id == '->':
node.function_pointer = True
c0 = node_expression.children[0]
assert(c0.children[1].token.category == Token.Category.NAME or c0.children[1].token_str() in ('N', 'Н', 'null', 'нуль'))
node.type = c0.children[1].token_str() # return value type
if c0.children[0].token.category == Token.Category.NAME:
node.type_args.append(c0.children[0].token_str())
else:
assert(c0.children[0].symbol.id == '(') # )
for child in c0.children[0].children:
assert(child.token.category == Token.Category.NAME)
node.type_args.append(child.token_str())
else: # this is a tuple
for child in node_expression.children:
node.type_args.append(child.to_type_str())
node.type = '(' + ', '.join(node.type_args) + ')'
node.type_args.clear()
elif node.type == '.':
node.type = node_expression.to_str()
if not (node.type[0].isupper() or node.type[0] == '(' or node.type in ('var', 'перем')): # )
                        raise Error('type name must start with an upper case letter', node.type_token)
for var in node.vars:
scope.add_name(var, node)
if type(this_node) == ASTTypeDefinition and this_node.type_name == node.type.rstrip('?'):
this_node.has_pointers_to_the_same_type = True
node.is_ptr = True # node.is_shared_ptr = True
else:
node = ASTExpression()
node.set_expression(node_expression)
if isinstance(this_node, ASTTypeDefinition) and node_expression.symbol.id == '=': # fix error ‘identifier `disInter` is not found in `r`’ in '9.yopyra.py'
scope.add_name(node_expression.children[0].token_str(), node)
if not (token is None or token.category in (Token.Category.STATEMENT_SEPARATOR, Token.Category.SCOPE_END) or tokens[tokeni-1].category == Token.Category.SCOPE_END):
raise Error('expected end of statement', token)
if token is not None and token.category == Token.Category.STATEMENT_SEPARATOR:
next_token()
if access_specifier_private:
node.access_specifier_public = 0
access_specifier_private = False
node.parent = this_node
this_node.children.append(node)
return
tokens = []
source = ''
tokeni = -1
token = Token(0, 0, Token.Category.STATEMENT_SEPARATOR)
#scope = Scope(None)
#tokensn = SymbolNode(token)
file_name = ''
importing_module = False
def token_to_str(token_str_override, token_category = Token.Category.STRING_LITERAL):
return SymbolNode(Token(0, 0, token_category), token_str_override).to_str()
builtins_scope = Scope(None)
scope = builtins_scope
global_scope : Scope
tokensn = SymbolNode(token)
f = ASTFunctionDefinition([('object', token_to_str('‘’'), ''), ('end', token_to_str(R'"\n"'), 'String'), ('flush', token_to_str('0B', Token.Category.CONSTANT), 'Bool')])
f.first_named_only_argument = 1
builtins_scope.add_function('print', f)
f = ASTFunctionDefinition([('object', token_to_str('‘’'), ''), ('sep', token_to_str('‘ ’'), 'String'),
('end', token_to_str(R'"\n"'), 'String'), ('flush', token_to_str('0B', Token.Category.CONSTANT), 'Bool')])
f.first_named_only_argument = 1
builtins_scope.add_function('print_elements', f)
builtins_scope.add_function('input', ASTFunctionDefinition([('prompt', token_to_str('‘’'), 'String')], 'String'))
builtins_scope.add_function('assert', ASTFunctionDefinition([('expression', '', 'Bool'), ('message', token_to_str('‘’'), 'String')]))
builtins_scope.add_function('exit', ASTFunctionDefinition([('arg', '0', '')]))
builtins_scope.add_function('swap', ASTFunctionDefinition([('a', '', '', '&'), ('b', '', '', '&')]))
builtins_scope.add_function('zip', ASTFunctionDefinition([('iterable1', '', ''), ('iterable2', '', ''), ('iterable3', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('all', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('any', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('cart_product', ASTFunctionDefinition([('iterable1', '', ''), ('iterable2', '', ''), ('iterable3', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('multiloop', ASTFunctionDefinition([('iterable1', '', ''), ('iterable2', '', ''), ('function', '', ''), ('optional', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('multiloop_filtered', ASTFunctionDefinition([('iterable1', '', ''), ('iterable2', '', ''), ('filter_function', '', ''), ('function', '', ''), ('optional', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('sum', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('product', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('enumerate', ASTFunctionDefinition([('iterable', '', ''), ('start', '0', 'Int')]))
builtins_scope.add_function('sorted', ASTFunctionDefinition([('iterable', '', ''), ('key', token_to_str('N', Token.Category.CONSTANT), ''), ('reverse', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
builtins_scope.add_function('tuple_sorted', ASTFunctionDefinition([('tuple', '', ''), ('key', token_to_str('N', Token.Category.CONSTANT), ''), ('reverse', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
builtins_scope.add_function('reversed', ASTFunctionDefinition([('iterable', '', '')]))
builtins_scope.add_function('min', ASTFunctionDefinition([('arg1', '', ''), ('arg2', token_to_str('N', Token.Category.CONSTANT), ''), ('arg3', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('max', ASTFunctionDefinition([('arg1', '', ''), ('arg2', token_to_str('N', Token.Category.CONSTANT), ''), ('arg3', token_to_str('N', Token.Category.CONSTANT), '')]))
builtins_scope.add_function('divmod', ASTFunctionDefinition([('x', '', ''), ('y', '', '')]))
builtins_scope.add_function('factorial', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('gcd', ASTFunctionDefinition([('a', '', ''), ('b', '', '')]))
builtins_scope.add_function('hex', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('bin', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('copy', ASTFunctionDefinition([('object', '', '')]))
builtins_scope.add_function('move', ASTFunctionDefinition([('object', '', '')]))
builtins_scope.add_function('hash', ASTFunctionDefinition([('object', '', '')]))
builtins_scope.add_function('rotl', ASTFunctionDefinition([('value', '', 'Int'), ('shift', '', 'Int')]))
builtins_scope.add_function('rotr', ASTFunctionDefinition([('value', '', 'Int'), ('shift', '', 'Int')]))
builtins_scope.add_function('bsr', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('bsf', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('bit_length', ASTFunctionDefinition([('x', '', '')]))
builtins_scope.add_function('round', ASTFunctionDefinition([('number', '', 'Float'), ('ndigits', '0', '')]))
builtins_scope.add_function('sleep', ASTFunctionDefinition([('secs', '', 'Float')]))
builtins_scope.add_function('ceil', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('floor', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('trunc', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('fract', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('wrap', ASTFunctionDefinition([('x', '', 'Float'), ('min_value', '', 'Float'), ('max_value', '', 'Float')]))
builtins_scope.add_function('abs', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('exp', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('log', ASTFunctionDefinition([('x', '', 'Float'), ('base', '0', 'Float')]))
builtins_scope.add_function('log2', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('log10', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('pow', ASTFunctionDefinition([('x', '', 'Float'), ('y', '', 'Float')]))
builtins_scope.add_function('sqrt', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('acos', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('asin', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('atan', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('atan2', ASTFunctionDefinition([('x', '', 'Float'), ('y', '', 'Float')]))
builtins_scope.add_function('cos', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('sin', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('tan', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('degrees', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('radians', ASTFunctionDefinition([('x', '', 'Float')]))
builtins_scope.add_function('dot', ASTFunctionDefinition([('v1', '', ''), ('v2', '', '')]))
builtins_scope.add_function('cross', ASTFunctionDefinition([('v1', '', ''), ('v2', '', '')]))
builtins_scope.add_function('perp', ASTFunctionDefinition([('v', '', '')]))
builtins_scope.add_function('sqlen', ASTFunctionDefinition([('v', '', '')]))
builtins_scope.add_function('length', ASTFunctionDefinition([('v', '', '')]))
builtins_scope.add_function('normalize', ASTFunctionDefinition([('v', '', '')]))
builtins_scope.add_function('conjugate', ASTFunctionDefinition([('c', '', '')]))
builtins_scope.add_function('ValueError', ASTFunctionDefinition([('s', '', 'String')]))
builtins_scope.add_function('IndexError', ASTFunctionDefinition([('index', '', 'Int')]))
def add_builtin_global_var(var_name, var_type, var_type_args = []):
var = ASTVariableDeclaration()
var.vars = [var_name]
var.type = var_type
var.type_args = var_type_args
builtins_scope.add_name(var_name, var)
add_builtin_global_var('argv', 'Array', ['String'])
add_builtin_global_var('stdin', 'File')
add_builtin_global_var('stdout', 'File')
add_builtin_global_var('stderr', 'File')
builtins_scope.add_name('Char', ASTTypeDefinition([ASTFunctionDefinition([('code', '', 'Int')])]))
char_scope = Scope(None)
char_scope.add_name('is_digit', ASTFunctionDefinition([]))
builtins_scope.ids['Char'].ast_nodes[0].scope = char_scope
builtins_scope.add_name('File', ASTTypeDefinition([ASTFunctionDefinition([('name', '', 'String'), ('mode', token_to_str('‘r’'), 'String'), ('encoding', token_to_str('‘utf-8’'), 'String')])]))
file_scope = Scope(None)
file_scope.add_name('read_bytes', ASTFunctionDefinition([]))
file_scope.add_name('write_bytes', ASTFunctionDefinition([('bytes', '', '[Byte]')]))
file_scope.add_name('read', ASTFunctionDefinition([('size', token_to_str('N', Token.Category.CONSTANT), 'Int?')]))
file_scope.add_name('write', ASTFunctionDefinition([('s', '', 'String')]))
file_scope.add_name('read_lines', ASTFunctionDefinition([('keep_newline', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
file_scope.add_name('read_line', ASTFunctionDefinition([('keep_newline', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
file_scope.add_name('flush', ASTFunctionDefinition([]))
file_scope.add_name('close', ASTFunctionDefinition([]))
builtins_scope.ids['File'].ast_nodes[0].scope = file_scope
for type_ in cpp_type_from_11l:
builtins_scope.add_name(type_, ASTTypeDefinition([ASTFunctionDefinition([('object', token_to_str('‘’'), '')])]))
f = ASTFunctionDefinition([('x', '', ''), ('radix', '10', 'Int')])
f.first_named_only_argument = 1
builtins_scope.ids['Int'].ast_nodes[0] = ASTTypeDefinition([f])
string_scope = Scope(None)
str_last_member_var_decl = ASTVariableDeclaration()
str_last_member_var_decl.type = 'Char'
string_scope.add_name('last', str_last_member_var_decl)
string_scope.add_name('starts_with', ASTFunctionDefinition([('prefix', '', 'String')]))
string_scope.add_name('ends_with', ASTFunctionDefinition([('suffix', '', 'String')]))
string_scope.add_name('split', ASTFunctionDefinition([('delim', '', 'String'), ('limit', token_to_str('N', Token.Category.CONSTANT), 'Int?'), ('group_delimiters', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
string_scope.add_name('split_py', ASTFunctionDefinition([]))
string_scope.add_name('rtrim', ASTFunctionDefinition([('s', '', 'String'), ('limit', token_to_str('N', Token.Category.CONSTANT), 'Int?')]))
string_scope.add_name('ltrim', ASTFunctionDefinition([('s', '', 'String'), ('limit', token_to_str('N', Token.Category.CONSTANT), 'Int?')]))
string_scope.add_name('trim', ASTFunctionDefinition([('s', '', 'String')]))
string_scope.add_name('find', ASTFunctionDefinition([('s', '', 'String')]))
string_scope.add_name('findi', ASTFunctionDefinition([('s', '', 'String'), ('start', '0', 'Int')]))
string_scope.add_name('rfindi', ASTFunctionDefinition([('s', '', 'String'), ('start', '0', 'Int'), ('end', token_to_str('N', Token.Category.CONSTANT), 'Int?')]))
string_scope.add_name('count', ASTFunctionDefinition([('s', '', 'String')]))
string_scope.add_name('replace', ASTFunctionDefinition([('old', '', 'String'), ('new', '', 'String')]))
string_scope.add_name('lowercase', ASTFunctionDefinition([]))
string_scope.add_name('uppercase', ASTFunctionDefinition([]))
string_scope.add_name('zfill', ASTFunctionDefinition([('width', '', 'Int')]))
string_scope.add_name('center', ASTFunctionDefinition([('width', '', 'Int'), ('fillchar', token_to_str('‘ ’'), 'Char')]))
string_scope.add_name('ljust', ASTFunctionDefinition([('width', '', 'Int'), ('fillchar', token_to_str('‘ ’'), 'Char')]))
string_scope.add_name('rjust', ASTFunctionDefinition([('width', '', 'Int'), ('fillchar', token_to_str('‘ ’'), 'Char')]))
string_scope.add_name('format', ASTFunctionDefinition([('arg', token_to_str('N', Token.Category.CONSTANT), '')] * 32))
string_scope.add_name('map', ASTFunctionDefinition([('function', '', '(Char -> T)')]))
builtins_scope.ids['String'].ast_nodes[0].scope = string_scope
array_scope = Scope(None)
arr_last_member_var_decl = ASTVariableDeclaration()
arr_last_member_var_decl.type = 'T'
array_scope.add_name('last', arr_last_member_var_decl)
array_scope.add_name('append', ASTFunctionDefinition([('x', '', '')]))
array_scope.add_name('extend', ASTFunctionDefinition([('t', '', '')]))
array_scope.add_name('remove', ASTFunctionDefinition([('x', '', '')]))
array_scope.add_name('count', ASTFunctionDefinition([('x', '', '')]))
array_scope.add_name('index', ASTFunctionDefinition([('x', '', ''), ('i', '0', 'Int')]))
array_scope.add_name('pop', ASTFunctionDefinition([('i', '-1', 'Int')]))
array_scope.add_name('insert', ASTFunctionDefinition([('i', '', 'Int'), ('x', '', '')]))
array_scope.add_name('reverse', ASTFunctionDefinition([]))
array_scope.add_name('reverse_range', ASTFunctionDefinition([('range', '', 'Range')]))
array_scope.add_name('next_permutation', ASTFunctionDefinition([]))
array_scope.add_name('clear', ASTFunctionDefinition([]))
array_scope.add_name('drop', ASTFunctionDefinition([]))
array_scope.add_name('map', ASTFunctionDefinition([('f', '', '')]))
array_scope.add_name('filter', ASTFunctionDefinition([('f', '', '')]))
array_scope.add_name('join', ASTFunctionDefinition([('sep', '', 'String')]))
array_scope.add_name('sort', ASTFunctionDefinition([('key', token_to_str('N', Token.Category.CONSTANT), ''), ('reverse', token_to_str('0B', Token.Category.CONSTANT), 'Bool')]))
builtins_scope.ids['Array'].ast_nodes[0].scope = array_scope
dict_scope = Scope(None)
dict_scope.add_name('find', ASTFunctionDefinition([('k', '', '')]))
dict_scope.add_name('keys', ASTFunctionDefinition([]))
dict_scope.add_name('values', ASTFunctionDefinition([]))
builtins_scope.ids['Dict'].ast_nodes[0].scope = dict_scope
builtins_scope.ids['DefaultDict'].ast_nodes[0].scope = dict_scope
set_scope = Scope(None)
set_scope.add_name('intersection', ASTFunctionDefinition([('other', '', 'Set')]))
set_scope.add_name('difference', ASTFunctionDefinition([('other', '', 'Set')]))
set_scope.add_name('symmetric_difference', ASTFunctionDefinition([('other', '', 'Set')]))
set_scope.add_name('is_subset', ASTFunctionDefinition([('other', '', 'Set')]))
set_scope.add_name('add', ASTFunctionDefinition([('elem', '', '')]))
set_scope.add_name('discard', ASTFunctionDefinition([('elem', '', '')]))
set_scope.add_name('map', ASTFunctionDefinition([('f', '', '')]))
builtins_scope.ids['Set'].ast_nodes[0].scope = set_scope
deque_scope = Scope(None)
deque_scope.add_name('append', ASTFunctionDefinition([('x', '', '')]))
deque_scope.add_name('pop_left', ASTFunctionDefinition([]))
builtins_scope.ids['Deque'].ast_nodes[0].scope = deque_scope
module_scope = Scope(None)
builtin_modules['math'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('get_temp_dir', ASTFunctionDefinition([]))
module_scope.add_function('list_dir', ASTFunctionDefinition([('path', token_to_str('‘.’'), 'String')]))
module_scope.add_function('walk_dir', ASTFunctionDefinition([('path', token_to_str('‘.’'), 'String'), ('dir_filter', token_to_str('N', Token.Category.CONSTANT), '(String -> Bool)?'), ('files_only', token_to_str('1B', Token.Category.CONSTANT), 'Bool')]))
module_scope.add_function('is_dir', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('is_file', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('is_symlink', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('file_size', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('create_dir', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('create_dirs', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('remove_file', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('remove_dir', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('remove_all', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('rename', ASTFunctionDefinition([('old_path', '', 'String'), ('new_path', '', 'String')]))
builtin_modules['fs'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('join', ASTFunctionDefinition([('path1', '', 'String'), ('path2', '', 'String')]))
module_scope.add_function('base_name', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('dir_name', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('absolute', ASTFunctionDefinition([('path', '', 'String')]))
module_scope.add_function('relative', ASTFunctionDefinition([('path', '', 'String'), ('base', '', 'String')]))
module_scope.add_function('split_ext', ASTFunctionDefinition([('path', '', 'String')]))
builtin_modules['fs::path'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('', ASTFunctionDefinition([('command', '', 'String')]))
module_scope.add_function('getenv', ASTFunctionDefinition([('name', '', 'String'), ('default', token_to_str('‘’'), 'String')]))
module_scope.add_function('setenv', ASTFunctionDefinition([('name', '', 'String'), ('value', '', 'String')]))
builtin_modules['os'] = Module(module_scope)
builtins_scope.add_name('Time', ASTTypeDefinition([ASTFunctionDefinition([('year', '0', 'Int'), ('month', '1', 'Int'), ('day', '1', 'Int'), ('hour', '0', 'Int'), ('minute', '0', 'Int'), ('second', '0', 'Float')])]))
time_scope = Scope(None)
time_scope.add_name('unix_time', ASTFunctionDefinition([]))
time_scope.add_name('strftime', ASTFunctionDefinition([('format', '', 'String')]))
time_scope.add_name('format', ASTFunctionDefinition([('format', '', 'String')]))
builtins_scope.ids['Time'].ast_nodes[0].scope = time_scope
f = ASTFunctionDefinition([('days', '0', 'Float'), ('hours', '0', 'Float'), ('minutes', '0', 'Float'), ('seconds', '0', 'Float'), ('milliseconds', '0', 'Float'), ('microseconds', '0', 'Float'), ('weeks', '0', 'Float')])
f.first_named_only_argument = 0
builtins_scope.add_name('TimeDelta', ASTTypeDefinition([f]))
time_delta_scope = Scope(None)
time_delta_scope.add_name('days', ASTFunctionDefinition([]))
builtins_scope.ids['TimeDelta'].ast_nodes[0].scope = time_delta_scope
module_scope = Scope(None)
module_scope.add_function('perf_counter', ASTFunctionDefinition([]))
module_scope.add_function('today', ASTFunctionDefinition([]))
module_scope.add_function('from_unix_time', ASTFunctionDefinition([('unix_time', '', 'Float')]))
module_scope.add_function('strptime', ASTFunctionDefinition([('datetime_string', '', 'String'), ('format', '', 'String')]))
builtin_modules['time'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('', ASTFunctionDefinition([('pattern', '', 'String')]))
builtin_modules['re'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('', ASTFunctionDefinition([('stop', '1', 'Float')]))
module_scope.add_function('seed', ASTFunctionDefinition([('s', '', 'Int')]))
module_scope.add_function('shuffle', ASTFunctionDefinition([('container', '', '', '&')]))
module_scope.add_function('choice', ASTFunctionDefinition([('container', '', '')]))
builtin_modules['random'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('push', ASTFunctionDefinition([('array', '', '', '&'), ('item', '', '')]))
module_scope.add_function('pop', ASTFunctionDefinition([('array', '', '', '&')]))
module_scope.add_function('heapify', ASTFunctionDefinition([('array', '', '', '&')]))
builtin_modules['minheap'] = Module(module_scope)
builtin_modules['maxheap'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('to_object', ASTFunctionDefinition([('json_str', '', 'String'), ('obj', '', '', '&')]))
module_scope.add_function('from_object', ASTFunctionDefinition([('obj', '', ''), ('indent', '4', '')]))
builtin_modules['json'] = Module(module_scope)
module_scope = Scope(None)
module_scope.add_function('to_object', ASTFunctionDefinition([('eldf_str', '', 'String'), ('obj', '', '', '&')]))
module_scope.add_function('from_object', ASTFunctionDefinition([('obj', '', ''), ('indent', '4', 'Int')]))
module_scope.add_function('from_json', ASTFunctionDefinition([('json_str', '', 'String')]))
module_scope.add_function('to_json', ASTFunctionDefinition([('eldf_str', '', 'String')]))
module_scope.add_function('reparse', ASTFunctionDefinition([('eldf_str', '', 'String')]))
module_scope.add_function('test_parse', ASTFunctionDefinition([('eldf_str', '', 'String')]))
builtin_modules['eldf'] = Module(module_scope)
def parse_and_to_str(tokens_, source_, file_name_, importing_module_ = False, append_main = False, suppress_error_please_wrap_in_copy = False): # option suppress_error_please_wrap_in_copy is needed to simplify conversion of large Python source into C++
if len(tokens_) == 0: return ASTProgram().to_str()
global tokens, source, tokeni, token, break_label_index, scope, global_scope, tokensn, file_name, importing_module, modules
prev_tokens = tokens
prev_source = source
prev_tokeni = tokeni
prev_token = token
# prev_scope = scope
prev_tokensn = tokensn
prev_file_name = file_name
prev_importing_module = importing_module
prev_break_label_index = break_label_index
tokens = tokens_ + [Token(len(source_), len(source_), Token.Category.STATEMENT_SEPARATOR)]
source = source_
tokeni = -1
token = None
break_label_index = -1
scope = Scope(None)
if not importing_module_:
global_scope = scope
scope.parent = builtins_scope
file_name = file_name_
importing_module = importing_module_
prev_modules = modules
modules = {}
next_token()
p = ASTProgram()
parse_internal(p)
if len(modules):
p.beginning_extra = "\n".join(map(lambda m: 'namespace ' + m.replace('::', ' { namespace ') + " {\n#include \"" + m.replace('::', '/') + ".hpp\"\n}" + '}'*m.count('::'), modules)) + "\n\n"
found_reference_to_argv = False
def find_reference_to_argv(node):
def f(e : SymbolNode):
if len(e.children) == 1 and e.symbol.id == ':' and e.children[0].token_str() == 'argv':
nonlocal found_reference_to_argv
found_reference_to_argv = True
return
for child in e.children:
if child is not None:
f(child)
node.walk_expressions(f)
node.walk_children(find_reference_to_argv)
find_reference_to_argv(p)
if found_reference_to_argv:
if type(p.children[-1]) != ASTMain:
raise Error("`sys.argv`->`:argv` can be used only after `if __name__ == '__main__':`->`:start:`", tokens[-1])
p.children[-1].found_reference_to_argv = True
p.beginning_extra += "Array<String> argv;\n\n"
s = p.to_str() # call `to_str()` moved here [from outside] because it accesses global variables `source` (via `token.value(source)`) and `tokens` (via `tokens[ti]`)
if append_main and type(p.children[-1]) != ASTMain:
s += "\nint main()\n{\n}\n"
tokens = prev_tokens
source = prev_source
tokeni = prev_tokeni
token = prev_token
# scope = prev_scope
tokensn = prev_tokensn
file_name = prev_file_name
importing_module = prev_importing_module
break_label_index = prev_break_label_index
modules = prev_modules
return s | 11l | /11l-2021.3-py3-none-any.whl/_11l_to_cpp/parse.py | parse.py |
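# A minimal usage sketch (an illustration, not part of the test suite). It assumes the
# sibling `tokenizer` module provides `tokenize` (the import below is an assumption
# about the package layout) and that the snippet is valid 11l source; `parse_and_to_str`
# then returns the generated C++ code.
def _example_parse(): # hypothetical helper, for illustration only
    from tokenizer import tokenize # assumed import; adjust to the actual layout
    src = "print(‘Hello, world!’)\n"
    return parse_and_to_str(tokenize(src), src, 'example.11l')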
R"""
После данной обработки отступы перестают играть роль — границу `scope` всегда определяют фигурные скобки.
Также здесь выполняется склеивание строк, и таким образом границу statement\утверждения задаёт либо символ `;`,
либо символ новой строки (при условии, что перед ним не стоит символ `…`!).
===============================================================================================================
Ошибки:
---------------------------------------------------------------------------------------------------------------
Error: `if/else/fn/loop/switch/type` scope is empty.
---------------------------------------------------------------------------------------------------------------
Существуют операторы, которые всегда требуют нового scope\блока, который можно обозначить двумя способами:
1. Начать следующую строку с отступом относительно предыдущей, например:
if condition\условие
scope\блок
2. Заключить блок\scope в фигурные скобки:
if condition\условие {scope\блок}
Примечание. При использовании второго способа блок\scope может иметь произвольный уровень отступа:
if condition\условие
{
scope\блок
}
---------------------------------------------------------------------------------------------------------------
Error: `if/else/fn/loop/switch/type` scope is empty, after applied implied line joining: ```...```
---------------------------------------------------------------------------------------------------------------
Сообщение об ошибке аналогично предыдущему, но выделено в отдельное сообщение об ошибке, так как может
возникать по вине ошибочного срабатывания автоматического склеивания строк (и показывается оно тогда, когда
было произведено склеивание строк в месте данной ошибки).
---------------------------------------------------------------------------------------------------------------
Error: mixing tabs and spaces in indentation: `...`
---------------------------------------------------------------------------------------------------------------
В одной строке для отступа используется смесь пробелов и символов табуляции.
Выберите что-либо одно (желательно сразу для всего файла): либо пробелы для отступа, либо табуляцию.
Примечание: внутри строковых литералов, в комментариях, а также внутри строк кода можно смешивать пробелы и
табуляцию. Эта ошибка генерируется только при проверке отступов (отступ — последовательность символов пробелов
или табуляции от самого начала строки до первого символа отличного от пробела и табуляции).
---------------------------------------------------------------------------------------------------------------
Error: inconsistent indentations: ```...```
---------------------------------------------------------------------------------------------------------------
В текущей строке кода для отступа используются пробелы, а в предыдущей строке — табуляция (либо наоборот).
[[[
Сообщение было предназначено для несколько другой ошибки: для любых двух соседних строк, если взять отступ
одной из них, то другой отступ должен начинаться с него же {если отступ текущей строки отличается от отступа
предыдущей, то:
1. Когда отступ текущей строки начинается на отступ предыдущей строки, это INDENT.
2. Когда отступ предыдущей строки начинается на отступ текущей строки, это DEDENT.
}. Например:
if a:
SSTABif b:
SSTABTABi = 0
SSTABSi = 0
Последняя пара строк не удовлетворяет этому требованию, так как ни строка ‘SSTABTAB’ не начинается на строку
‘SSTABS’, ни ‘SSTABS’ не начинается на ‘SSTABTAB’.
Эта проверка имела бы смысл в случае разрешения смешения пробелов и табуляции для отступа в пределах одной
строки (а это разрешено в Python). Но я решил отказаться от этой идеи, а лучшего текста сообщения для этой
ошибки не придумал.
]]]
---------------------------------------------------------------------------------------------------------------
Error: unindent does not match any outer indentation level
---------------------------------------------------------------------------------------------------------------
[-Добавить описание ошибки.-]
===============================================================================================================
"""
from enum import IntEnum
from typing import List, Tuple
Char = str
keywords = ['V', 'C', 'I', 'E', 'F', 'L', 'N', 'R', 'S', 'T', 'X',
'П', 'С', 'Е', 'И', 'Ф', 'Ц', 'Н', 'Р', 'В', 'Т', 'Х',
'var', 'in', 'if', 'else', 'fn', 'loop', 'null', 'return', 'switch', 'type', 'exception',
'перем', 'С', 'если', 'иначе', 'фн', 'цикл', 'нуль', 'вернуть', 'выбрать', 'тип', 'исключение']
#keywords.remove('C'); keywords.remove('С'); keywords.remove('in') # it is more convenient to consider C/in as an operator, not a keyword (however, this line is not necessary)
# new_scope_keywords = ['else', 'fn', 'if', 'loop', 'switch', 'type']
# Decided not to handle new_scope_keywords at the lexer level because of loop.break and case in switch
empty_list_of_str : List[str] = []
binary_operators : List[List[str]] = [empty_list_of_str, [str('+'), '-', '*', '/', '%', '^', '&', '|', '<', '>', '=', '?'], ['<<', '>>', '<=', '>=', '==', '!=', '+=', '-=', '*=', '/=', '%=', '&=', '|=', '^=', '->', '..', '.<', '.+', '<.', 'I/', 'Ц/', 'C ', 'С '], ['<<=', '>>=', '‘’=', '[+]', '[&]', '[|]', '(+)', '<.<', 'I/=', 'Ц/=', 'in ', '!C ', '!С '], ['[+]=', '[&]=', '[|]=', '(+)=', '!in ']]
unary_operators : List[List[str]] = [empty_list_of_str, [str('!')], ['++', '--'], ['(-)']]
sorted_operators = sorted(binary_operators[1] + binary_operators[2] + binary_operators[3] + binary_operators[4] + unary_operators[1] + unary_operators[2] + unary_operators[3], key = lambda x: len(x), reverse = True)
binary_operators[1].remove('^') # for `^L.break` support
binary_operators[2].remove('..') # for `L(n) 1..`
class Error(Exception):
message : str
pos : int
end : int
def __init__(self, message, pos):
self.message = message
self.pos = pos
self.end = pos
class Token:
class Category(IntEnum): # why ‘Category’: >[https://docs.python.org/3/reference/lexical_analysis.html#other-tokens]:‘the following categories of tokens exist’
NAME = 0 # or IDENTIFIER
KEYWORD = 1
CONSTANT = 2
DELIMITER = 3 # SEPARATOR = 3
OPERATOR = 4
NUMERIC_LITERAL = 5
STRING_LITERAL = 6
STRING_CONCATENATOR = 7 # special token inserted between adjacent string literal and some identifier
SCOPE_BEGIN = 8 # similar to ‘INDENT token in Python’[https://docs.python.org/3/reference/lexical_analysis.html][-1]
SCOPE_END = 9 # similar to ‘DEDENT token in Python’[-1]
STATEMENT_SEPARATOR = 10
start : int
end : int
category : Category
def __init__(self, start, end, category):
self.start = start
self.end = end
self.category = category
def __repr__(self):
return str(self.start)
def value(self, source):
return source[self.start:self.end]
def to_str(self, source):
return 'Token('+str(self.category)+', "'+self.value(source)+'")'
def tokenize(source : str, implied_scopes : List[Tuple[Char, int]] = None, line_continuations : List[int] = None, comments : List[Tuple[int, int]] = None):
tokens : List[Token] = []
indentation_levels : List[Tuple[int, bool]] = []
nesting_elements : List[Tuple[Char, int]] = [] # logically this stack could be merged with indentation_levels, but keeping it separate is a bit more convenient (specifically: for the `nesting_elements[-1][0] != ...` checks)
i = 0
begin_of_line = True
indentation_tabs : bool
prev_linestart : int
def skip_multiline_comment():
nonlocal i, source, comments
comment_start = i
lbr = source[i+1]
rbr = {"‘": "’", "(": ")", "{": "}", "[": "]"}[lbr]
i += 2
nesting_level = 1
while True:
ch = source[i]
i += 1
if ch == lbr:
nesting_level += 1
elif ch == rbr:
nesting_level -= 1
if nesting_level == 0:
break
if i == len(source):
raise Error('there is no corresponding opening parenthesis/bracket/brace/quote for `' + lbr + '`', comment_start+1)
if comments is not None:
comments.append((comment_start, i))
while i < len(source):
if begin_of_line: # at the beginning of each line, the line's indentation level is compared to the last indentation_levels [:1]
begin_of_line = False
linestart = i
tabs = False
spaces = False
while i < len(source):
if source[i] == ' ':
spaces = True
elif source[i] == "\t":
tabs = True
else:
break
i += 1
if i == len(source): # end of source
break
ii = i
if source[i:i+2] in (R'\‘', R'\(', R'\{', R'\['): # ]})’
skip_multiline_comment()
while i < len(source) and source[i] in " \t": # skip whitespace characters
i += 1
if i == len(source): # end of source
break
if source[i] in "\r\n" or source[i:i+2] in ('//', R'\\'): # lines with only whitespace and/or comments do not affect the indentation
continue
if source[i] in "{}": # Indentation level of lines starting with { or } is ignored
continue
if len(tokens) \
and tokens[-1].category == Token.Category.STRING_CONCATENATOR \
and source[i] in '"\'‘': # ’ and not source[i+1:i+2] in ({'"':'"', '‘':'’'}[source[i]],):
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
if source[i:i+2] in ('""', '‘’'):
i += 2
continue
if len(tokens) \
and tokens[-1].category == Token.Category.STRING_LITERAL \
and source[i:i+2] in ('""', '‘’'):
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
tokens.append(Token(i, i, Token.Category.STRING_CONCATENATOR))
i += 2
continue
if (len(tokens)
and tokens[-1].category == Token.Category.OPERATOR
and tokens[-1].value(source) in binary_operators[tokens[-1].end - tokens[-1].start] # ‘Every line of code which ends with any binary operator should be joined with the following line of code.’:[https://github.com/JuliaLang/julia/issues/2097#issuecomment-339924750][-339924750]<
and source[tokens[-1].end-4:tokens[-1].end] != '-> &'): # for `F symbol(id, bp = 0) -> &`
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
continue
# if not (len(indentation_levels) and indentation_levels[-1][0] == -1): # right after the `{` character this rule does not apply ...though I could not come up with an example showing the necessity of such a check, so I leave this `if` commented out # }
if ((source[i ] in binary_operators[1]
or source[i:i+2] in binary_operators[2]
or source[i:i+3] in binary_operators[3]
or source[i:i+4] in binary_operators[4]) # [правило:] ‘Every line of code which begins with any binary operator should be joined with the previous line of code.’:[-339924750]<
and not (source[i ] in unary_operators[1] # Rude fix for:
or source[i:i+2] in unary_operators[2] # a=b
or source[i:i+3] in unary_operators[3]) # ++i // Plus symbol at the beginning here should not be treated as binary + operator, so there is no implied line joining
and (source[i] not in ('&', '-') or source[i+1:i+2] == ' ')): # The characters `&` and `-` are handled specially: line joining happens only if one of these characters is followed by a space
if len(tokens) == 0:
raise Error('source cannot start with a binary operator', i)
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
continue
if source[i:i+2] == R'\.': # // Support for constructions like: ||| You need just to add `\` at the each line starting from dot:
if len(tokens): # \\ result = abc.method1() ||| result = abc.method1()
i += 1 # \\ .method2() ||| \.method2()
#else: # with `if len(tokens): i += 1` there is no need for this else branch
# raise Error('unexpected character `\`')
if line_continuations is not None:
line_continuations.append(tokens[-1].end)
continue
if tabs and spaces:
next_line_pos = source.find("\n", i)
raise Error('mixing tabs and spaces in indentation: `' + source[linestart:i].replace(' ', 'S').replace("\t", 'TAB') + source[i:next_line_pos if next_line_pos != -1 else len(source)] + '`', i)
indentation_level = ii - linestart
if len(indentation_levels) and indentation_levels[-1][0] == -1: # right after the `{` character comes a new arbitrary indentation (lowering the indentation level can be useful if the indentation happens to be too large), which stays in effect up to the matching `}` character
indentation_levels[-1] = (indentation_level, indentation_levels[-1][1]) #indentation_levels[-1][0] = indentation_level # || maybe this is unnecessary (actually it is necessary, see test "fn f()\n{\na = 1") // }
indentation_tabs = tabs
else:
prev_indentation_level = indentation_levels[-1][0] if len(indentation_levels) else 0
if indentation_level > 0 and prev_indentation_level > 0 and indentation_tabs != tabs:
e = i + 1
while e < len(source) and source[e] not in "\r\n":
e += 1
raise Error("inconsistent indentations:\n```\n" + prev_indentation_level*('TAB' if indentation_tabs else 'S') + source[prev_linestart:linestart]
+ (ii-linestart)*('TAB' if tabs else 'S') + source[ii:e] + "\n```", ii)
prev_linestart = ii
if indentation_level == prev_indentation_level: # [1:] [-1]:‘If it is equal, nothing happens.’ :)(: [:2]
if len(tokens) and tokens[-1].category != Token.Category.SCOPE_END:
tokens.append(Token(linestart-1, linestart, Token.Category.STATEMENT_SEPARATOR))
elif indentation_level > prev_indentation_level: # [2:] [-1]:‘If it is larger, it is pushed on the stack, and one INDENT token is generated.’ [:3]
if prev_indentation_level == 0: # len(indentation_levels) == 0 or indentation_levels[-1][0] == 0:
indentation_tabs = tabs # the initial/new choice of the indentation character (either tabs or spaces) is made only from the zero indentation level
indentation_levels.append((indentation_level, False))
tokens.append(Token(linestart, ii, Token.Category.SCOPE_BEGIN))
if implied_scopes is not None:
implied_scopes.append((Char('{'), tokens[-2].end + (1 if source[tokens[-2].end] in " \n" else 0)))
else: # [3:] [-1]:‘If it is smaller, it ~‘must’ be one of the numbers occurring on the stack; all numbers on the stack that are larger are popped off, and for each number popped off a DEDENT token is generated.’ [:4]
while True:
if indentation_levels[-1][1]:
raise Error('too much unindent, what is this unindent intended for?', ii)
indentation_levels.pop()
tokens.append(Token(ii, ii, Token.Category.SCOPE_END))
if implied_scopes is not None:
implied_scopes.append((Char('}'), ii))
level = indentation_levels[-1][0] if len(indentation_levels) else 0 #level, explicit_scope_via_curly_braces = indentation_levels[-1] if len(indentation_levels) else [0, False]
if level == indentation_level:
break
if level < indentation_level:
raise Error('unindent does not match any outer indentation level', ii)
ch = source[i]
if ch in " \t":
i += 1 # just skip whitespace characters
elif ch in "\r\n":
#if newline_chars is not None: # rejected this code as it does not count newline characters inside comments and string literals
# newline_chars.append(i)
i += 1
if ch == "\r" and source[i:i+1] == "\n":
i += 1
if len(nesting_elements) == 0 or nesting_elements[-1][0] not in '([': # if we are inside parentheses/brackets, there is no need to start a new line # ])
begin_of_line = True
elif (ch == '/' and source[i+1:i+2] == '/' ) \
or (ch == '\\' and source[i+1:i+2] == '\\'): # single-line comment
comment_start = i
i += 2
while i < len(source) and source[i] not in "\r\n":
i += 1
if comments is not None:
comments.append((comment_start, i))
elif ch == '\\' and source[i+1:i+2] in "‘({[": # multi-line comment # ]})’
skip_multiline_comment()
else:
def is_hexadecimal_digit(ch):
return '0' <= ch <= '9' or 'A' <= ch <= 'F' or 'a' <= ch <= 'f' or ch in 'абсдефАБСДЕФ'
operator_s = ''
# if ch in 'CС' and not (source[i+1:i+2].isalpha() or source[i+1:i+2].isdigit()): # without this check [and if 'C' is in binary_operators] when identifier starts with `C` (for example `Circle`), then this first letter of identifier is mistakenly considered as an operator
# operator_s = ch
# else:
for op in sorted_operators:
if source[i:i+len(op)] == op:
operator_s = op
break
lexem_start = i
i += 1
category : Token.Category
if operator_s != '':
i = lexem_start + len(operator_s)
if source[i-1] == ' ': # for correct handling of operator 'C '/'in ' in external tools (e.g. keyletters_to_keywords.py)
i -= 1
category = Token.Category.OPERATOR
elif ch.isalpha() or ch in ('_', '@'): # this is NAME/IDENTIFIER or KEYWORD
if ch == '@':
while i < len(source) and source[i] == '@':
i += 1
if i < len(source) and source[i] == '=':
i += 1
while i < len(source):
ch = source[i]
if not (ch.isalpha() or ch in '_?:' or '0' <= ch <= '9'):
break
i += 1
# Tokenize `fs:path:dirname` to ['fs:path', ':', 'dirname']
j = i - 1
while j > lexem_start:
if source[j] == ':':
i = j
break
j -= 1
if source[i:i+1] == '/' and source[i-1:i] in 'IЦ':
if source[i-2:i-1] == ' ':
category = Token.Category.OPERATOR
else:
raise Error('please clarify your intention by putting a space character before or after `I`', i-1)
elif source[i:i+1] == "'": # this is a named argument, a raw string or a hexadecimal number
i += 1
if source[i:i+1] == ' ': # this is a named argument
category = Token.Category.NAME
elif source[i:i+1] in ('‘', "'"): # ’ # this is a raw string
i -= 1
category = Token.Category.NAME
else: # this is a hexadecimal number
while i < len(source) and (is_hexadecimal_digit(source[i]) or source[i] == "'"):
i += 1
if not (source[lexem_start+4:lexem_start+5] == "'" or source[i-3:i-2] == "'" or source[i-2:i-1] == "'"):
raise Error('digit separator in this hexadecimal number is located in the wrong place', lexem_start)
category = Token.Category.NUMERIC_LITERAL
elif source[lexem_start:i] in keywords:
if source[lexem_start:i] in ('V', 'П', 'var', 'перем'): # it is more convenient to consider V/var as [type] name, not a keyword
category = Token.Category.NAME
if source[i:i+1] == '&':
i += 1
elif source[lexem_start:i] in ('N', 'Н', 'null', 'нуль'):
category = Token.Category.CONSTANT
else:
category = Token.Category.KEYWORD
if source[i:i+1] == '.': # this is composite keyword like `L.break`
i += 1
while i < len(source) and (source[i].isalpha() or source[i] in '_.'):
i += 1
if source[lexem_start:i] in ('L.index', 'Ц.индекс', 'loop.index', 'цикл.индекс'): # for correct STRING_CONCATENATOR insertion
category = Token.Category.NAME
else:
category = Token.Category.NAME
elif '0' <= ch <= '9': # this is NUMERIC_LITERAL or CONSTANT 0B or 1B
if ch in '01' and source[i:i+1] in ('B', 'В') and not (is_hexadecimal_digit(source[i+1:i+2]) or source[i+1:i+2] == "'"):
i += 1
category = Token.Category.CONSTANT
else:
is_hex = False
while i < len(source) and is_hexadecimal_digit(source[i]):
if not ('0' <= source[i] <= '9'):
if source[i] in 'eE' and source[i+1:i+2] in ('-', '+'): # fix `1e-10`
break
is_hex = True
i += 1
next_digit_separator = 0
is_oct_or_bin = False
if i < len(source) and source[i] == "'":
if i - lexem_start in (2, 1): # special handling for 12'345/1'234 (so that it is not treated as a short/ultrashort hexadecimal number)
j = i + 1
while j < len(source) and is_hexadecimal_digit(source[j]):
if not ('0' <= source[j] <= '9'):
is_hex = True
j += 1
next_digit_separator = j - 1 - i
elif i - lexem_start == 4: # special handling for 1010'1111b (so that it is not treated as a hexadecimal number)
j = i + 1
while j < len(source) and ((is_hexadecimal_digit(source[j]) and not source[j] in 'bд') or source[j] == "'"): # I know, checking for `in 'bд'` is hacky
j += 1
if j < len(source) and source[j] in 'oоbд':
is_oct_or_bin = True
if i < len(source) and source[i] == "'" and ((i - lexem_start == 4 and not is_oct_or_bin) or (i - lexem_start in (2, 1) and (next_digit_separator != 3 or is_hex))): # this is a hexadecimal number
if i - lexem_start == 2: # this is a short hexadecimal number
while True:
i += 1
if i + 2 > len(source) or not is_hexadecimal_digit(source[i]) or not is_hexadecimal_digit(source[i+1]):
raise Error('wrong short hexadecimal number', lexem_start)
i += 2
if i < len(source) and is_hexadecimal_digit(source[i]):
raise Error('expected end of short hexadecimal number', i)
if source[i:i+1] != "'":
break
elif i - lexem_start == 1: # this is an ultrashort hexadecimal number
i += 1
if i + 1 > len(source) or not is_hexadecimal_digit(source[i]):
raise Error('wrong ultrashort hexadecimal number', lexem_start)
i += 1
if i < len(source) and is_hexadecimal_digit(source[i]):
raise Error('expected end of ultrashort hexadecimal number', i)
else:
i += 1
while i < len(source) and is_hexadecimal_digit(source[i]):
i += 1
if (i - lexem_start) % 5 == 4 and i < len(source):
if source[i] != "'":
if not is_hexadecimal_digit(source[i]):
break
raise Error('here should be a digit separator in hexadecimal number', i)
i += 1
if i < len(source) and source[i] == "'":
raise Error('digit separator in hexadecimal number is located in the wrong place', i)
if (i - lexem_start) % 5 != 4:
raise Error('after this digit separator there should be 4 digits in hexadecimal number', source.rfind("'", 0, i))
else:
while i < len(source) and ('0' <= source[i] <= '9' or source[i] in "'.eE"):
if source[i:i+2] in ('..', '.<', '.+'):
break
if source[i] in 'eE':
if source[i+1:i+2] in '-+':
i += 1
i += 1
if source[i:i+1] in ('o', 'о', 'b', 'д', 's', 'i'):
i += 1
elif "'" in source[lexem_start:i] and not '.' in source[lexem_start:i]: # float numbers do not checked for a while
number = source[lexem_start:i].replace("'", '')
number_with_separators = ''
j = len(number)
while j > 3:
number_with_separators = "'" + number[j-3:j] + number_with_separators
j -= 3
number_with_separators = number[0:j] + number_with_separators
if source[lexem_start:i] != number_with_separators:
raise Error('digit separator in this number is located in the wrong place (should be: '+ number_with_separators +')', lexem_start)
category = Token.Category.NUMERIC_LITERAL
elif ch == "'" and source[i:i+1] == ',': # this is a named-only arguments mark
i += 1
category = Token.Category.DELIMITER
elif ch == '"':
if source[i] == '"' \
and tokens[-1].category == Token.Category.STRING_CONCATENATOR \
and tokens[-2].category == Token.Category.STRING_LITERAL \
and tokens[-2].value(source)[0] == '‘': # ’ // for cases like r = abc‘some big ...’""
i += 1 # \\ ‘... string’
continue # [(
startqpos = i - 1
if len(tokens) and tokens[-1].end == startqpos and ((tokens[-1].category == Token.Category.NAME and tokens[-1].value(source)[-1] != "'") or tokens[-1].value(source) in (')', ']')):
tokens.append(Token(lexem_start, lexem_start, Token.Category.STRING_CONCATENATOR))
while True:
if i == len(source):
raise Error('unclosed string literal', startqpos)
ch = source[i]
i += 1
if ch == '\\':
if i == len(source):
continue
i += 1
elif ch == '"':
break
if source[i:i+1].isalpha() or source[i:i+1] in ('_', '@', ':', '‘', '('): # )’
tokens.append(Token(lexem_start, i, Token.Category.STRING_LITERAL))
tokens.append(Token(i, i, Token.Category.STRING_CONCATENATOR))
continue
category = Token.Category.STRING_LITERAL
elif ch in "‘'":
if source[i] == '’' \
and tokens[-1].category == Token.Category.STRING_CONCATENATOR \
and tokens[-2].category == Token.Category.STRING_LITERAL \
and tokens[-2].value(source)[0] == '"': # // for cases like r = abc"some big ..."‘’
i += 1 # \\ ‘... string’
continue # ‘[(
if len(tokens) and tokens[-1].end == i - 1 and ((tokens[-1].category == Token.Category.NAME and tokens[-1].value(source)[-1] != "'") or tokens[-1].value(source) in (')', ']')):
tokens.append(Token(lexem_start, lexem_start, Token.Category.STRING_CONCATENATOR))
if source[i] == '’': # for cases like `a‘’b`
i += 1
continue
i -= 1
while i < len(source) and source[i] == "'":
i += 1
if source[i:i+1] != '‘': # ’
raise Error('expected left single quotation mark', i)
startqpos = i
i += 1
nesting_level = 1
while True:
if i == len(source):
raise Error('unpaired left single quotation mark', startqpos)
ch = source[i]
i += 1
if ch == "‘":
nesting_level += 1
elif ch == "’":
nesting_level -= 1
if nesting_level == 0:
break
while i < len(source) and source[i] == "'":
i += 1
if source[i:i+1].isalpha() or source[i:i+1] in ('_', '@', ':', '"', '('): # )
tokens.append(Token(lexem_start, i, Token.Category.STRING_LITERAL))
tokens.append(Token(i, i, Token.Category.STRING_CONCATENATOR))
continue
category = Token.Category.STRING_LITERAL
elif ch == '{':
indentation_levels.append((-1, True))
nesting_elements.append((Char('{'), lexem_start)) # }
category = Token.Category.SCOPE_BEGIN
elif ch == '}':
if len(nesting_elements) == 0 or nesting_elements[-1][0] != '{':
raise Error('there is no corresponding opening brace for `}`', lexem_start)
nesting_elements.pop()
while indentation_levels[-1][1] != True:
tokens.append(Token(lexem_start, lexem_start, Token.Category.SCOPE_END))
if implied_scopes is not None: # {
implied_scopes.append((Char('}'), lexem_start))
indentation_levels.pop()
assert(indentation_levels.pop()[1] == True)
category = Token.Category.SCOPE_END
elif ch == ';':
category = Token.Category.STATEMENT_SEPARATOR
elif ch in (',', '.', ':'):
category = Token.Category.DELIMITER
elif ch in '([':
if source[lexem_start:lexem_start+3] == '(.)':
i += 2
category = Token.Category.NAME
else:
nesting_elements.append((ch, lexem_start))
category = Token.Category.DELIMITER
elif ch in '])': # ([
if len(nesting_elements) == 0 or nesting_elements[-1][0] != {']':'[', ')':'('}[ch]: # ])
raise Error('there is no corresponding opening parenthesis/bracket for `' + ch + '`', lexem_start)
nesting_elements.pop()
category = Token.Category.DELIMITER
else:
raise Error('unexpected character `' + ch + '`', lexem_start)
tokens.append(Token(lexem_start, i, category))
if len(nesting_elements):
raise Error('there is no corresponding closing parenthesis/bracket/brace for `' + nesting_elements[-1][0] + '`', nesting_elements[-1][1])
# [4:] [-1]:‘At the end of the file, a DEDENT token is generated for each number remaining on the stack that is larger than zero.’
while len(indentation_levels):
assert(indentation_levels[-1][1] != True)
tokens.append(Token(i, i, Token.Category.SCOPE_END))
if implied_scopes is not None: # {
implied_scopes.append((Char('}'), i-1 if source[-1] == "\n" else i))
indentation_levels.pop()
return tokens | 11l | /11l-2021.3-py3-none-any.whl/_11l_to_cpp/tokenizer.py | tokenizer.py |
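# A minimal usage sketch (illustrative input, not part of the module). It tokenizes a
# two-line snippet and collects the implied `{`/`}` scopes that the indentation produces.
def _example_tokenize(): # hypothetical helper, for illustration only
    src = "if a > 0\n   print(a)\n"
    implied_scopes : List[Tuple[Char, int]] = []
    toks = [t.to_str(src) for t in tokenize(src, implied_scopes)]
    return toks, implied_scopes # implied_scopes holds the positions of the inferred '{' and '}'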
from io import BytesIO
from django.core.files.images import ImageFile
from faker.providers import BaseProvider
from x11x_wagtail_blog.models import AboutTheAuthor
class X11XWagtailBlogProvider(BaseProvider):
"""
Provider for the wonderful faker library. Add `X11XWagtailBlogProvider` to a standard `Faker` instance to generate
data for your test code.
>>> from faker import Faker
>>> fake = Faker()
>>> fake.add_provider(X11XWagtailBlogProvider)
>>> fake.avatar_image_content() # doctest: +NORMALIZE_QUOTES
b'\\x89PNG...
"""
def avatar_image_content(self, *, size=(32, 32)) -> bytes:
"""
Generate an avatar image of the given size. By default, the image
will be a PNG 32 pixels by 32 pixels.
The use of the image generation functions requires the PIL library to be installed.
:param tuple[int, int] size: The width and height of the image to generate.
:return bytes: Returns the binary content of the PNG.
>>> fake.avatar_image_content(size=(4, 4)) # doctest: +NORMALIZE_QUOTES
b'\\x89PNG...
"""
return self.generator.image(
size=size,
image_format="png",
)
def avatar_image_file(self) -> ImageFile:
"""
Generates a `django.core.files.images.ImageFile` that can be assigned to a user's profile.
The use of the image generation functions requires the PIL library to be installed.
>>> fake.avatar_image_file()
<ImageFile: ....png>
"""
return ImageFile(
BytesIO(self.avatar_image_content()),
self.generator.file_name(extension="png"),
)
def about_the_author(self, author) -> AboutTheAuthor:
"""
Generates an AboutTheAuthor snippet.
"""
return AboutTheAuthor(
author=author,
body=self.generator.paragraph(),
)
def title_image_content(self, *, size=(2, 2)) -> bytes:
"""
Generates image content suitable for the 'title_image'. Unless ``size`` is given, a 2x2 pixel image will be generated.
>>> fake.title_image_content() # doctest: +NORMALIZE_QUOTES
b'\\x89PNG...
:param tuple[int, int] size: The width and height of the image to generate.
:return bytes: Returns the content of the title image.
"""
return self.generator.image(
size=size,
image_format="png",
)
def title_image_file(self, *, name=None) -> ImageFile:
"""
Generates a `django.core.files.images.ImageFile` that can be assigned to an article's `title_image` field.
>>> fake.title_image_file(name="this-name.png")
<ImageFile: this-name.png>
:param str name: The name of the image file to generate.
:return ImageFile: Returns an `ImageFile`
"""
name = name or self.generator.file_name(extension="png")
return ImageFile(
BytesIO(self.title_image_content()),
name,
) | 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/fakers.py | fakers.py |
from django.conf import settings
from django.db import models
from django.utils import timezone
from modelcluster.fields import ParentalKey
from wagtail.admin.panels import FieldPanel, InlinePanel
from wagtail.fields import StreamField, RichTextField
from wagtail.models import Page
from wagtail.snippets.blocks import SnippetChooserBlock
from wagtail.snippets.models import register_snippet
_RICH_TEXT_SUMMARY_FEATURES = getattr(settings, "X11X_WAGTAIL_BLOG_SUMMARY_FEATURES", ["bold", "italic", "code", "superscript", "subscript", "strikethrough"])
@register_snippet
class AboutTheAuthor(models.Model):
"""
A snippet holding the content of an 'About the Author' section for particular authors.
These snippets are intended to be organized by the various authors of a website. Individual users
may have several 'about' blurbs that they can choose depending on what a particular article calls
for.
"""
author = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.RESTRICT,
editable=True,
blank=False,
related_name="about_the_author_snippets",
)
"A reference to the author this snippet is about."
body = RichTextField()
"A paragraph or two describing the associated author."
panels = [
FieldPanel("author"),
FieldPanel("body"),
]
def __str__(self):
return str(self.author)
class RelatedArticles(models.Model):
"""
You should never have to instantiate ``RelatedArticles`` directly. This is a
model to implement the m2m relationship between articles.
"""
related_to = ParentalKey("ExtensibleArticlePage", verbose_name="Article", related_name="related_article_to")
related_from = ParentalKey("ExtensibleArticlePage", verbose_name="Article", related_name="related_article_from")
class ExtensibleArticlePage(Page):
"""
`ExtensibleArticlePage` is the base class for blog articles. Inherit from `ExtensibleArticlePage`
and add your own ``body`` field. `ExtensibleArticlePage` pages are NOT creatable through the wagtail
admin. A minimal subclassing sketch is shown at the end of this module.
"""
date = models.DateTimeField(default=timezone.now, null=False, blank=False, editable=True)
"Date to appear in the article subheading."
summary = RichTextField(features=_RICH_TEXT_SUMMARY_FEATURES, default="", blank=True, null=False)
"The article's summary. `summary` will show up in index pages."
title_image = models.ForeignKey(
"wagtailimages.Image",
on_delete=models.RESTRICT,
related_name="+",
null=True,
blank=True,
)
"The image to use in the title header or section of the article."
authors = StreamField(
[
("about_the_authors", SnippetChooserBlock(AboutTheAuthor)),
],
default=list,
use_json_field=True,
blank=True,
)
"About the author sections to include with the article.."
is_creatable = False
settings_panels = Page.settings_panels + [
FieldPanel("date"),
FieldPanel("owner"),
]
pre_body_content_panels = Page.content_panels + [
FieldPanel("title_image"),
FieldPanel("summary"),
]
"Admin `FieldPanels` intended to be displayed BEFORE a ``body`` field."
post_body_content_panels = [
FieldPanel("authors"),
InlinePanel(
"related_article_from",
label="Related Articles",
panels=[FieldPanel("related_to")]
)
]
"Admin `FieldPanel` s intended to be displayed AFTER a ``body`` field."
def has_authors(self):
"""
Returns ``True`` if this article has one or more 'about the authors' snippet. ``False`` otherwise.
"""
return len(self.authors) > 0
@classmethod
def with_body_panels(cls, panels):
"""
A helper method that concatenates all the admin panels of this class with the admin panels intended to enter content
of the main body.
:param panels: Panels intended to show up under the "Title" and "Summary" sections, but before
the 'trailing' sections.
"""
return cls.pre_body_content_panels + panels + cls.post_body_content_panels
def get_template(self, request, *args, **kwargs):
"""
Returns the default template. This method will likely be removed in the (very) near future.
Like all wagtail pages, it may be overridden to return the intended template.
:deprecated:
"""
return getattr(settings, "X11X_WAGTAIL_BLOG_ARTICLE_TEMPLATE", "x11x_wagtail_blog/article_page.html")
def has_related_articles(self):
"""
Returns `True` if this page has related articles associated with it. Returns ``False`` otherwise.
"""
return self.related_article_from.all().count() > 0
@property
def related_articles(self):
"""
An iterable of the articles related to this one.
"""
return [to.related_to for to in self.related_article_from.all()]
@related_articles.setter
def related_articles(self, value):
"""
Sets the articles related to this one.
:param list[ExtensibleArticlePage] value: A list of related articles.
"""
self.related_article_from = [
RelatedArticles(
related_from=self,
related_to=v
) for v in value
] | 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/models.py | models.py |
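# A minimal subclassing sketch (hypothetical app code, not part of this package):
# a concrete, creatable article page adds its own `body` field and reuses the shared
# admin panels via `with_body_panels`. It is kept as a comment so that importing this
# module does not register an extra model:
#
# from wagtail.admin.panels import FieldPanel
# from wagtail.fields import RichTextField
# from x11x_wagtail_blog.models import ExtensibleArticlePage
#
# class ArticlePage(ExtensibleArticlePage):
#     body = RichTextField()
#
#     content_panels = ExtensibleArticlePage.with_body_panels([
#         FieldPanel("body"),
#     ])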
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import modelcluster.fields
import wagtail.fields
import wagtail.snippets.blocks
import x11x_wagtail_blog.models
class Migration(migrations.Migration):
initial = True
dependencies = [
("wagtailcore", "0083_workflowcontenttype"),
("wagtailimages", "0025_alter_image_file_alter_rendition_file"),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name="ExtensibleArticlePage",
fields=[
(
"page_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="wagtailcore.page",
),
),
("date", models.DateTimeField(default=django.utils.timezone.now)),
("summary", wagtail.fields.RichTextField(blank=True, default="")),
(
"authors",
wagtail.fields.StreamField(
[
(
"about_the_authors",
wagtail.snippets.blocks.SnippetChooserBlock(x11x_wagtail_blog.models.AboutTheAuthor),
)
],
blank=True,
default=list,
use_json_field=True,
),
),
(
"title_image",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.RESTRICT,
related_name="+",
to="wagtailimages.image",
),
),
],
options={
"abstract": False,
},
bases=("wagtailcore.page",),
),
migrations.CreateModel(
name="RelatedArticles",
fields=[
("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
(
"related_from",
modelcluster.fields.ParentalKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="related_article_from",
to="x11x_wagtail_blog.extensiblearticlepage",
verbose_name="Article",
),
),
(
"related_to",
modelcluster.fields.ParentalKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="related_article_to",
to="x11x_wagtail_blog.extensiblearticlepage",
verbose_name="Article",
),
),
],
),
migrations.CreateModel(
name="AboutTheAuthor",
fields=[
("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
("body", wagtail.fields.RichTextField()),
(
"author",
models.ForeignKey(
on_delete=django.db.models.deletion.RESTRICT,
related_name="about_the_author_snippets",
to=settings.AUTH_USER_MODEL,
),
),
],
),
] | 11x-wagtail-blog | /11x_wagtail_blog-0.2.0-py3-none-any.whl/x11x_wagtail_blog/migrations/0001_initial.py | 0001_initial.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | 12-distributions | /12_distributions-0.1.tar.gz/12_distributions-0.1/12_distributions/Gaussiandistribution.py | Gaussiandistribution.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
"""
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
assert self.p == other.p, 'p values are not equal'
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Binomial
"""
return "mean {}, standard deviation {}, p {}, n {}".\
format(self.mean, self.stdev, self.p, self.n) | 12-distributions | /12_distributions-0.1.tar.gz/12_distributions-0.1/12_distributions/Binomialdistribution.py | Binomialdistribution.py |
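# A minimal usage sketch (values are illustrative):
#
#   binomial = Binomial(prob=0.4, size=20)
#   binomial.mean                       # 8.0
#   binomial.pdf(5)                     # P(k=5) for n=20, p=0.4, ~0.0746
#   combined = Binomial(0.4, 20) + Binomial(0.4, 60)  # n=80, p=0.4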
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | 12-test | /12@test-0.1.tar.gz/12@test-0.1/distributions/Gaussiandistribution.py | Gaussiandistribution.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
"""
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
assert self.p == other.p, 'p values are not equal'
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Binomial
"""
return "mean {}, standard deviation {}, p {}, n {}".\
format(self.mean, self.stdev, self.p, self.n) | 12-test | /12@test-0.1.tar.gz/12@test-0.1/distributions/Binomialdistribution.py | Binomialdistribution.py |
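# A minimal usage sketch (data values are illustrative and assigned directly
# rather than read from a file):
#
#   binomial = Binomial()
#   binomial.data = [0, 1, 1, 0, 1]
#   binomial.replace_stats_with_data()  # n=5, p=0.6
#   print(binomial)                     # mean 3.0, standard deviation ~1.095, p 0.6, n 5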
![Logo](https://storage.googleapis.com/tf_model_garden/tf_model_garden_logo.png)
# TensorFlow Research Models
This directory contains code implementations and pre-trained models of published research papers.
The research models are maintained by their respective authors.
## Table of Contents
- [TensorFlow Research Models](#tensorflow-research-models)
- [Table of Contents](#table-of-contents)
- [Modeling Libraries and Models](#modeling-libraries-and-models)
- [Models and Implementations](#models-and-implementations)
- [Computer Vision](#computer-vision)
- [Natural Language Processing](#natural-language-processing)
- [Audio and Speech](#audio-and-speech)
- [Reinforcement Learning](#reinforcement-learning)
- [Others](#others)
- [Old Models and Implementations in TensorFlow 1](#old-models-and-implementations-in-tensorflow-1)
- [Contributions](#contributions)
## Modeling Libraries and Models
| Directory | Name | Description | Maintainer(s) |
|-----------|------|-------------|---------------|
| [object_detection](object_detection) | TensorFlow Object Detection API | A framework that makes it easy to construct, train and deploy object detection models<br /><br />A collection of object detection models pre-trained on the COCO dataset, the Kitti dataset, the Open Images dataset, the AVA v2.1 dataset, and the iNaturalist Species Detection Dataset| jch1, tombstone, pkulzc |
| [slim](slim) | TensorFlow-Slim Image Classification Model Library | A lightweight high-level API of TensorFlow for defining, training and evaluating image classification models <br />• Inception V1/V2/V3/V4<br />• Inception-ResNet-v2<br />• ResNet V1/V2<br />• VGG 16/19<br />• MobileNet V1/V2/V3<br />• NASNet-A_Mobile/Large<br />• PNASNet-5_Large/Mobile | sguada, marksandler2 |
## Models and Implementations
### Computer Vision
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [attention_ocr](attention_ocr) | [Attention-based Extraction of Structured Information from Street View Imagery](https://arxiv.org/abs/1704.03549) | ICDAR 2017 | xavigibert |
| [autoaugment](autoaugment) | [1] [AutoAugment](https://arxiv.org/abs/1805.09501)<br />[2] [Wide Residual Networks](https://arxiv.org/abs/1605.07146)<br />[3] [Shake-Shake regularization](https://arxiv.org/abs/1705.07485)<br />[4] [ShakeDrop Regularization for Deep Residual Learning](https://arxiv.org/abs/1802.02375) | [1] CVPR 2019<br />[2] BMVC 2016<br /> [3] ICLR 2017<br /> [4] ICLR 2018 | barretzoph |
| [deeplab](deeplab) | [1] [DeepLabv1: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs](https://arxiv.org/abs/1412.7062)<br />[2] [DeepLabv2: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs](https://arxiv.org/abs/1606.00915)<br />[3] [DeepLabv3: Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)<br />[4] [DeepLabv3+: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611)<br />| [1] ICLR 2015 <br />[2] TPAMI 2017 <br />[4] ECCV 2018 | aquariusjay, yknzhu |
| [delf](delf) | [1] DELF (DEep Local Features): [Large-Scale Image Retrieval with Attentive Deep Local Features](https://arxiv.org/abs/1612.06321)<br />[2] [Detect-to-Retrieve: Efficient Regional Aggregation for Image Search](https://arxiv.org/abs/1812.01584)<br />[3] DELG (DEep Local and Global features): [Unifying Deep Local and Global Features for Image Search](https://arxiv.org/abs/2001.05027)<br />[4] GLDv2: [Google Landmarks Dataset v2 -- A Large-Scale Benchmark for Instance-Level Recognition and Retrieval](https://arxiv.org/abs/2004.01804) | [1] ICCV 2017<br />[2] CVPR 2019<br />[4] CVPR 2020 | andrefaraujo |
| [lstm_object_detection](lstm_object_detection) | [Mobile Video Object Detection with Temporally-Aware Feature Maps](https://arxiv.org/abs/1711.06368) | CVPR 2018 | yinxiaoli, yongzhe2160, lzyuan |
| [marco](marco) | MARCO: [Classification of crystallization outcomes using deep convolutional neural networks](https://arxiv.org/abs/1803.10342) | | vincentvanhoucke |
| [vid2depth](vid2depth) | [Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints](https://arxiv.org/abs/1802.05522) | CVPR 2018 | rezama |
### Natural Language Processing
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [adversarial_text](adversarial_text) | [1] [Adversarial Training Methods for Semi-Supervised Text Classification](https://arxiv.org/abs/1605.07725)<br />[2] [Semi-supervised Sequence Learning](https://arxiv.org/abs/1511.01432) | [1] ICLR 2017<br />[2] NIPS 2015 | rsepassi, a-dai |
| [cvt_text](cvt_text) | [Semi-Supervised Sequence Modeling with Cross-View Training](https://arxiv.org/abs/1809.08370) | EMNLP 2018 | clarkkev, lmthang |
### Audio and Speech
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [audioset](audioset) | [1] [Audio Set: An ontology and human-labeled dataset for audio events](https://research.google/pubs/pub45857/)<br />[2] [CNN Architectures for Large-Scale Audio Classification](https://research.google/pubs/pub45611/) | ICASSP 2017 | plakal, dpwe |
| [deep_speech](deep_speech) | [Deep Speech 2](https://arxiv.org/abs/1512.02595) | ICLR 2016 | yhliang2018 |
### Reinforcement Learning
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [efficient-hrl](efficient-hrl) | [1] [Data-Efficient Hierarchical Reinforcement Learning](https://arxiv.org/abs/1805.08296)<br />[2] [Near-Optimal Representation Learning for Hierarchical Reinforcement Learning](https://arxiv.org/abs/1810.01257) | [1] NIPS 2018<br /> [2] ICLR 2019 | ofirnachum |
| [pcl_rl](pcl_rl) | [1] [Improving Policy Gradient by Exploring Under-appreciated Rewards](https://arxiv.org/abs/1611.09321)<br />[2] [Bridging the Gap Between Value and Policy Based Reinforcement Learning](https://arxiv.org/abs/1702.08892)<br />[3] [Trust-PCL: An Off-Policy Trust Region Method for Continuous Control](https://arxiv.org/abs/1707.01891) | [1] ICLR 2017<br />[2] NIPS 2017<br />[3] ICLR 2018 | ofirnachum |
### Others
| Directory | Paper(s) | Conference | Maintainer(s) |
|-----------|----------|------------|---------------|
| [lfads](lfads) | [LFADS - Latent Factor Analysis via Dynamical Systems](https://arxiv.org/abs/1608.06315) | | jazcollins, sussillo |
| [rebar](rebar) | [REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models](https://arxiv.org/abs/1703.07370) | NIPS 2017 | gjtucker |
### Old Models and Implementations in TensorFlow 1
:warning: If you are looking for old models, please visit the [Archive branch](https://github.com/tensorflow/models/tree/archive/research).
---
## Contributions
If you want to contribute, please review the [contribution guidelines](https://github.com/tensorflow/models/wiki/How-to-contribute).
| 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/README.md | README.md |
"""Build and train mobilenet_v1 with options for quantization."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.contrib import quantize as contrib_quantize
from datasets import dataset_factory
from nets import mobilenet_v1
from preprocessing import preprocessing_factory
flags = tf.app.flags
flags.DEFINE_string('master', '', 'Session master')
flags.DEFINE_integer('task', 0, 'Task')
flags.DEFINE_integer('ps_tasks', 0, 'Number of parameter server tasks')
flags.DEFINE_integer('batch_size', 64, 'Batch size')
flags.DEFINE_integer('num_classes', 1001, 'Number of classes to distinguish')
flags.DEFINE_integer('number_of_steps', None,
'Number of training steps to perform before stopping')
flags.DEFINE_integer('image_size', 224, 'Input image resolution')
flags.DEFINE_float('depth_multiplier', 1.0, 'Depth multiplier for mobilenet')
flags.DEFINE_bool('quantize', False, 'Quantize training')
flags.DEFINE_string('fine_tune_checkpoint', '',
'Checkpoint from which to start finetuning.')
flags.DEFINE_string('checkpoint_dir', '',
'Directory for writing training checkpoints and logs')
flags.DEFINE_string('dataset_dir', '', 'Location of dataset')
flags.DEFINE_integer('log_every_n_steps', 100, 'Number of steps per log')
flags.DEFINE_integer('save_summaries_secs', 100,
'How often to save summaries, secs')
flags.DEFINE_integer('save_interval_secs', 100,
'How often to save checkpoints, secs')
FLAGS = flags.FLAGS
_LEARNING_RATE_DECAY_FACTOR = 0.94
def get_learning_rate():
if FLAGS.fine_tune_checkpoint:
# If we are fine tuning a checkpoint we need to start at a lower learning
# rate since we are farther along on training.
return 1e-4
else:
return 0.045
def get_quant_delay():
if FLAGS.fine_tune_checkpoint:
# We can start quantizing immediately if we are finetuning.
return 0
else:
# We need to wait for the model to train a bit before we quantize if we are
# training from scratch.
return 250000
def imagenet_input(is_training):
"""Data reader for imagenet.
Reads in imagenet data and performs pre-processing on the images.
Args:
is_training: bool specifying if train or validation dataset is needed.
Returns:
A batch of images and labels.
"""
if is_training:
dataset = dataset_factory.get_dataset('imagenet', 'train',
FLAGS.dataset_dir)
else:
dataset = dataset_factory.get_dataset('imagenet', 'validation',
FLAGS.dataset_dir)
provider = slim.dataset_data_provider.DatasetDataProvider(
dataset,
shuffle=is_training,
common_queue_capacity=2 * FLAGS.batch_size,
common_queue_min=FLAGS.batch_size)
[image, label] = provider.get(['image', 'label'])
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
'mobilenet_v1', is_training=is_training)
image = image_preprocessing_fn(image, FLAGS.image_size, FLAGS.image_size)
images, labels = tf.train.batch([image, label],
batch_size=FLAGS.batch_size,
num_threads=4,
capacity=5 * FLAGS.batch_size)
labels = slim.one_hot_encoding(labels, FLAGS.num_classes)
return images, labels
def build_model():
"""Builds graph for model to train with rewrites for quantization.
Returns:
g: Graph with fake quantization ops and batch norm folding suitable for
training quantized weights.
train_tensor: Train op for execution during training.
"""
g = tf.Graph()
with g.as_default(), tf.device(
tf.train.replica_device_setter(FLAGS.ps_tasks)):
inputs, labels = imagenet_input(is_training=True)
with slim.arg_scope(mobilenet_v1.mobilenet_v1_arg_scope(is_training=True)):
logits, _ = mobilenet_v1.mobilenet_v1(
inputs,
is_training=True,
depth_multiplier=FLAGS.depth_multiplier,
num_classes=FLAGS.num_classes)
tf.losses.softmax_cross_entropy(labels, logits)
# Call rewriter to produce graph with fake quant ops and folded batch norms
# quant_delay delays start of quantization till quant_delay steps, allowing
# for better model accuracy.
if FLAGS.quantize:
contrib_quantize.create_training_graph(quant_delay=get_quant_delay())
total_loss = tf.losses.get_total_loss(name='total_loss')
# Configure the learning rate using an exponential decay.
num_epochs_per_decay = 2.5
imagenet_size = 1271167
decay_steps = int(imagenet_size / FLAGS.batch_size * num_epochs_per_decay)
learning_rate = tf.train.exponential_decay(
get_learning_rate(),
tf.train.get_or_create_global_step(),
decay_steps,
_LEARNING_RATE_DECAY_FACTOR,
staircase=True)
opt = tf.train.GradientDescentOptimizer(learning_rate)
train_tensor = slim.learning.create_train_op(
total_loss,
optimizer=opt)
slim.summaries.add_scalar_summary(total_loss, 'total_loss', 'losses')
slim.summaries.add_scalar_summary(learning_rate, 'learning_rate', 'training')
return g, train_tensor
def get_checkpoint_init_fn():
"""Returns the checkpoint init_fn if the checkpoint is provided."""
if FLAGS.fine_tune_checkpoint:
variables_to_restore = slim.get_variables_to_restore()
global_step_reset = tf.assign(
tf.train.get_or_create_global_step(), 0)
# When restoring from a floating point model, the min/max values for
# quantized weights and activations are not present.
# We instruct slim to ignore variables that are missing during restoration
# by setting ignore_missing_vars=True
slim_init_fn = slim.assign_from_checkpoint_fn(
FLAGS.fine_tune_checkpoint,
variables_to_restore,
ignore_missing_vars=True)
def init_fn(sess):
slim_init_fn(sess)
# If we are restoring from a floating point model, we need to initialize
# the global step to zero for the exponential decay to result in
# reasonable learning rates.
sess.run(global_step_reset)
return init_fn
else:
return None
def train_model():
"""Trains mobilenet_v1."""
g, train_tensor = build_model()
with g.as_default():
slim.learning.train(
train_tensor,
FLAGS.checkpoint_dir,
is_chief=(FLAGS.task == 0),
master=FLAGS.master,
log_every_n_steps=FLAGS.log_every_n_steps,
graph=g,
number_of_steps=FLAGS.number_of_steps,
save_summaries_secs=FLAGS.save_summaries_secs,
save_interval_secs=FLAGS.save_interval_secs,
init_fn=get_checkpoint_init_fn(),
global_step=tf.train.get_global_step())
def main(unused_arg):
train_model()
if __name__ == '__main__':
tf.app.run(main) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/mobilenet_v1_train.py | mobilenet_v1_train.py |
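# Example invocation (paths are placeholders; flags are defined above):
#
#   python mobilenet_v1_train.py \
#       --dataset_dir=/path/to/imagenet-tfrecords \
#       --checkpoint_dir=/tmp/mobilenet_v1_train \
#       --quantize=True \
#       --fine_tune_checkpoint=/path/to/float/model.ckpt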
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
def block35(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
"""Builds the 35x35 resnet block."""
with tf.variable_scope(scope, 'Block35', [net], reuse=reuse):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 32, 1, scope='Conv2d_1x1')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 32, 3, scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
tower_conv2_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1')
tower_conv2_1 = slim.conv2d(tower_conv2_0, 48, 3, scope='Conv2d_0b_3x3')
tower_conv2_2 = slim.conv2d(tower_conv2_1, 64, 3, scope='Conv2d_0c_3x3')
mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_1, tower_conv2_2])
up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
activation_fn=None, scope='Conv2d_1x1')
scaled_up = up * scale
if activation_fn == tf.nn.relu6:
# Use clip_by_value to simulate bandpass activation.
scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)
net += scaled_up
if activation_fn:
net = activation_fn(net)
return net
def block17(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
"""Builds the 17x17 resnet block."""
with tf.variable_scope(scope, 'Block17', [net], reuse=reuse):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 128, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 160, [1, 7],
scope='Conv2d_0b_1x7')
tower_conv1_2 = slim.conv2d(tower_conv1_1, 192, [7, 1],
scope='Conv2d_0c_7x1')
mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_2])
up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
activation_fn=None, scope='Conv2d_1x1')
scaled_up = up * scale
if activation_fn == tf.nn.relu6:
# Use clip_by_value to simulate bandpass activation.
scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)
net += scaled_up
if activation_fn:
net = activation_fn(net)
return net
def block8(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
"""Builds the 8x8 resnet block."""
with tf.variable_scope(scope, 'Block8', [net], reuse=reuse):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 192, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 224, [1, 3],
scope='Conv2d_0b_1x3')
tower_conv1_2 = slim.conv2d(tower_conv1_1, 256, [3, 1],
scope='Conv2d_0c_3x1')
mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_2])
up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
activation_fn=None, scope='Conv2d_1x1')
scaled_up = up * scale
if activation_fn == tf.nn.relu6:
# Use clip_by_value to simulate bandpass activation.
scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)
net += scaled_up
if activation_fn:
net = activation_fn(net)
return net
def inception_resnet_v2_base(inputs,
final_endpoint='Conv2d_7b_1x1',
output_stride=16,
align_feature_maps=False,
scope=None,
activation_fn=tf.nn.relu):
"""Inception model from http://arxiv.org/abs/1602.07261.
Constructs an Inception Resnet v2 network from inputs to the given final
endpoint. This method can construct the network up to the final inception
block Conv2d_7b_1x1.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3', 'MaxPool_5a_3x3',
'Mixed_5b', 'Mixed_6a', 'PreAuxLogits', 'Mixed_7a', 'Conv2d_7b_1x1']
output_stride: A scalar that specifies the requested ratio of input to
output spatial resolution. Only supports 8 and 16.
align_feature_maps: When true, changes all the VALID paddings in the network
to SAME padding so that the feature maps are aligned.
scope: Optional variable_scope.
activation_fn: Activation function for block scopes.
Returns:
tensor_out: output tensor corresponding to the final_endpoint.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or if the output_stride is not 8 or 16, or if the output_stride is 8 and
we request an end point after 'PreAuxLogits'.
"""
if output_stride != 8 and output_stride != 16:
raise ValueError('output_stride must be 8 or 16.')
padding = 'SAME' if align_feature_maps else 'VALID'
end_points = {}
def add_and_check_final(name, net):
end_points[name] = net
return name == final_endpoint
with tf.variable_scope(scope, 'InceptionResnetV2', [inputs]):
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# 149 x 149 x 32
net = slim.conv2d(inputs, 32, 3, stride=2, padding=padding,
scope='Conv2d_1a_3x3')
if add_and_check_final('Conv2d_1a_3x3', net): return net, end_points
# 147 x 147 x 32
net = slim.conv2d(net, 32, 3, padding=padding,
scope='Conv2d_2a_3x3')
if add_and_check_final('Conv2d_2a_3x3', net): return net, end_points
# 147 x 147 x 64
net = slim.conv2d(net, 64, 3, scope='Conv2d_2b_3x3')
if add_and_check_final('Conv2d_2b_3x3', net): return net, end_points
# 73 x 73 x 64
net = slim.max_pool2d(net, 3, stride=2, padding=padding,
scope='MaxPool_3a_3x3')
if add_and_check_final('MaxPool_3a_3x3', net): return net, end_points
# 73 x 73 x 80
net = slim.conv2d(net, 80, 1, padding=padding,
scope='Conv2d_3b_1x1')
if add_and_check_final('Conv2d_3b_1x1', net): return net, end_points
# 71 x 71 x 192
net = slim.conv2d(net, 192, 3, padding=padding,
scope='Conv2d_4a_3x3')
if add_and_check_final('Conv2d_4a_3x3', net): return net, end_points
# 35 x 35 x 192
net = slim.max_pool2d(net, 3, stride=2, padding=padding,
scope='MaxPool_5a_3x3')
if add_and_check_final('MaxPool_5a_3x3', net): return net, end_points
# 35 x 35 x 320
with tf.variable_scope('Mixed_5b'):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 96, 1, scope='Conv2d_1x1')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 48, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 64, 5,
scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
tower_conv2_0 = slim.conv2d(net, 64, 1, scope='Conv2d_0a_1x1')
tower_conv2_1 = slim.conv2d(tower_conv2_0, 96, 3,
scope='Conv2d_0b_3x3')
tower_conv2_2 = slim.conv2d(tower_conv2_1, 96, 3,
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
tower_pool = slim.avg_pool2d(net, 3, stride=1, padding='SAME',
scope='AvgPool_0a_3x3')
tower_pool_1 = slim.conv2d(tower_pool, 64, 1,
scope='Conv2d_0b_1x1')
net = tf.concat(
[tower_conv, tower_conv1_1, tower_conv2_2, tower_pool_1], 3)
if add_and_check_final('Mixed_5b', net): return net, end_points
# TODO(alemi): Register intermediate endpoints
net = slim.repeat(net, 10, block35, scale=0.17,
activation_fn=activation_fn)
# 17 x 17 x 1088 if output_stride == 8,
# 33 x 33 x 1088 if output_stride == 16
use_atrous = output_stride == 8
with tf.variable_scope('Mixed_6a'):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 384, 3, stride=1 if use_atrous else 2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
tower_conv1_0 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1_0, 256, 3,
scope='Conv2d_0b_3x3')
tower_conv1_2 = slim.conv2d(tower_conv1_1, 384, 3,
stride=1 if use_atrous else 2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
tower_pool = slim.max_pool2d(net, 3, stride=1 if use_atrous else 2,
padding=padding,
scope='MaxPool_1a_3x3')
net = tf.concat([tower_conv, tower_conv1_2, tower_pool], 3)
if add_and_check_final('Mixed_6a', net): return net, end_points
# TODO(alemi): register intermediate endpoints
with slim.arg_scope([slim.conv2d], rate=2 if use_atrous else 1):
net = slim.repeat(net, 20, block17, scale=0.10,
activation_fn=activation_fn)
if add_and_check_final('PreAuxLogits', net): return net, end_points
if output_stride == 8:
# TODO(gpapan): Properly support output_stride for the rest of the net.
raise ValueError('output_stride==8 is only supported up to the '
'PreAuxlogits end_point for now.')
# 8 x 8 x 2080
with tf.variable_scope('Mixed_7a'):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
tower_conv_1 = slim.conv2d(tower_conv, 384, 3, stride=2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
tower_conv1 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
tower_conv1_1 = slim.conv2d(tower_conv1, 288, 3, stride=2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
tower_conv2 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
tower_conv2_1 = slim.conv2d(tower_conv2, 288, 3,
scope='Conv2d_0b_3x3')
tower_conv2_2 = slim.conv2d(tower_conv2_1, 320, 3, stride=2,
padding=padding,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_3'):
tower_pool = slim.max_pool2d(net, 3, stride=2,
padding=padding,
scope='MaxPool_1a_3x3')
net = tf.concat(
[tower_conv_1, tower_conv1_1, tower_conv2_2, tower_pool], 3)
if add_and_check_final('Mixed_7a', net): return net, end_points
# TODO(alemi): register intermediate endpoints
net = slim.repeat(net, 9, block8, scale=0.20, activation_fn=activation_fn)
net = block8(net, activation_fn=None)
# 8 x 8 x 1536
net = slim.conv2d(net, 1536, 1, scope='Conv2d_7b_1x1')
if add_and_check_final('Conv2d_7b_1x1', net): return net, end_points
raise ValueError('final_endpoint (%s) not recognized' % final_endpoint)
def inception_resnet_v2(inputs, num_classes=1001, is_training=True,
dropout_keep_prob=0.8,
reuse=None,
scope='InceptionResnetV2',
create_aux_logits=True,
activation_fn=tf.nn.relu):
"""Creates the Inception Resnet V2 model.
Args:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
Dimension batch_size may be undefined. If create_aux_logits is false,
also height and width may be undefined.
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
is_training: whether is training or not.
dropout_keep_prob: float, the fraction to keep before final layer.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
create_aux_logits: Whether to include the auxiliary logits.
activation_fn: Activation function for conv2d.
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0 or
None).
end_points: the set of end_points from the inception model.
"""
end_points = {}
with tf.variable_scope(
scope, 'InceptionResnetV2', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_resnet_v2_base(inputs, scope=scope,
activation_fn=activation_fn)
if create_aux_logits and num_classes:
with tf.variable_scope('AuxLogits'):
aux = end_points['PreAuxLogits']
aux = slim.avg_pool2d(aux, 5, stride=3, padding='VALID',
scope='Conv2d_1a_3x3')
aux = slim.conv2d(aux, 128, 1, scope='Conv2d_1b_1x1')
aux = slim.conv2d(aux, 768, aux.get_shape()[1:3],
padding='VALID', scope='Conv2d_2a_5x5')
aux = slim.flatten(aux)
aux = slim.fully_connected(aux, num_classes, activation_fn=None,
scope='Logits')
end_points['AuxLogits'] = aux
with tf.variable_scope('Logits'):
# TODO(sguada,arnoegw): Consider adding a parameter global_pool which
# can be set to False to disable pooling here (as in resnet_*()).
kernel_size = net.get_shape()[1:3]
if kernel_size.is_fully_defined():
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a_8x8')
else:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if not num_classes:
return net, end_points
net = slim.flatten(net)
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='Dropout')
end_points['PreLogitsFlatten'] = net
logits = slim.fully_connected(net, num_classes, activation_fn=None,
scope='Logits')
end_points['Logits'] = logits
end_points['Predictions'] = tf.nn.softmax(logits, name='Predictions')
return logits, end_points
inception_resnet_v2.default_image_size = 299
def inception_resnet_v2_arg_scope(
weight_decay=0.00004,
batch_norm_decay=0.9997,
batch_norm_epsilon=0.001,
activation_fn=tf.nn.relu,
batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS,
batch_norm_scale=False):
"""Returns the scope with the default parameters for inception_resnet_v2.
Args:
weight_decay: the weight decay for weights variables.
batch_norm_decay: decay for the moving average of batch_norm momentums.
batch_norm_epsilon: small float added to variance to avoid dividing by zero.
activation_fn: Activation function for conv2d.
batch_norm_updates_collections: Collection for the update ops for
batch norm.
batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the
activations in the batch normalization layer.
Returns:
an arg_scope with the parameters needed for inception_resnet_v2.
"""
# Set weight_decay for weights in conv2d and fully_connected layers.
with slim.arg_scope([slim.conv2d, slim.fully_connected],
weights_regularizer=slim.l2_regularizer(weight_decay),
biases_regularizer=slim.l2_regularizer(weight_decay)):
batch_norm_params = {
'decay': batch_norm_decay,
'epsilon': batch_norm_epsilon,
'updates_collections': batch_norm_updates_collections,
'fused': None, # Use fused batch norm if possible.
'scale': batch_norm_scale,
}
# Set activation_fn and parameters for batch_norm.
with slim.arg_scope([slim.conv2d], activation_fn=activation_fn,
normalizer_fn=slim.batch_norm,
normalizer_params=batch_norm_params) as scope:
return scope | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_resnet_v2.py | inception_resnet_v2.py |
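# A minimal usage sketch (input shape is illustrative):
#
#   images = tf.placeholder(tf.float32, [None, 299, 299, 3])
#   with slim.arg_scope(inception_resnet_v2_arg_scope()):
#     logits, end_points = inception_resnet_v2(
#         images, num_classes=1001, is_training=False)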
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import i3d_utils
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
conv3d_spatiotemporal = i3d_utils.conv3d_spatiotemporal
inception_block_v1_3d = i3d_utils.inception_block_v1_3d
arg_scope = slim.arg_scope
def s3dg_arg_scope(weight_decay=1e-7,
batch_norm_decay=0.999,
batch_norm_epsilon=0.001):
"""Defines default arg_scope for S3D-G.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
Returns:
sc: An arg_scope to use for the models.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
# Turns off fused batch norm.
'fused': False,
# collection containing the moving mean and moving variance.
'variables_collections': {
'beta': None,
'gamma': None,
'moving_mean': ['moving_vars'],
'moving_variance': ['moving_vars'],
}
}
with arg_scope([slim.conv3d, conv3d_spatiotemporal],
weights_regularizer=slim.l2_regularizer(weight_decay),
activation_fn=tf.nn.relu,
normalizer_fn=slim.batch_norm,
normalizer_params=batch_norm_params):
with arg_scope([conv3d_spatiotemporal], separable=True) as sc:
return sc
def self_gating(input_tensor, scope, data_format='NDHWC'):
"""Feature gating as used in S3D-G.
Transforms the input features by aggregating features from all
spatial and temporal locations, and applying gating conditioned
on the aggregated features. More details can be found at:
https://arxiv.org/abs/1712.04851
Args:
input_tensor: A 5-D float tensor of size [batch_size, num_frames,
height, width, channels].
scope: scope for `variable_scope`.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
Returns:
A tensor with the same shape as input_tensor.
"""
index_c = data_format.index('C')
index_d = data_format.index('D')
index_h = data_format.index('H')
index_w = data_format.index('W')
input_shape = input_tensor.get_shape().as_list()
t = input_shape[index_d]
w = input_shape[index_w]
h = input_shape[index_h]
num_channels = input_shape[index_c]
spatiotemporal_average = slim.avg_pool3d(
input_tensor, [t, w, h],
stride=1,
data_format=data_format,
scope=scope + '/self_gating/avg_pool3d')
weights = slim.conv3d(
spatiotemporal_average,
num_channels, [1, 1, 1],
activation_fn=None,
normalizer_fn=None,
biases_initializer=None,
data_format=data_format,
weights_initializer=trunc_normal(0.01),
scope=scope + '/self_gating/transformer_W')
tile_multiples = [1, t, w, h]
tile_multiples.insert(index_c, 1)
weights = tf.tile(weights, tile_multiples)
weights = tf.nn.sigmoid(weights)
return tf.multiply(weights, input_tensor)
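# For example (shapes illustrative): a [batch, 8, 7, 7, 832] input is averaged
# over all 8*7*7 positions, projected by a 1x1x1 conv into per-channel gating
# weights, tiled back to [batch, 8, 7, 7, 832], passed through a sigmoid, and
# multiplied elementwise with the input.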
def s3dg_base(inputs,
first_temporal_kernel_size=3,
temporal_conv_startat='Conv2d_2c_3x3',
gating_startat='Conv2d_2c_3x3',
final_endpoint='Mixed_5c',
min_depth=16,
depth_multiplier=1.0,
data_format='NDHWC',
scope='InceptionV1'):
"""Defines the I3D/S3DG base architecture.
Note that we use the names as defined in Inception V1 to facilitate checkpoint
conversion from an image-trained Inception V1 checkpoint to I3D checkpoint.
Args:
inputs: A 5-D float tensor of size [batch_size, num_frames, height, width,
channels].
first_temporal_kernel_size: Specifies the temporal kernel size for the first
conv3d filter. A larger value slows down the model but provides little
accuracy improvement. The default is 7 in the original I3D and S3D-G but 3
gives better performance. Must be set to one of 1, 3, 5 or 7.
temporal_conv_startat: Specifies the first conv block to use 3D or separable
3D convs rather than 2D convs (implemented as [1, k, k] 3D conv). This is
used to construct the inverted pyramid models. 'Conv2d_2c_3x3' is the
first valid block to use separable 3D convs. If provided block name is
not present, all valid blocks will use separable 3D convs. Note that
'Conv2d_1a_7x7' cannot be made into a separable 3D conv, but can be made
into a 2D or 3D conv using the `first_temporal_kernel_size` option.
gating_startat: Specifies the first conv block to use self gating.
'Conv2d_2c_3x3' is the first valid block to use self gating. If provided
block name is not present, no blocks will use self gating.
final_endpoint: Specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b', 'Mixed_5c']
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
scope: Optional variable_scope.
Returns:
A dictionary from components of the network to the corresponding activation.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values, or
if depth_multiplier <= 0.
"""
assert data_format in ['NDHWC', 'NCDHW']
end_points = {}
t = 1
# For inverted pyramid models, we start with gating switched off.
use_gating = False
self_gating_fn = None
def gating_fn(inputs, scope):
return self_gating(inputs, scope, data_format=data_format)
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
depth = lambda d: max(int(d * depth_multiplier), min_depth)
with tf.variable_scope(scope, 'InceptionV1', [inputs]):
with arg_scope([slim.conv3d], weights_initializer=trunc_normal(0.01)):
with arg_scope([slim.conv3d, slim.max_pool3d, conv3d_spatiotemporal],
stride=1,
data_format=data_format,
padding='SAME'):
# batch_size x 32 x 112 x 112 x 64
end_point = 'Conv2d_1a_7x7'
if first_temporal_kernel_size not in [1, 3, 5, 7]:
raise ValueError(
'first_temporal_kernel_size can only be 1, 3, 5 or 7.')
# Separable conv is slow when used at first conv layer.
net = conv3d_spatiotemporal(
inputs,
depth(64), [first_temporal_kernel_size, 7, 7],
stride=2,
separable=False,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 56 x 56 x 64
end_point = 'MaxPool_2a_3x3'
net = slim.max_pool3d(net, [1, 3, 3], stride=[1, 2, 2], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 56 x 56 x 64
end_point = 'Conv2d_2b_1x1'
net = slim.conv3d(net, depth(64), [1, 1, 1], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 56 x 56 x 192
end_point = 'Conv2d_2c_3x3'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = conv3d_spatiotemporal(net, depth(192), [t, 3, 3], scope=end_point)
if use_gating:
net = self_gating(net, scope=end_point, data_format=data_format)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 28 x 28 x 192
end_point = 'MaxPool_3a_3x3'
net = slim.max_pool3d(net, [1, 3, 3], stride=[1, 2, 2], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 32 x 28 x 28 x 256
end_point = 'Mixed_3b'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(64),
num_outputs_1_0a=depth(96),
num_outputs_1_0b=depth(128),
num_outputs_2_0a=depth(16),
num_outputs_2_0b=depth(32),
num_outputs_3_0b=depth(32),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'Mixed_3c'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(128),
num_outputs_1_0a=depth(128),
num_outputs_1_0b=depth(192),
num_outputs_2_0a=depth(32),
num_outputs_2_0b=depth(96),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'MaxPool_4a_3x3'
net = slim.max_pool3d(net, [3, 3, 3], stride=[2, 2, 2], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 512
end_point = 'Mixed_4b'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(192),
num_outputs_1_0a=depth(96),
num_outputs_1_0b=depth(208),
num_outputs_2_0a=depth(16),
num_outputs_2_0b=depth(48),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 512
end_point = 'Mixed_4c'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(160),
num_outputs_1_0a=depth(112),
num_outputs_1_0b=depth(224),
num_outputs_2_0a=depth(24),
num_outputs_2_0b=depth(64),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 512
end_point = 'Mixed_4d'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(128),
num_outputs_1_0a=depth(128),
num_outputs_1_0b=depth(256),
num_outputs_2_0a=depth(24),
num_outputs_2_0b=depth(64),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 528
end_point = 'Mixed_4e'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(112),
num_outputs_1_0a=depth(144),
num_outputs_1_0b=depth(288),
num_outputs_2_0a=depth(32),
num_outputs_2_0b=depth(64),
num_outputs_3_0b=depth(64),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 16 x 14 x 14 x 832
end_point = 'Mixed_4f'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(256),
num_outputs_1_0a=depth(160),
num_outputs_1_0b=depth(320),
num_outputs_2_0a=depth(32),
num_outputs_2_0b=depth(128),
num_outputs_3_0b=depth(128),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'MaxPool_5a_2x2'
net = slim.max_pool3d(net, [2, 2, 2], stride=[2, 2, 2], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 8 x 7 x 7 x 832
end_point = 'Mixed_5b'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(256),
num_outputs_1_0a=depth(160),
num_outputs_1_0b=depth(320),
num_outputs_2_0a=depth(32),
num_outputs_2_0b=depth(128),
num_outputs_3_0b=depth(128),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
# batch_size x 8 x 7 x 7 x 1024
end_point = 'Mixed_5c'
if temporal_conv_startat == end_point:
t = 3
if gating_startat == end_point:
use_gating = True
self_gating_fn = gating_fn
net = inception_block_v1_3d(
net,
num_outputs_0_0a=depth(384),
num_outputs_1_0a=depth(192),
num_outputs_1_0b=depth(384),
num_outputs_2_0a=depth(48),
num_outputs_2_0b=depth(128),
num_outputs_3_0b=depth(128),
temporal_kernel_size=t,
self_gating_fn=self_gating_fn,
data_format=data_format,
scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
def s3dg(inputs,
num_classes=1000,
first_temporal_kernel_size=3,
temporal_conv_startat='Conv2d_2c_3x3',
gating_startat='Conv2d_2c_3x3',
final_endpoint='Mixed_5c',
min_depth=16,
depth_multiplier=1.0,
dropout_keep_prob=0.8,
is_training=True,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
data_format='NDHWC',
scope='InceptionV1'):
"""Defines the S3D-G architecture.
The default image size used to train this network is 224x224.
Args:
inputs: A 5-D float tensor of size [batch_size, num_frames, height, width,
channels].
num_classes: number of predicted classes.
first_temporal_kernel_size: Specifies the temporal kernel size for the first
conv3d filter. A larger value slows down the model but provides little
accuracy improvement. Must be set to one of 1, 3, 5 or 7.
temporal_conv_startat: Specifies the first conv block to use separable 3D
convs rather than 2D convs (implemented as [1, k, k] 3D conv). This is
used to construct the inverted pyramid models. 'Conv2d_2c_3x3' is the
first valid block to use separable 3D convs. If provided block name is
not present, all valid blocks will use separable 3D convs.
gating_startat: Specifies the first conv block to use self gating.
'Conv2d_2c_3x3' is the first valid block to use self gating. If provided
block name is not present, no blocks will use self gating.
final_endpoint: Specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b', 'Mixed_5c']
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
dropout_keep_prob: the percentage of activation values that are retained.
is_training: whether is training or not.
prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape [B, C]; if False, logits is
      of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
scope: Optional variable_scope.
Returns:
logits: the pre-softmax activations, a tensor of size
[batch_size, num_classes]
end_points: a dictionary from components of the network to the corresponding
activation.
"""
assert data_format in ['NDHWC', 'NCDHW']
# Final pooling and prediction
with tf.variable_scope(
scope, 'InceptionV1', [inputs, num_classes], reuse=reuse) as scope:
with arg_scope([slim.batch_norm, slim.dropout], is_training=is_training):
net, end_points = s3dg_base(
inputs,
first_temporal_kernel_size=first_temporal_kernel_size,
temporal_conv_startat=temporal_conv_startat,
gating_startat=gating_startat,
final_endpoint=final_endpoint,
min_depth=min_depth,
depth_multiplier=depth_multiplier,
data_format=data_format,
scope=scope)
with tf.variable_scope('Logits'):
if data_format.startswith('NC'):
net = tf.transpose(a=net, perm=[0, 2, 3, 4, 1])
kernel_size = i3d_utils.reduced_kernel_size_3d(net, [2, 7, 7])
net = slim.avg_pool3d(
net,
kernel_size,
stride=1,
data_format='NDHWC',
scope='AvgPool_0a_7x7')
net = slim.dropout(net, dropout_keep_prob, scope='Dropout_0b')
logits = slim.conv3d(
net,
num_classes, [1, 1, 1],
activation_fn=None,
normalizer_fn=None,
data_format='NDHWC',
scope='Conv2d_0c_1x1')
# Temporal average pooling.
logits = tf.reduce_mean(input_tensor=logits, axis=1)
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
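# Illustrative usage sketch (added here; not part of the original file). The
# clip shape below is an assumption for the example, not an API requirement.
def _example_s3dg():
  """Builds S3D-G logits for a batch of random 64-frame RGB clips."""
  clips = tf.random_uniform([2, 64, 224, 224, 3])
  logits, end_points = s3dg(clips, num_classes=400, is_training=False)
  # logits: [2, 400]; end_points exposes intermediates such as 'Mixed_5c'.
  return logits, end_points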
s3dg.default_image_size = 224
# --- end of slim/nets/s3dg.py ---
"""Utilities for building I3D network models."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
add_arg_scope = slim.add_arg_scope
layers = slim.layers
def center_initializer():
"""Centering Initializer for I3D.
  This initializer allows identity mapping for the temporal convolution at
  initialization, which is critical for the desired convergence behavior
  when training a separable I3D model.
The centering behavior of this initializer requires an odd-sized kernel,
typically set to 3.
Returns:
A weight initializer op used in temporal convolutional layers.
Raises:
    ValueError: If input tensor data type is not tf.float32 or tf.bfloat16.
ValueError: If input tensor is not a 5-D tensor.
ValueError: If input and output channel dimensions are different.
ValueError: If spatial kernel sizes are not 1.
ValueError: If temporal kernel size is even.
"""
def _initializer(shape, dtype=tf.float32, partition_info=None): # pylint: disable=unused-argument
"""Initializer op."""
if dtype != tf.float32 and dtype != tf.bfloat16:
raise ValueError(
'Input tensor data type has to be tf.float32 or tf.bfloat16.')
if len(shape) != 5:
raise ValueError('Input tensor has to be 5-D.')
if shape[3] != shape[4]:
raise ValueError('Input and output channel dimensions must be the same.')
if shape[1] != 1 or shape[2] != 1:
raise ValueError('Spatial kernel sizes must be 1 (pointwise conv).')
if shape[0] % 2 == 0:
raise ValueError('Temporal kernel size has to be odd.')
center_pos = int(shape[0] / 2)
init_mat = np.zeros(
[shape[0], shape[1], shape[2], shape[3], shape[4]], dtype=np.float32)
for i in range(0, shape[3]):
init_mat[center_pos, 0, 0, i, i] = 1.0
init_op = tf.constant(init_mat, dtype=dtype)
return init_op
return _initializer
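# Small sketch (added) of what the initializer produces: for a [3, 1, 1, C, C]
# temporal kernel, the center temporal slice is the CxC identity, so the
# temporal conv is an identity mapping at initialization.
def _example_center_initializer():
  init_fn = center_initializer()
  w = init_fn([3, 1, 1, 4, 4])  # Constant tensor of shape [3, 1, 1, 4, 4].
  # w[1, 0, 0] equals tf.eye(4); w[0] and w[2] are all zeros.
  return w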
@add_arg_scope
def conv3d_spatiotemporal(inputs,
num_outputs,
kernel_size,
stride=1,
padding='SAME',
activation_fn=None,
normalizer_fn=None,
normalizer_params=None,
weights_regularizer=None,
separable=False,
data_format='NDHWC',
scope=''):
"""A wrapper for conv3d to model spatiotemporal representations.
This allows switching between original 3D convolution and separable 3D
convolutions for spatial and temporal features respectively. On Kinetics,
  separable 3D convolutions yield better classification performance.
Args:
inputs: a 5-D tensor `[batch_size, depth, height, width, channels]`.
num_outputs: integer, the number of output filters.
    kernel_size: a list of length 3
      `[kernel_depth, kernel_height, kernel_width]` of the filters. Unlike
      `stride`, a bare int is not accepted here.
stride: a list of length 3 `[stride_depth, stride_height, stride_width]`.
Can be an int if all strides are the same.
padding: one of `VALID` or `SAME`.
activation_fn: activation function.
normalizer_fn: normalization function to use instead of `biases`.
normalizer_params: dictionary of normalization function parameters.
weights_regularizer: Optional regularizer for the weights.
separable: If `True`, use separable spatiotemporal convolutions.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
scope: scope for `variable_scope`.
Returns:
A tensor representing the output of the (separable) conv3d operation.
"""
assert len(kernel_size) == 3
if separable and kernel_size[0] != 1:
spatial_kernel_size = [1, kernel_size[1], kernel_size[2]]
temporal_kernel_size = [kernel_size[0], 1, 1]
if isinstance(stride, list) and len(stride) == 3:
spatial_stride = [1, stride[1], stride[2]]
temporal_stride = [stride[0], 1, 1]
else:
spatial_stride = [1, stride, stride]
temporal_stride = [stride, 1, 1]
net = layers.conv3d(
inputs,
num_outputs,
spatial_kernel_size,
stride=spatial_stride,
padding=padding,
activation_fn=activation_fn,
normalizer_fn=normalizer_fn,
normalizer_params=normalizer_params,
weights_regularizer=weights_regularizer,
data_format=data_format,
scope=scope)
net = layers.conv3d(
net,
num_outputs,
temporal_kernel_size,
stride=temporal_stride,
padding=padding,
scope=scope + '/temporal',
activation_fn=activation_fn,
normalizer_fn=None,
data_format=data_format,
weights_initializer=center_initializer())
return net
else:
return layers.conv3d(
inputs,
num_outputs,
kernel_size,
stride=stride,
padding=padding,
activation_fn=activation_fn,
normalizer_fn=normalizer_fn,
normalizer_params=normalizer_params,
weights_regularizer=weights_regularizer,
data_format=data_format,
scope=scope)
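# Usage sketch (added): the same call site can request either a full 3D conv
# or its separable factorization; only the `separable` flag changes.
def _example_conv3d_spatiotemporal():
  videos = tf.random_uniform([1, 8, 56, 56, 32])
  # Factorized into a [1, 3, 3] spatial conv followed by a [3, 1, 1] temporal
  # conv whose weights start as an identity mapping (center_initializer).
  return conv3d_spatiotemporal(videos, num_outputs=64, kernel_size=[3, 3, 3],
                               separable=True, scope='Sep_3x3x3')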
@add_arg_scope
def inception_block_v1_3d(inputs,
num_outputs_0_0a,
num_outputs_1_0a,
num_outputs_1_0b,
num_outputs_2_0a,
num_outputs_2_0b,
num_outputs_3_0b,
temporal_kernel_size=3,
self_gating_fn=None,
data_format='NDHWC',
scope=''):
"""A 3D Inception v1 block.
This allows use of separable 3D convolutions and self-gating, as
described in:
Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu and Kevin Murphy,
Rethinking Spatiotemporal Feature Learning For Video Understanding.
https://arxiv.org/abs/1712.04851.
Args:
inputs: a 5-D tensor `[batch_size, depth, height, width, channels]`.
num_outputs_0_0a: integer, the number of output filters for Branch 0,
operation Conv2d_0a_1x1.
num_outputs_1_0a: integer, the number of output filters for Branch 1,
operation Conv2d_0a_1x1.
num_outputs_1_0b: integer, the number of output filters for Branch 1,
operation Conv2d_0b_3x3.
num_outputs_2_0a: integer, the number of output filters for Branch 2,
operation Conv2d_0a_1x1.
num_outputs_2_0b: integer, the number of output filters for Branch 2,
operation Conv2d_0b_3x3.
num_outputs_3_0b: integer, the number of output filters for Branch 3,
operation Conv2d_0b_1x1.
temporal_kernel_size: integer, the size of the temporal convolutional
filters in the conv3d_spatiotemporal blocks.
self_gating_fn: function which optionally performs self-gating.
Must have two arguments, `inputs` and `scope`, and return one output
tensor the same size as `inputs`. If `None`, no self-gating is
applied.
data_format: An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC".
The data format of the input and output data. With the default format
"NDHWC", the data is stored in the order of: [batch, in_depth, in_height,
in_width, in_channels]. Alternatively, the format could be "NCDHW", the
data storage order is:
[batch, in_channels, in_depth, in_height, in_width].
scope: scope for `variable_scope`.
Returns:
A 5-D tensor `[batch_size, depth, height, width, out_channels]`, where
`out_channels = num_outputs_0_0a + num_outputs_1_0b + num_outputs_2_0b
+ num_outputs_3_0b`.
"""
use_gating = self_gating_fn is not None
with tf.variable_scope(scope):
with tf.variable_scope('Branch_0'):
branch_0 = layers.conv3d(
inputs, num_outputs_0_0a, [1, 1, 1], scope='Conv2d_0a_1x1')
if use_gating:
branch_0 = self_gating_fn(branch_0, scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = layers.conv3d(
inputs, num_outputs_1_0a, [1, 1, 1], scope='Conv2d_0a_1x1')
branch_1 = conv3d_spatiotemporal(
branch_1, num_outputs_1_0b, [temporal_kernel_size, 3, 3],
scope='Conv2d_0b_3x3')
if use_gating:
branch_1 = self_gating_fn(branch_1, scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = layers.conv3d(
inputs, num_outputs_2_0a, [1, 1, 1], scope='Conv2d_0a_1x1')
branch_2 = conv3d_spatiotemporal(
branch_2, num_outputs_2_0b, [temporal_kernel_size, 3, 3],
scope='Conv2d_0b_3x3')
if use_gating:
branch_2 = self_gating_fn(branch_2, scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = layers.max_pool3d(inputs, [3, 3, 3], scope='MaxPool_0a_3x3')
branch_3 = layers.conv3d(
branch_3, num_outputs_3_0b, [1, 1, 1], scope='Conv2d_0b_1x1')
if use_gating:
branch_3 = self_gating_fn(branch_3, scope='Conv2d_0b_1x1')
index_c = data_format.index('C')
assert 1 <= index_c <= 4, 'Cannot identify channel dimension.'
output = tf.concat([branch_0, branch_1, branch_2, branch_3], index_c)
return output
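# Sketch (added): a single block with self-gating disabled. The branch widths
# are illustrative, not taken from any published model; the arg_scope mirrors
# the stride-1 SAME pooling the full network uses around these blocks.
def _example_inception_block_v1_3d():
  net = tf.random_uniform([1, 8, 28, 28, 192])
  with slim.arg_scope([layers.max_pool3d], stride=1, padding='SAME'):
    out = inception_block_v1_3d(
        net,
        num_outputs_0_0a=64,
        num_outputs_1_0a=96,
        num_outputs_1_0b=128,
        num_outputs_2_0a=16,
        num_outputs_2_0b=32,
        num_outputs_3_0b=32,
        scope='Mixed_example')
  return out  # 1 x 8 x 28 x 28 x 256 (64 + 128 + 32 + 32 channels).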
def reduced_kernel_size_3d(input_tensor, kernel_size):
"""Define kernel size which is automatically reduced for small input.
  If the shape of the input images is unknown at graph construction time, this
function assumes that the input images are large enough.
Args:
input_tensor: input tensor of size
[batch_size, time, height, width, channels].
kernel_size: desired kernel size of length 3, corresponding to time,
height and width.
Returns:
a tensor with the kernel size.
"""
assert len(kernel_size) == 3
shape = input_tensor.get_shape().as_list()
assert len(shape) == 5
if None in shape[1:4]:
kernel_size_out = kernel_size
else:
kernel_size_out = [min(shape[1], kernel_size[0]),
min(shape[2], kernel_size[1]),
min(shape[3], kernel_size[2])]
  return kernel_size_out
# --- end of slim/nets/i3d_utils.py ---
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import functools
import tensorflow.compat.v1 as tf
import tf_slim as slim
def pix2pix_arg_scope():
"""Returns a default argument scope for isola_net.
Returns:
An arg scope.
"""
# These parameters come from the online port, which don't necessarily match
# those in the paper.
# TODO(nsilberman): confirm these values with Philip.
instance_norm_params = {
'center': True,
'scale': True,
'epsilon': 0.00001,
}
with slim.arg_scope(
[slim.conv2d, slim.conv2d_transpose],
normalizer_fn=slim.instance_norm,
normalizer_params=instance_norm_params,
weights_initializer=tf.random_normal_initializer(0, 0.02)) as sc:
return sc
def upsample(net, num_outputs, kernel_size, method='nn_upsample_conv'):
"""Upsamples the given inputs.
Args:
net: A `Tensor` of size [batch_size, height, width, filters].
num_outputs: The number of output filters.
kernel_size: A list of 2 scalars or a 1x2 `Tensor` indicating the scale,
relative to the inputs, of the output dimensions. For example, if kernel
size is [2, 3], then the output height and width will be twice and three
times the input size.
method: The upsampling method.
Returns:
    A `Tensor` upsampled using the specified method.
Raises:
ValueError: if `method` is not recognized.
"""
net_shape = tf.shape(input=net)
height = net_shape[1]
width = net_shape[2]
if method == 'nn_upsample_conv':
net = tf.image.resize(
net, [kernel_size[0] * height, kernel_size[1] * width],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
net = slim.conv2d(net, num_outputs, [4, 4], activation_fn=None)
elif method == 'conv2d_transpose':
net = slim.conv2d_transpose(
net, num_outputs, [4, 4], stride=kernel_size, activation_fn=None)
else:
raise ValueError('Unknown method: [%s]' % method)
return net
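# Sketch (added): nearest-neighbor upsampling by 2x in height and width,
# followed by the 4x4 convolution that `upsample` always applies.
def _example_upsample():
  feature_map = tf.random_uniform([1, 32, 32, 64])
  upsampled = upsample(feature_map, num_outputs=32, kernel_size=[2, 2])
  return upsampled  # Spatial size 64x64 with 32 channels.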
class Block(
collections.namedtuple('Block', ['num_filters', 'decoder_keep_prob'])):
"""Represents a single block of encoder and decoder processing.
  The Image-to-Image translation paper works a bit differently from the original
U-Net model. In particular, each block represents a single operation in the
encoder which is concatenated with the corresponding decoder representation.
A dropout layer follows the concatenation and convolution of the concatenated
features.
"""
pass
def _default_generator_blocks():
"""Returns the default generator block definitions.
Returns:
A list of generator blocks.
"""
return [
Block(64, 0.5),
Block(128, 0.5),
Block(256, 0.5),
Block(512, 0),
Block(512, 0),
Block(512, 0),
Block(512, 0),
]
def pix2pix_generator(net,
num_outputs,
blocks=None,
upsample_method='nn_upsample_conv',
is_training=False): # pylint: disable=unused-argument
"""Defines the network architecture.
Args:
net: A `Tensor` of size [batch, height, width, channels]. Note that the
generator currently requires square inputs (e.g. height=width).
num_outputs: The number of (per-pixel) outputs.
blocks: A list of generator blocks or `None` to use the default generator
definition.
upsample_method: The method of upsampling images, one of 'nn_upsample_conv'
      or 'conv2d_transpose'.
is_training: Whether or not we're in training or testing mode.
Returns:
A `Tensor` representing the model output and a dictionary of model end
points.
Raises:
ValueError: if the input heights do not match their widths.
"""
end_points = {}
blocks = blocks or _default_generator_blocks()
input_size = net.get_shape().as_list()
input_size[3] = num_outputs
upsample_fn = functools.partial(upsample, method=upsample_method)
encoder_activations = []
###########
# Encoder #
###########
with tf.variable_scope('encoder'):
with slim.arg_scope([slim.conv2d],
kernel_size=[4, 4],
stride=2,
activation_fn=tf.nn.leaky_relu):
for block_id, block in enumerate(blocks):
# No normalizer for the first encoder layers as per 'Image-to-Image',
# Section 5.1.1
if block_id == 0:
# First layer doesn't use normalizer_fn
net = slim.conv2d(net, block.num_filters, normalizer_fn=None)
elif block_id < len(blocks) - 1:
net = slim.conv2d(net, block.num_filters)
else:
# Last layer doesn't use activation_fn nor normalizer_fn
net = slim.conv2d(
net, block.num_filters, activation_fn=None, normalizer_fn=None)
encoder_activations.append(net)
end_points['encoder%d' % block_id] = net
###########
# Decoder #
###########
reversed_blocks = list(blocks)
reversed_blocks.reverse()
with tf.variable_scope('decoder'):
# Dropout is used at both train and test time as per 'Image-to-Image',
# Section 2.1 (last paragraph).
with slim.arg_scope([slim.dropout], is_training=True):
for block_id, block in enumerate(reversed_blocks):
if block_id > 0:
net = tf.concat([net, encoder_activations[-block_id - 1]], axis=3)
# The Relu comes BEFORE the upsample op:
net = tf.nn.relu(net)
net = upsample_fn(net, block.num_filters, [2, 2])
if block.decoder_keep_prob > 0:
net = slim.dropout(net, keep_prob=block.decoder_keep_prob)
end_points['decoder%d' % block_id] = net
with tf.variable_scope('output'):
# Explicitly set the normalizer_fn to None to override any default value
# that may come from an arg_scope, such as pix2pix_arg_scope.
logits = slim.conv2d(
net, num_outputs, [4, 4], activation_fn=None, normalizer_fn=None)
logits = tf.reshape(logits, input_size)
end_points['logits'] = logits
end_points['predictions'] = tf.tanh(logits)
return logits, end_points
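# Usage sketch (added): translating a 256x256 RGB image into 3 output
# channels with the default blocks. The input must be square and large enough
# for the seven stride-2 encoder layers (256 = 2**8 works).
def _example_pix2pix_generator():
  images = tf.random_uniform([1, 256, 256, 3])
  with slim.arg_scope(pix2pix_arg_scope()):
    _, end_points = pix2pix_generator(images, num_outputs=3)
  return end_points['predictions']  # tanh outputs in [-1, 1].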
def pix2pix_discriminator(net, num_filters, padding=2, pad_mode='REFLECT',
activation_fn=tf.nn.leaky_relu, is_training=False):
"""Creates the Image2Image Translation Discriminator.
Args:
net: A `Tensor` of size [batch_size, height, width, channels] representing
the input.
num_filters: A list of the filters in the discriminator. The length of the
list determines the number of layers in the discriminator.
padding: Amount of reflection padding applied before each convolution.
pad_mode: mode for tf.pad, one of "CONSTANT", "REFLECT", or "SYMMETRIC".
activation_fn: activation fn for slim.conv2d.
is_training: Whether or not the model is training or testing.
Returns:
A logits `Tensor` of size [batch_size, N, N, 1] where N is the number of
'patches' we're attempting to discriminate and a dictionary of model end
points.
"""
del is_training
end_points = {}
num_layers = len(num_filters)
def padded(net, scope):
if padding:
with tf.variable_scope(scope):
spatial_pad = tf.constant(
[[0, 0], [padding, padding], [padding, padding], [0, 0]],
dtype=tf.int32)
return tf.pad(tensor=net, paddings=spatial_pad, mode=pad_mode)
else:
return net
with slim.arg_scope([slim.conv2d],
kernel_size=[4, 4],
stride=2,
padding='valid',
activation_fn=activation_fn):
# No normalization on the input layer.
net = slim.conv2d(
padded(net, 'conv0'), num_filters[0], normalizer_fn=None, scope='conv0')
end_points['conv0'] = net
for i in range(1, num_layers - 1):
net = slim.conv2d(
padded(net, 'conv%d' % i), num_filters[i], scope='conv%d' % i)
end_points['conv%d' % i] = net
# Stride 1 on the last layer.
net = slim.conv2d(
padded(net, 'conv%d' % (num_layers - 1)),
num_filters[-1],
stride=1,
scope='conv%d' % (num_layers - 1))
end_points['conv%d' % (num_layers - 1)] = net
# 1-dim logits, stride 1, no activation, no normalization.
logits = slim.conv2d(
padded(net, 'conv%d' % num_layers),
1,
stride=1,
activation_fn=None,
normalizer_fn=None,
scope='conv%d' % num_layers)
end_points['logits'] = logits
end_points['predictions'] = tf.sigmoid(logits)
  return logits, end_points
# --- end of slim/nets/pix2pix.py ---
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import tensorflow.compat.v1 as tf
import tf_slim as slim
class Block(collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])):
"""A named tuple describing a ResNet block.
Its parts are:
scope: The scope of the `Block`.
unit_fn: The ResNet unit function which takes as input a `Tensor` and
returns another `Tensor` with the output of the ResNet unit.
args: A list of length equal to the number of units in the `Block`. The list
contains one (depth, depth_bottleneck, stride) tuple for each unit in the
block to serve as argument to unit_fn.
"""
def subsample(inputs, factor, scope=None):
"""Subsamples the input along the spatial dimensions.
Args:
inputs: A `Tensor` of size [batch, height_in, width_in, channels].
factor: The subsampling factor.
scope: Optional variable_scope.
Returns:
output: A `Tensor` of size [batch, height_out, width_out, channels] with the
input, either intact (if factor == 1) or subsampled (if factor > 1).
"""
if factor == 1:
return inputs
else:
return slim.max_pool2d(inputs, [1, 1], stride=factor, scope=scope)
def conv2d_same(inputs, num_outputs, kernel_size, stride, rate=1, scope=None):
"""Strided 2-D convolution with 'SAME' padding.
When stride > 1, then we do explicit zero-padding, followed by conv2d with
'VALID' padding.
Note that
net = conv2d_same(inputs, num_outputs, 3, stride=stride)
is equivalent to
net = slim.conv2d(inputs, num_outputs, 3, stride=1, padding='SAME')
net = subsample(net, factor=stride)
whereas
net = slim.conv2d(inputs, num_outputs, 3, stride=stride, padding='SAME')
is different when the input's height or width is even, which is why we add the
current function. For more details, see ResnetUtilsTest.testConv2DSameEven().
Args:
inputs: A 4-D tensor of size [batch, height_in, width_in, channels].
num_outputs: An integer, the number of output filters.
kernel_size: An int with the kernel_size of the filters.
stride: An integer, the output stride.
rate: An integer, rate for atrous convolution.
scope: Scope.
Returns:
output: A 4-D tensor of size [batch, height_out, width_out, channels] with
the convolution output.
"""
if stride == 1:
return slim.conv2d(inputs, num_outputs, kernel_size, stride=1, rate=rate,
padding='SAME', scope=scope)
else:
kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
pad_total = kernel_size_effective - 1
pad_beg = pad_total // 2
pad_end = pad_total - pad_beg
inputs = tf.pad(
tensor=inputs,
paddings=[[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])
return slim.conv2d(inputs, num_outputs, kernel_size, stride=stride,
rate=rate, padding='VALID', scope=scope)
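# Sketch (added) of the behavior described above: explicit padding plus a
# VALID conv gives the same output size as SAME conv + subsampling, for even
# and odd input sizes alike.
def _example_conv2d_same():
  images = tf.random_uniform([1, 224, 224, 3])
  net = conv2d_same(images, num_outputs=64, kernel_size=7, stride=2,
                    scope='conv1')
  return net  # Spatial size ceil(224 / 2) = 112.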
@slim.add_arg_scope
def stack_blocks_dense(net, blocks, output_stride=None,
store_non_strided_activations=False,
outputs_collections=None):
"""Stacks ResNet `Blocks` and controls output feature density.
First, this function creates scopes for the ResNet in the form of
'block_name/unit_1', 'block_name/unit_2', etc.
Second, this function allows the user to explicitly control the ResNet
output_stride, which is the ratio of the input to output spatial resolution.
This is useful for dense prediction tasks such as semantic segmentation or
object detection.
Most ResNets consist of 4 ResNet blocks and subsample the activations by a
factor of 2 when transitioning between consecutive ResNet blocks. This results
  in a nominal ResNet output_stride equal to 8. If we set the output_stride to
half the nominal network stride (e.g., output_stride=4), then we compute
responses twice.
Control of the output feature density is implemented by atrous convolution.
Args:
net: A `Tensor` of size [batch, height, width, channels].
blocks: A list of length equal to the number of ResNet `Blocks`. Each
element is a ResNet `Block` object describing the units in the `Block`.
output_stride: If `None`, then the output will be computed at the nominal
network stride. If output_stride is not `None`, it specifies the requested
ratio of input to output spatial resolution, which needs to be equal to
the product of unit strides from the start up to some level of the ResNet.
For example, if the ResNet employs units with strides 1, 2, 1, 3, 4, 1,
then valid values for the output_stride are 1, 2, 6, 24 or None (which
is equivalent to output_stride=24).
store_non_strided_activations: If True, we compute non-strided (undecimated)
activations at the last unit of each block and store them in the
`outputs_collections` before subsampling them. This gives us access to
higher resolution intermediate activations which are useful in some
      dense prediction problems but increases the computation and memory cost
      at the last unit of each block by 4x.
outputs_collections: Collection to add the ResNet block outputs.
Returns:
net: Output tensor with stride equal to the specified output_stride.
Raises:
ValueError: If the target output_stride is not valid.
"""
# The current_stride variable keeps track of the effective stride of the
# activations. This allows us to invoke atrous convolution whenever applying
# the next residual unit would result in the activations having stride larger
# than the target output_stride.
current_stride = 1
# The atrous convolution rate parameter.
rate = 1
for block in blocks:
with tf.variable_scope(block.scope, 'block', [net]) as sc:
block_stride = 1
for i, unit in enumerate(block.args):
if store_non_strided_activations and i == len(block.args) - 1:
# Move stride from the block's last unit to the end of the block.
block_stride = unit.get('stride', 1)
unit = dict(unit, stride=1)
with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
# If we have reached the target output_stride, then we need to employ
# atrous convolution with stride=1 and multiply the atrous rate by the
# current unit's stride for use in subsequent layers.
if output_stride is not None and current_stride == output_stride:
net = block.unit_fn(net, rate=rate, **dict(unit, stride=1))
rate *= unit.get('stride', 1)
else:
net = block.unit_fn(net, rate=1, **unit)
current_stride *= unit.get('stride', 1)
if output_stride is not None and current_stride > output_stride:
raise ValueError('The target output_stride cannot be reached.')
# Collect activations at the block's end before performing subsampling.
net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)
# Subsampling of the block's output activations.
if output_stride is not None and current_stride == output_stride:
rate *= block_stride
else:
net = subsample(net, block_stride)
current_stride *= block_stride
if output_stride is not None and current_stride > output_stride:
raise ValueError('The target output_stride cannot be reached.')
if output_stride is not None and current_stride != output_stride:
raise ValueError('The target output_stride cannot be reached.')
return net
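# Sketch (added): how `Block`s are typically specified. `toy_unit` is a
# stand-in for a real residual unit (e.g. the bottleneck fns in
# resnet_v1/resnet_v2); it only illustrates the calling convention
# `unit_fn(net, rate=..., **unit_kwargs)`.
def _example_stack_blocks_dense():
  def toy_unit(net, depth, stride, rate=1):
    return slim.conv2d(net, depth, [3, 3], stride=stride, rate=rate,
                       scope='toy')
  blocks = [
      Block('block1', toy_unit,
            [{'depth': 64, 'stride': 1}] * 2 + [{'depth': 64, 'stride': 2}]),
      Block('block2', toy_unit,
            [{'depth': 128, 'stride': 1}] * 2 + [{'depth': 128, 'stride': 2}]),
  ]
  inputs = tf.random_uniform([1, 56, 56, 64])
  # With output_stride=2, block2's stride-2 unit runs unstrided and its
  # stride is folded into the atrous rate for any subsequent layers.
  return stack_blocks_dense(inputs, blocks, output_stride=2)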
def resnet_arg_scope(
weight_decay=0.0001,
batch_norm_decay=0.997,
batch_norm_epsilon=1e-5,
batch_norm_scale=True,
activation_fn=tf.nn.relu,
use_batch_norm=True,
batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS):
"""Defines the default ResNet arg scope.
TODO(gpapan): The batch-normalization related default values above are
appropriate for use in conjunction with the reference ResNet models
released at https://github.com/KaimingHe/deep-residual-networks. When
training ResNets from scratch, they might need to be tuned.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: The moving average decay when estimating layer activation
statistics in batch normalization.
batch_norm_epsilon: Small constant to prevent division by zero when
normalizing activations by their variance in batch normalization.
batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the
activations in the batch normalization layer.
activation_fn: The activation function which is used in ResNet.
use_batch_norm: Whether or not to use batch normalization.
batch_norm_updates_collections: Collection for the update ops for
batch norm.
Returns:
An `arg_scope` to use for the resnet models.
"""
batch_norm_params = {
'decay': batch_norm_decay,
'epsilon': batch_norm_epsilon,
'scale': batch_norm_scale,
'updates_collections': batch_norm_updates_collections,
'fused': None, # Use fused batch norm if possible.
}
with slim.arg_scope(
[slim.conv2d],
weights_regularizer=slim.l2_regularizer(weight_decay),
weights_initializer=slim.variance_scaling_initializer(),
activation_fn=activation_fn,
normalizer_fn=slim.batch_norm if use_batch_norm else None,
normalizer_params=batch_norm_params):
with slim.arg_scope([slim.batch_norm], **batch_norm_params):
# The following implies padding='SAME' for pool1, which makes feature
# alignment easier for dense prediction tasks. This is also used in
# https://github.com/facebook/fb.resnet.torch. However the accompanying
# code of 'Deep Residual Learning for Image Recognition' uses
# padding='VALID' for pool1. You can switch to that choice by setting
# slim.arg_scope([slim.max_pool2d], padding='VALID').
with slim.arg_scope([slim.max_pool2d], padding='SAME') as arg_sc:
        return arg_sc
# --- end of slim/nets/resnet_utils.py ---
"""Contains a factory for building various models."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import tf_slim as slim
from nets import alexnet
from nets import cifarnet
from nets import i3d
from nets import inception
from nets import lenet
from nets import mobilenet_v1
from nets import overfeat
from nets import resnet_v1
from nets import resnet_v2
from nets import s3dg
from nets import vgg
from nets.mobilenet import mobilenet_v2
from nets.mobilenet import mobilenet_v3
from nets.nasnet import nasnet
from nets.nasnet import pnasnet
networks_map = {
'alexnet_v2': alexnet.alexnet_v2,
'cifarnet': cifarnet.cifarnet,
'overfeat': overfeat.overfeat,
'vgg_a': vgg.vgg_a,
'vgg_16': vgg.vgg_16,
'vgg_19': vgg.vgg_19,
'inception_v1': inception.inception_v1,
'inception_v2': inception.inception_v2,
'inception_v3': inception.inception_v3,
'inception_v4': inception.inception_v4,
'inception_resnet_v2': inception.inception_resnet_v2,
'i3d': i3d.i3d,
's3dg': s3dg.s3dg,
'lenet': lenet.lenet,
'resnet_v1_50': resnet_v1.resnet_v1_50,
'resnet_v1_101': resnet_v1.resnet_v1_101,
'resnet_v1_152': resnet_v1.resnet_v1_152,
'resnet_v1_200': resnet_v1.resnet_v1_200,
'resnet_v2_50': resnet_v2.resnet_v2_50,
'resnet_v2_101': resnet_v2.resnet_v2_101,
'resnet_v2_152': resnet_v2.resnet_v2_152,
'resnet_v2_200': resnet_v2.resnet_v2_200,
'mobilenet_v1': mobilenet_v1.mobilenet_v1,
'mobilenet_v1_075': mobilenet_v1.mobilenet_v1_075,
'mobilenet_v1_050': mobilenet_v1.mobilenet_v1_050,
'mobilenet_v1_025': mobilenet_v1.mobilenet_v1_025,
'mobilenet_v2': mobilenet_v2.mobilenet,
'mobilenet_v2_140': mobilenet_v2.mobilenet_v2_140,
'mobilenet_v2_035': mobilenet_v2.mobilenet_v2_035,
'mobilenet_v3_small': mobilenet_v3.small,
'mobilenet_v3_large': mobilenet_v3.large,
'mobilenet_v3_small_minimalistic': mobilenet_v3.small_minimalistic,
'mobilenet_v3_large_minimalistic': mobilenet_v3.large_minimalistic,
'mobilenet_edgetpu': mobilenet_v3.edge_tpu,
'mobilenet_edgetpu_075': mobilenet_v3.edge_tpu_075,
'nasnet_cifar': nasnet.build_nasnet_cifar,
'nasnet_mobile': nasnet.build_nasnet_mobile,
'nasnet_large': nasnet.build_nasnet_large,
'pnasnet_large': pnasnet.build_pnasnet_large,
'pnasnet_mobile': pnasnet.build_pnasnet_mobile,
}
arg_scopes_map = {
'alexnet_v2': alexnet.alexnet_v2_arg_scope,
'cifarnet': cifarnet.cifarnet_arg_scope,
'overfeat': overfeat.overfeat_arg_scope,
'vgg_a': vgg.vgg_arg_scope,
'vgg_16': vgg.vgg_arg_scope,
'vgg_19': vgg.vgg_arg_scope,
'inception_v1': inception.inception_v3_arg_scope,
'inception_v2': inception.inception_v3_arg_scope,
'inception_v3': inception.inception_v3_arg_scope,
'inception_v4': inception.inception_v4_arg_scope,
'inception_resnet_v2': inception.inception_resnet_v2_arg_scope,
'i3d': i3d.i3d_arg_scope,
's3dg': s3dg.s3dg_arg_scope,
'lenet': lenet.lenet_arg_scope,
'resnet_v1_50': resnet_v1.resnet_arg_scope,
'resnet_v1_101': resnet_v1.resnet_arg_scope,
'resnet_v1_152': resnet_v1.resnet_arg_scope,
'resnet_v1_200': resnet_v1.resnet_arg_scope,
'resnet_v2_50': resnet_v2.resnet_arg_scope,
'resnet_v2_101': resnet_v2.resnet_arg_scope,
'resnet_v2_152': resnet_v2.resnet_arg_scope,
'resnet_v2_200': resnet_v2.resnet_arg_scope,
'mobilenet_v1': mobilenet_v1.mobilenet_v1_arg_scope,
'mobilenet_v1_075': mobilenet_v1.mobilenet_v1_arg_scope,
'mobilenet_v1_050': mobilenet_v1.mobilenet_v1_arg_scope,
'mobilenet_v1_025': mobilenet_v1.mobilenet_v1_arg_scope,
'mobilenet_v2': mobilenet_v2.training_scope,
'mobilenet_v2_035': mobilenet_v2.training_scope,
'mobilenet_v2_140': mobilenet_v2.training_scope,
'mobilenet_v3_small': mobilenet_v3.training_scope,
'mobilenet_v3_large': mobilenet_v3.training_scope,
'mobilenet_v3_small_minimalistic': mobilenet_v3.training_scope,
'mobilenet_v3_large_minimalistic': mobilenet_v3.training_scope,
'mobilenet_edgetpu': mobilenet_v3.training_scope,
'mobilenet_edgetpu_075': mobilenet_v3.training_scope,
'nasnet_cifar': nasnet.nasnet_cifar_arg_scope,
'nasnet_mobile': nasnet.nasnet_mobile_arg_scope,
'nasnet_large': nasnet.nasnet_large_arg_scope,
'pnasnet_large': pnasnet.pnasnet_large_arg_scope,
'pnasnet_mobile': pnasnet.pnasnet_mobile_arg_scope,
}
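# Usage sketch (added; `get_network_fn` is defined just below): resolving a
# model by name and applying it to a batch of images.
def _example_get_network_fn():
  import tensorflow.compat.v1 as tf  # Local import; this module doesn't use tf.
  network_fn = get_network_fn('mobilenet_v1', num_classes=10,
                              weight_decay=4e-5, is_training=False)
  size = network_fn.default_image_size  # 224 for mobilenet_v1.
  images = tf.random_uniform([4, size, size, 3])
  logits, end_points = network_fn(images)  # logits: [4, 10]
  return logits, end_points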
def get_network_fn(name, num_classes, weight_decay=0.0, is_training=False):
"""Returns a network_fn such as `logits, end_points = network_fn(images)`.
Args:
name: The name of the network.
num_classes: The number of classes to use for classification. If 0 or None,
the logits layer is omitted and its input features are returned instead.
weight_decay: The l2 coefficient for the model weights.
is_training: `True` if the model is being used for training and `False`
otherwise.
Returns:
network_fn: A function that applies the model to a batch of images. It has
the following signature:
net, end_points = network_fn(images)
The `images` input is a tensor of shape [batch_size, height, width, 3 or
1] with height = width = network_fn.default_image_size. (The
permissibility and treatment of other sizes depends on the network_fn.)
The returned `end_points` are a dictionary of intermediate activations.
The returned `net` is the topmost layer, depending on `num_classes`:
If `num_classes` was a non-zero integer, `net` is a logits tensor
of shape [batch_size, num_classes].
If `num_classes` was 0 or `None`, `net` is a tensor with the input
to the logits layer of shape [batch_size, 1, 1, num_features] or
[batch_size, num_features]. Dropout has not been applied to this
      (even if the network's original classifier does); it remains for
the caller to do this or not.
Raises:
ValueError: If network `name` is not recognized.
"""
if name not in networks_map:
    raise ValueError('Unknown network name: %s' % name)
func = networks_map[name]
@functools.wraps(func)
def network_fn(images, **kwargs):
arg_scope = arg_scopes_map[name](weight_decay=weight_decay)
with slim.arg_scope(arg_scope):
return func(images, num_classes=num_classes, is_training=is_training,
**kwargs)
if hasattr(func, 'default_image_size'):
network_fn.default_image_size = func.default_image_size
  return network_fn
# --- end of slim/nets/nets_factory.py ---
"""Contains the definition for inception v2 classification network."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception_utils
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def inception_v2_base(inputs,
final_endpoint='Mixed_5c',
min_depth=16,
depth_multiplier=1.0,
use_separable_conv=True,
data_format='NHWC',
include_root_block=True,
scope=None):
"""Inception v2 (6a2).
Constructs an Inception v2 network from inputs to the given final endpoint.
This method can construct the network up to the layer inception(5b) as
described in http://arxiv.org/abs/1502.03167.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c', 'Mixed_4a',
'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e', 'Mixed_5a', 'Mixed_5b',
'Mixed_5c']. If include_root_block is False, ['Conv2d_1a_7x7',
'MaxPool_2a_3x3', 'Conv2d_2b_1x1', 'Conv2d_2c_3x3', 'MaxPool_3a_3x3'] will
not be available.
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
use_separable_conv: Use a separable convolution for the first layer
Conv2d_1a_7x7. If this is False, use a normal convolution instead.
data_format: Data format of the activations ('NHWC' or 'NCHW').
include_root_block: If True, include the convolution and max-pooling layers
before the inception modules. If False, excludes those layers.
scope: Optional variable_scope.
Returns:
tensor_out: output tensor corresponding to the final_endpoint.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or depth_multiplier <= 0
"""
# end_points will collect relevant activations for external use, for example
# summaries or losses.
end_points = {}
# Used to find thinned depths for each layer.
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
depth = lambda d: max(int(d * depth_multiplier), min_depth)
if data_format != 'NHWC' and data_format != 'NCHW':
raise ValueError('data_format must be either NHWC or NCHW.')
if data_format == 'NCHW' and use_separable_conv:
raise ValueError(
'separable convolution only supports NHWC layout. NCHW data format can'
' only be used when use_separable_conv is False.'
)
concat_dim = 3 if data_format == 'NHWC' else 1
with tf.variable_scope(scope, 'InceptionV2', [inputs]):
with slim.arg_scope(
[slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1,
padding='SAME',
data_format=data_format):
net = inputs
if include_root_block:
# Note that sizes in the comments below assume an input spatial size of
        # 224x224; however, the inputs can be of any size greater than 32x32.
# 224 x 224 x 3
end_point = 'Conv2d_1a_7x7'
if use_separable_conv:
# depthwise_multiplier here is different from depth_multiplier.
# depthwise_multiplier determines the output channels of the initial
# depthwise conv (see docs for tf.nn.separable_conv2d), while
# depth_multiplier controls the # channels of the subsequent 1x1
# convolution. Must have
          # in_channels * depthwise_multiplier <= out_channels
# so that the separable convolution is not overparameterized.
depthwise_multiplier = min(int(depth(64) / 3), 8)
net = slim.separable_conv2d(
inputs,
depth(64), [7, 7],
depth_multiplier=depthwise_multiplier,
stride=2,
padding='SAME',
weights_initializer=trunc_normal(1.0),
scope=end_point)
else:
# Use a normal convolution instead of a separable convolution.
net = slim.conv2d(
inputs,
depth(64), [7, 7],
stride=2,
weights_initializer=trunc_normal(1.0),
scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 112 x 112 x 64
end_point = 'MaxPool_2a_3x3'
net = slim.max_pool2d(net, [3, 3], scope=end_point, stride=2)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 56 x 56 x 64
end_point = 'Conv2d_2b_1x1'
net = slim.conv2d(
net,
depth(64), [1, 1],
scope=end_point,
weights_initializer=trunc_normal(0.1))
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 56 x 56 x 64
end_point = 'Conv2d_2c_3x3'
net = slim.conv2d(net, depth(192), [3, 3], scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 56 x 56 x 192
end_point = 'MaxPool_3a_3x3'
net = slim.max_pool2d(net, [3, 3], scope=end_point, stride=2)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
# 28 x 28 x 192
# Inception module.
end_point = 'Mixed_3b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(64), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(32), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 28 x 28 x 256
end_point = 'Mixed_3c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(64), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 28 x 28 x 320
end_point = 'Mixed_4a'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, depth(160), [3, 3], stride=2,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(
branch_1, depth(96), [3, 3], scope='Conv2d_0b_3x3')
branch_1 = slim.conv2d(
branch_1, depth(96), [3, 3], stride=2, scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(
net, [3, 3], stride=2, scope='MaxPool_1a_3x3')
net = tf.concat(axis=concat_dim, values=[branch_0, branch_1, branch_2])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_4b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(224), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(64), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(
branch_1, depth(96), [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(96), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(128), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(128), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(128), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_4c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(96), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(128), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(96), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(128), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(128), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(128), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_4d'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(160), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(160), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(160), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(96), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_4e'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(96), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(192), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(160), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(192), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(192), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(96), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 14 x 14 x 576
end_point = 'Mixed_5a'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(
net, depth(128), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, depth(192), [3, 3], stride=2,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(192), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(256), [3, 3],
scope='Conv2d_0b_3x3')
branch_1 = slim.conv2d(branch_1, depth(256), [3, 3], stride=2,
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(net, [3, 3], stride=2,
scope='MaxPool_1a_3x3')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 7 x 7 x 1024
end_point = 'Mixed_5b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(352), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(192), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(320), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(160), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(224), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(224), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(128), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 7 x 7 x 1024
end_point = 'Mixed_5c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(352), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(
net, depth(192), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(320), [3, 3],
scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(
net, depth(192), [1, 1],
weights_initializer=trunc_normal(0.09),
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(224), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(224), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(128), [1, 1],
weights_initializer=trunc_normal(0.1),
scope='Conv2d_0b_1x1')
net = tf.concat(
axis=concat_dim, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
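# Sketch (added): building the base network only up to an intermediate
# endpoint, which is how detection models typically consume this backbone.
def _example_inception_v2_base():
  images = tf.random_uniform([1, 224, 224, 3])
  net, end_points = inception_v2_base(images, final_endpoint='Mixed_4e')
  return net, end_points  # net: the 14 x 14 x 576 Mixed_4e activation.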
def inception_v2(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.8,
min_depth=16,
depth_multiplier=1.0,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='InceptionV2',
global_pool=False):
"""Inception v2 model for classification.
Constructs an Inception v2 network for classification as described in
http://arxiv.org/abs/1502.03167.
The default image size used to train this network is 224x224.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
    is_training: whether the network is being trained.
dropout_keep_prob: the percentage of activation values that are retained.
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape [B, C]; if False, logits is
      of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
global_pool: Optional boolean flag to control the avgpooling before the
logits layer. If false or unset, pooling is done with a fixed window
that reduces default-sized inputs to 1x1, while larger inputs lead to
larger outputs. If true, any input size is pooled down to 1x1.
Returns:
net: a Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or depth_multiplier <= 0
"""
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
# Final pooling and prediction
with tf.variable_scope(
scope, 'InceptionV2', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v2_base(
inputs, scope=scope, min_depth=min_depth,
depth_multiplier=depth_multiplier)
with tf.variable_scope('Logits'):
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
else:
# Pooling with a fixed kernel size.
kernel_size = _reduced_kernel_size_for_small_input(net, [7, 7])
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a_{}x{}'.format(*kernel_size))
end_points['AvgPool_1a'] = net
if not num_classes:
return net, end_points
# 1 x 1 x 1024
net = slim.dropout(net, keep_prob=dropout_keep_prob, scope='Dropout_1b')
end_points['PreLogits'] = net
logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='Conv2d_1c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
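# Usage sketch (added): classification predictions for a batch of 224x224
# images, built inside the canonical Inception arg scope.
def _example_inception_v2():
  images = tf.random_uniform([8, 224, 224, 3])
  with slim.arg_scope(inception_utils.inception_arg_scope()):
    _, end_points = inception_v2(images, num_classes=1001, is_training=False)
  return end_points['Predictions']  # Softmax over 1001 classes.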
inception_v2.default_image_size = 224
def _reduced_kernel_size_for_small_input(input_tensor, kernel_size):
"""Define kernel size which is automatically reduced for small input.
  If the shape of the input images is unknown at graph construction time, this
  function assumes that the input images are large enough.
Args:
input_tensor: input tensor of size [batch_size, height, width, channels].
kernel_size: desired kernel size of length 2: [kernel_height, kernel_width]
Returns:
a tensor with the kernel size.
TODO(jrru): Make this function work with unknown shapes. Theoretically, this
can be done with the code below. Problems are two-fold: (1) If the shape was
known, it will be lost. (2) inception.slim.ops._two_element_tuple cannot
handle tensors that define the kernel size.
shape = tf.shape(input_tensor)
      return tf.stack([tf.minimum(shape[1], kernel_size[0]),
                       tf.minimum(shape[2], kernel_size[1])])
"""
shape = input_tensor.get_shape().as_list()
if shape[1] is None or shape[2] is None:
kernel_size_out = kernel_size
else:
kernel_size_out = [min(shape[1], kernel_size[0]),
min(shape[2], kernel_size[1])]
return kernel_size_out
inception_v2_arg_scope = inception_utils.inception_arg_scope
# --- end of slim/nets/inception_v2.py ---
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
def inception_arg_scope(
weight_decay=0.00004,
use_batch_norm=True,
batch_norm_decay=0.9997,
batch_norm_epsilon=0.001,
activation_fn=tf.nn.relu,
batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS,
batch_norm_scale=False):
"""Defines the default arg scope for inception models.
Args:
weight_decay: The weight decay to use for regularizing the model.
use_batch_norm: "If `True`, batch_norm is applied after each convolution.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
activation_fn: Activation function for conv2d.
batch_norm_updates_collections: Collection for the update ops for
batch norm.
batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the
activations in the batch normalization layer.
Returns:
An `arg_scope` to use for the inception models.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
# collection containing update_ops.
'updates_collections': batch_norm_updates_collections,
# use fused batch norm if possible.
'fused': None,
'scale': batch_norm_scale,
}
if use_batch_norm:
normalizer_fn = slim.batch_norm
normalizer_params = batch_norm_params
else:
normalizer_fn = None
normalizer_params = {}
# Set weight_decay for weights in Conv and FC layers.
with slim.arg_scope([slim.conv2d, slim.fully_connected],
weights_regularizer=slim.l2_regularizer(weight_decay)):
with slim.arg_scope(
[slim.conv2d],
weights_initializer=slim.variance_scaling_initializer(),
activation_fn=activation_fn,
normalizer_fn=normalizer_fn,
normalizer_params=normalizer_params) as sc:
      return sc
# --- end of slim/nets/inception_utils.py ---
"""Contains the definition for inception v1 classification network."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception_utils
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def inception_v1_base(inputs,
final_endpoint='Mixed_5c',
include_root_block=True,
scope='InceptionV1'):
"""Defines the Inception V1 base architecture.
This architecture is defined in:
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed,
Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich.
http://arxiv.org/pdf/1409.4842v1.pdf.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b', 'Mixed_5c']. If
include_root_block is False, ['Conv2d_1a_7x7', 'MaxPool_2a_3x3',
'Conv2d_2b_1x1', 'Conv2d_2c_3x3', 'MaxPool_3a_3x3'] will not be available.
include_root_block: If True, include the convolution and max-pooling layers
before the inception modules. If False, excludes those layers.
scope: Optional variable_scope.
Returns:
A dictionary from components of the network to the corresponding activation.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values.
"""
end_points = {}
with tf.variable_scope(scope, 'InceptionV1', [inputs]):
with slim.arg_scope(
[slim.conv2d, slim.fully_connected],
weights_initializer=trunc_normal(0.01)):
with slim.arg_scope([slim.conv2d, slim.max_pool2d],
stride=1, padding='SAME'):
net = inputs
if include_root_block:
end_point = 'Conv2d_1a_7x7'
net = slim.conv2d(inputs, 64, [7, 7], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'MaxPool_2a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'Conv2d_2b_1x1'
net = slim.conv2d(net, 64, [1, 1], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'Conv2d_2c_3x3'
net = slim.conv2d(net, 192, [3, 3], scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'MaxPool_3a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point:
return net, end_points
end_point = 'Mixed_3b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 96, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 128, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 16, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 32, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 32, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_3c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 192, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 32, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'MaxPool_4a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 96, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 208, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 16, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 48, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 112, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 224, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 24, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 64, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4d'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 256, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 24, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 64, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4e'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 112, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 144, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 288, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 32, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 64, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_4f'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 256, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 320, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 32, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 128, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 128, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'MaxPool_5a_2x2'
net = slim.max_pool2d(net, [2, 2], stride=2, scope=end_point)
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_5b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 256, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 320, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 32, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 128, [3, 3], scope='Conv2d_0a_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 128, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
end_point = 'Mixed_5c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 384, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 384, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 128, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 128, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(
axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if final_endpoint == end_point: return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
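# Illustrative sketch (not part of the original file): stopping the base
# network at an intermediate endpoint to use it as a feature extractor. For a
# 224x224 input, 'Mixed_4f' yields a 14x14x832 feature map.
def _example_feature_extractor():  # hypothetical helper, for illustration
  images = tf.placeholder(tf.float32, [1, 224, 224, 3])
  return inception_v1_base(images, final_endpoint='Mixed_4f')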
def inception_v1(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.8,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='InceptionV1',
global_pool=False):
"""Defines the Inception V1 architecture.
This architecture is defined in:
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed,
Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich.
http://arxiv.org/pdf/1409.4842v1.pdf.
The default image size used to train this network is 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
    is_training: whether the model is being trained or not.
dropout_keep_prob: the percentage of activation values that are retained.
prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape [B, C]; if False, logits is of
      shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
global_pool: Optional boolean flag to control the avgpooling before the
logits layer. If false or unset, pooling is done with a fixed window
that reduces default-sized inputs to 1x1, while larger inputs lead to
larger outputs. If true, any input size is pooled down to 1x1.
Returns:
net: a Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
"""
# Final pooling and prediction
with tf.variable_scope(
scope, 'InceptionV1', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v1_base(inputs, scope=scope)
with tf.variable_scope('Logits'):
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
else:
# Pooling with a fixed kernel size.
net = slim.avg_pool2d(net, [7, 7], stride=1, scope='AvgPool_0a_7x7')
end_points['AvgPool_0a_7x7'] = net
if not num_classes:
return net, end_points
net = slim.dropout(net, dropout_keep_prob, scope='Dropout_0b')
logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='Conv2d_0c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
inception_v1.default_image_size = 224
inception_v1_arg_scope = inception_utils.inception_arg_scope
# ----- end of file: slim/nets/inception_v1.py -----
"""Validate mobilenet_v1 with options for quantization."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.contrib import quantize as contrib_quantize
from datasets import dataset_factory
from nets import mobilenet_v1
from preprocessing import preprocessing_factory
flags = tf.app.flags
flags.DEFINE_string('master', '', 'Session master')
flags.DEFINE_integer('batch_size', 250, 'Batch size')
flags.DEFINE_integer('num_classes', 1001, 'Number of classes to distinguish')
flags.DEFINE_integer('num_examples', 50000, 'Number of examples to evaluate')
flags.DEFINE_integer('image_size', 224, 'Input image resolution')
flags.DEFINE_float('depth_multiplier', 1.0, 'Depth multiplier for mobilenet')
flags.DEFINE_bool('quantize', False, 'Quantize training')
flags.DEFINE_string('checkpoint_dir', '', 'The directory for checkpoints')
flags.DEFINE_string('eval_dir', '', 'Directory for writing eval event logs')
flags.DEFINE_string('dataset_dir', '', 'Location of dataset')
FLAGS = flags.FLAGS
def imagenet_input(is_training):
"""Data reader for imagenet.
Reads in imagenet data and performs pre-processing on the images.
Args:
is_training: bool specifying if train or validation dataset is needed.
Returns:
A batch of images and labels.
"""
if is_training:
dataset = dataset_factory.get_dataset('imagenet', 'train',
FLAGS.dataset_dir)
else:
dataset = dataset_factory.get_dataset('imagenet', 'validation',
FLAGS.dataset_dir)
provider = slim.dataset_data_provider.DatasetDataProvider(
dataset,
shuffle=is_training,
common_queue_capacity=2 * FLAGS.batch_size,
common_queue_min=FLAGS.batch_size)
[image, label] = provider.get(['image', 'label'])
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
'mobilenet_v1', is_training=is_training)
image = image_preprocessing_fn(image, FLAGS.image_size, FLAGS.image_size)
images, labels = tf.train.batch(
tensors=[image, label],
batch_size=FLAGS.batch_size,
num_threads=4,
capacity=5 * FLAGS.batch_size)
return images, labels
def metrics(logits, labels):
"""Specify the metrics for eval.
Args:
logits: Logits output from the graph.
labels: Ground truth labels for inputs.
Returns:
Eval Op for the graph.
"""
labels = tf.squeeze(labels)
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'Accuracy':
tf.metrics.accuracy(
tf.argmax(input=logits, axis=1), labels),
'Recall_5':
tf.metrics.recall_at_k(labels, logits, 5),
})
  # `.items()` replaces the Python 2-only `iteritems()`.
  for name, value in names_to_values.items():
slim.summaries.add_scalar_summary(
value, name, prefix='eval', print_summary=True)
return names_to_updates.values()
def build_model():
"""Build the mobilenet_v1 model for evaluation.
Returns:
g: graph with rewrites after insertion of quantization ops and batch norm
folding.
eval_ops: eval ops for inference.
variables_to_restore: List of variables to restore from checkpoint.
"""
g = tf.Graph()
with g.as_default():
inputs, labels = imagenet_input(is_training=False)
scope = mobilenet_v1.mobilenet_v1_arg_scope(
is_training=False, weight_decay=0.0)
with slim.arg_scope(scope):
logits, _ = mobilenet_v1.mobilenet_v1(
inputs,
is_training=False,
depth_multiplier=FLAGS.depth_multiplier,
num_classes=FLAGS.num_classes)
if FLAGS.quantize:
contrib_quantize.create_eval_graph()
eval_ops = metrics(logits, labels)
return g, eval_ops
def eval_model():
"""Evaluates mobilenet_v1."""
g, eval_ops = build_model()
with g.as_default():
num_batches = math.ceil(FLAGS.num_examples / float(FLAGS.batch_size))
slim.evaluation.evaluate_once(
FLAGS.master,
FLAGS.checkpoint_dir,
logdir=FLAGS.eval_dir,
num_evals=num_batches,
eval_op=eval_ops)
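# Example invocation (hypothetical paths; flags as defined above):
#   python mobilenet_v1_eval.py --checkpoint_dir=/tmp/mobilenet_ckpts \
#     --dataset_dir=/tmp/imagenet --eval_dir=/tmp/eval_logs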
def main(unused_arg):
eval_model()
if __name__ == '__main__':
  tf.app.run(main)
# ----- end of file: slim/nets/mobilenet_v1_eval.py -----
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception_utils
def block_inception_a(inputs, scope=None, reuse=None):
"""Builds Inception-A block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockInceptionA', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 96, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 96, [3, 3], scope='Conv2d_0b_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(inputs, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(inputs, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 96, [1, 1], scope='Conv2d_0b_1x1')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
def block_reduction_a(inputs, scope=None, reuse=None):
"""Builds Reduction-A block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockReductionA', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 384, [3, 3], stride=2, padding='VALID',
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 224, [3, 3], scope='Conv2d_0b_3x3')
branch_1 = slim.conv2d(branch_1, 256, [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(inputs, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
def block_inception_b(inputs, scope=None, reuse=None):
"""Builds Inception-B block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockInceptionB', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 384, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 224, [1, 7], scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, 256, [7, 1], scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(inputs, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 192, [7, 1], scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, 224, [1, 7], scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, 224, [7, 1], scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, 256, [1, 7], scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(inputs, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 128, [1, 1], scope='Conv2d_0b_1x1')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
def block_reduction_b(inputs, scope=None, reuse=None):
"""Builds Reduction-B block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockReductionB', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 192, [1, 1], scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, 192, [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 256, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 256, [1, 7], scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, 320, [7, 1], scope='Conv2d_0c_7x1')
branch_1 = slim.conv2d(branch_1, 320, [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(inputs, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
def block_inception_c(inputs, scope=None, reuse=None):
"""Builds Inception-C block for Inception v4 network."""
# By default use stride=1 and SAME padding
with slim.arg_scope([slim.conv2d, slim.avg_pool2d, slim.max_pool2d],
stride=1, padding='SAME'):
with tf.variable_scope(
scope, 'BlockInceptionC', [inputs], reuse=reuse):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(inputs, 256, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(inputs, 384, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = tf.concat(axis=3, values=[
slim.conv2d(branch_1, 256, [1, 3], scope='Conv2d_0b_1x3'),
slim.conv2d(branch_1, 256, [3, 1], scope='Conv2d_0c_3x1')])
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(inputs, 384, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 448, [3, 1], scope='Conv2d_0b_3x1')
branch_2 = slim.conv2d(branch_2, 512, [1, 3], scope='Conv2d_0c_1x3')
branch_2 = tf.concat(axis=3, values=[
slim.conv2d(branch_2, 256, [1, 3], scope='Conv2d_0d_1x3'),
slim.conv2d(branch_2, 256, [3, 1], scope='Conv2d_0e_3x1')])
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(inputs, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 256, [1, 1], scope='Conv2d_0b_1x1')
return tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
def inception_v4_base(inputs, final_endpoint='Mixed_7d', scope=None):
"""Creates the Inception V4 network up to the given final endpoint.
Args:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
final_endpoint: specifies the endpoint to construct the network up to.
It can be one of [ 'Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'Mixed_3a', 'Mixed_4a', 'Mixed_5a', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_5e', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d', 'Mixed_6e',
'Mixed_6f', 'Mixed_6g', 'Mixed_6h', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c',
'Mixed_7d']
scope: Optional variable_scope.
Returns:
logits: the logits outputs of the model.
end_points: the set of end_points from the inception model.
Raises:
    ValueError: if final_endpoint is not set to one of the predefined values.
"""
end_points = {}
def add_and_check_final(name, net):
end_points[name] = net
return name == final_endpoint
with tf.variable_scope(scope, 'InceptionV4', [inputs]):
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# 299 x 299 x 3
net = slim.conv2d(inputs, 32, [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
if add_and_check_final('Conv2d_1a_3x3', net): return net, end_points
# 149 x 149 x 32
net = slim.conv2d(net, 32, [3, 3], padding='VALID',
scope='Conv2d_2a_3x3')
if add_and_check_final('Conv2d_2a_3x3', net): return net, end_points
# 147 x 147 x 32
net = slim.conv2d(net, 64, [3, 3], scope='Conv2d_2b_3x3')
if add_and_check_final('Conv2d_2b_3x3', net): return net, end_points
# 147 x 147 x 64
with tf.variable_scope('Mixed_3a'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
scope='MaxPool_0a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 96, [3, 3], stride=2, padding='VALID',
scope='Conv2d_0a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1])
if add_and_check_final('Mixed_3a', net): return net, end_points
# 73 x 73 x 160
with tf.variable_scope('Mixed_4a'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, 96, [3, 3], padding='VALID',
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 64, [1, 7], scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, 64, [7, 1], scope='Conv2d_0c_7x1')
branch_1 = slim.conv2d(branch_1, 96, [3, 3], padding='VALID',
scope='Conv2d_1a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1])
if add_and_check_final('Mixed_4a', net): return net, end_points
# 71 x 71 x 192
with tf.variable_scope('Mixed_5a'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [3, 3], stride=2, padding='VALID',
scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1])
if add_and_check_final('Mixed_5a', net): return net, end_points
# 35 x 35 x 384
# 4 x Inception-A blocks
for idx in range(4):
block_scope = 'Mixed_5' + chr(ord('b') + idx)
net = block_inception_a(net, block_scope)
if add_and_check_final(block_scope, net): return net, end_points
# 35 x 35 x 384
# Reduction-A block
net = block_reduction_a(net, 'Mixed_6a')
if add_and_check_final('Mixed_6a', net): return net, end_points
# 17 x 17 x 1024
# 7 x Inception-B blocks
for idx in range(7):
block_scope = 'Mixed_6' + chr(ord('b') + idx)
net = block_inception_b(net, block_scope)
if add_and_check_final(block_scope, net): return net, end_points
# 17 x 17 x 1024
# Reduction-B block
net = block_reduction_b(net, 'Mixed_7a')
if add_and_check_final('Mixed_7a', net): return net, end_points
# 8 x 8 x 1536
# 3 x Inception-C blocks
for idx in range(3):
block_scope = 'Mixed_7' + chr(ord('b') + idx)
net = block_inception_c(net, block_scope)
if add_and_check_final(block_scope, net): return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
def inception_v4(inputs, num_classes=1001, is_training=True,
dropout_keep_prob=0.8,
reuse=None,
scope='InceptionV4',
create_aux_logits=True):
"""Creates the Inception V4 model.
Args:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
    is_training: whether the model is being trained or not.
dropout_keep_prob: float, the fraction to keep before final layer.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
create_aux_logits: Whether to include the auxiliary logits.
Returns:
net: a Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped input to the logits layer
if num_classes is 0 or None.
end_points: the set of end_points from the inception model.
"""
end_points = {}
with tf.variable_scope(
scope, 'InceptionV4', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v4_base(inputs, scope=scope)
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# Auxiliary Head logits
if create_aux_logits and num_classes:
with tf.variable_scope('AuxLogits'):
# 17 x 17 x 1024
aux_logits = end_points['Mixed_6h']
aux_logits = slim.avg_pool2d(aux_logits, [5, 5], stride=3,
padding='VALID',
scope='AvgPool_1a_5x5')
aux_logits = slim.conv2d(aux_logits, 128, [1, 1],
scope='Conv2d_1b_1x1')
aux_logits = slim.conv2d(aux_logits, 768,
aux_logits.get_shape()[1:3],
padding='VALID', scope='Conv2d_2a')
aux_logits = slim.flatten(aux_logits)
aux_logits = slim.fully_connected(aux_logits, num_classes,
activation_fn=None,
scope='Aux_logits')
end_points['AuxLogits'] = aux_logits
# Final pooling and prediction
# TODO(sguada,arnoegw): Consider adding a parameter global_pool which
# can be set to False to disable pooling here (as in resnet_*()).
with tf.variable_scope('Logits'):
# 8 x 8 x 1536
kernel_size = net.get_shape()[1:3]
if kernel_size.is_fully_defined():
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a')
else:
net = tf.reduce_mean(
input_tensor=net,
axis=[1, 2],
keepdims=True,
name='global_pool')
end_points['global_pool'] = net
if not num_classes:
return net, end_points
# 1 x 1 x 1536
net = slim.dropout(net, dropout_keep_prob, scope='Dropout_1b')
net = slim.flatten(net, scope='PreLogitsFlatten')
end_points['PreLogitsFlatten'] = net
# 1536
logits = slim.fully_connected(net, num_classes, activation_fn=None,
scope='Logits')
end_points['Logits'] = logits
end_points['Predictions'] = tf.nn.softmax(logits, name='Predictions')
return logits, end_points
inception_v4.default_image_size = 299
inception_v4_arg_scope = inception_utils.inception_arg_scope
# ----- end of file: slim/nets/inception_v4.py -----
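# Illustrative usage sketch (an assumption, not part of the original file) for
# the inception_v4 model defined above, at its default 299x299 input size:
def _example_inception_v4():  # hypothetical helper, for illustration
  images = tf.placeholder(tf.float32, [8, 299, 299, 3])
  with slim.arg_scope(inception_v4_arg_scope()):
    return inception_v4(images, num_classes=1001, is_training=False)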
"""Export quantized tflite model from a trained checkpoint."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
from absl import app
from absl import flags
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
from nets import nets_factory
from preprocessing import preprocessing_factory
flags.DEFINE_string("model_name", None,
"The name of the architecture to quantize.")
flags.DEFINE_string("checkpoint_path", None, "Path to the training checkpoint.")
flags.DEFINE_string("dataset_name", "imagenet2012",
"Name of the dataset to use for quantization calibration.")
flags.DEFINE_string("dataset_dir", None, "Dataset location.")
flags.DEFINE_string(
"dataset_split", "train",
"The dataset split (train, validation etc.) to use for calibration.")
flags.DEFINE_string("output_tflite", None, "Path to output tflite file.")
flags.DEFINE_boolean(
"use_model_specific_preprocessing", False,
"When true, uses the preprocessing corresponding to the model as specified "
"in preprocessing factory.")
flags.DEFINE_boolean("enable_ema", True,
"Load exponential moving average version of variables.")
flags.DEFINE_integer(
"num_steps", 1000,
"Number of post-training quantization calibration steps to run.")
flags.DEFINE_integer("image_size", 224, "Size of the input image.")
flags.DEFINE_integer("num_classes", 1001,
"Number of output classes for the model.")
FLAGS = flags.FLAGS
# Mean and standard deviation used for normalizing the image tensor.
_MEAN_RGB = 127.5
_STD_RGB = 127.5
def _preprocess_for_quantization(image_data, image_size, crop_padding=32):
"""Crops to center of image with padding then scales, normalizes image_size.
Args:
image_data: A 3D Tensor representing the RGB image data. Image can be of
arbitrary height and width.
image_size: image height/width dimension.
crop_padding: the padding size to use when centering the crop.
Returns:
A decoded and cropped image Tensor. Image is normalized to [-1,1].
"""
shape = tf.shape(image_data)
image_height = shape[0]
image_width = shape[1]
padded_center_crop_size = tf.cast(
(image_size * 1.0 / (image_size + crop_padding)) *
tf.cast(tf.minimum(image_height, image_width), tf.float32), tf.int32)
offset_height = ((image_height - padded_center_crop_size) + 1) // 2
offset_width = ((image_width - padded_center_crop_size) + 1) // 2
image = tf.image.crop_to_bounding_box(
image_data,
offset_height=offset_height,
offset_width=offset_width,
target_height=padded_center_crop_size,
target_width=padded_center_crop_size)
image = tf.image.resize([image], [image_size, image_size],
method=tf.image.ResizeMethod.BICUBIC)[0]
image = tf.cast(image, tf.float32)
image -= tf.constant(_MEAN_RGB)
image /= tf.constant(_STD_RGB)
return image
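# Worked example (illustrative): with image_size=224 and crop_padding=32 the
# crop ratio is 224 / 256 = 0.875, so a 256x340 input gives
# padded_center_crop_size = int(0.875 * min(256, 340)) = 224, i.e. a 224x224
# center crop before the bicubic resize.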
def restore_model(sess, checkpoint_path, enable_ema=True):
"""Restore variables from the checkpoint into the provided session.
Args:
sess: A tensorflow session where the checkpoint will be loaded.
checkpoint_path: Path to the trained checkpoint.
enable_ema: (optional) Whether to load the exponential moving average (ema)
version of the tensorflow variables. Defaults to True.
"""
if enable_ema:
ema = tf.train.ExponentialMovingAverage(decay=0.0)
ema_vars = tf.trainable_variables() + tf.get_collection("moving_vars")
for v in tf.global_variables():
if "moving_mean" in v.name or "moving_variance" in v.name:
ema_vars.append(v)
ema_vars = list(set(ema_vars))
var_dict = ema.variables_to_restore(ema_vars)
else:
var_dict = None
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(var_dict, max_to_keep=1)
saver.restore(sess, checkpoint_path)
def _representative_dataset_gen():
"""Gets a python generator of numpy arrays for the given dataset."""
image_size = FLAGS.image_size
dataset = tfds.builder(FLAGS.dataset_name, data_dir=FLAGS.dataset_dir)
dataset.download_and_prepare()
data = dataset.as_dataset()[FLAGS.dataset_split]
iterator = tf.data.make_one_shot_iterator(data)
if FLAGS.use_model_specific_preprocessing:
preprocess_fn = functools.partial(
preprocessing_factory.get_preprocessing(name=FLAGS.model_name),
output_height=image_size,
output_width=image_size)
else:
preprocess_fn = functools.partial(
_preprocess_for_quantization, image_size=image_size)
features = iterator.get_next()
image = features["image"]
image = preprocess_fn(image)
image = tf.reshape(image, [1, image_size, image_size, 3])
for _ in range(FLAGS.num_steps):
yield [image.eval()]
def main(_):
with tf.Graph().as_default(), tf.Session() as sess:
network_fn = nets_factory.get_network_fn(
FLAGS.model_name, num_classes=FLAGS.num_classes, is_training=False)
image_size = FLAGS.image_size
images = tf.placeholder(
tf.float32, shape=(1, image_size, image_size, 3), name="images")
logits, _ = network_fn(images)
output_tensor = tf.nn.softmax(logits)
restore_model(sess, FLAGS.checkpoint_path, enable_ema=FLAGS.enable_ema)
converter = tf.lite.TFLiteConverter.from_session(sess, [images],
[output_tensor])
converter.representative_dataset = tf.lite.RepresentativeDataset(
_representative_dataset_gen)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_buffer = converter.convert()
with tf.gfile.GFile(FLAGS.output_tflite, "wb") as output_tflite:
output_tflite.write(tflite_buffer)
print("tflite model written to %s" % FLAGS.output_tflite)
if __name__ == "__main__":
flags.mark_flag_as_required("model_name")
flags.mark_flag_as_required("checkpoint_path")
flags.mark_flag_as_required("dataset_dir")
flags.mark_flag_as_required("output_tflite")
  app.run(main)
# ----- end of file: slim/nets/post_training_quantization.py -----
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def overfeat_arg_scope(weight_decay=0.0005):
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(weight_decay),
biases_initializer=tf.zeros_initializer()):
with slim.arg_scope([slim.conv2d], padding='SAME'):
with slim.arg_scope([slim.max_pool2d], padding='VALID') as arg_sc:
return arg_sc
def overfeat(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
scope='overfeat',
global_pool=False):
"""Contains the model definition for the OverFeat network.
The definition for the network was obtained from:
OverFeat: Integrated Recognition, Localization and Detection using
Convolutional Networks
Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus and
Yann LeCun, 2014
http://arxiv.org/abs/1312.6229
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 231x231. To use in fully
convolutional mode, set spatial_squeeze to false.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not the spatial dimensions of the outputs
      should be squeezed. Useful to remove unnecessary dimensions for
      classification.
scope: Optional scope for the variables.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original OverFeat.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0 or
None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'overfeat', [inputs]) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID',
scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.conv2d(net, 256, [5, 5], padding='VALID', scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.conv2d(net, 512, [3, 3], scope='conv3')
net = slim.conv2d(net, 1024, [3, 3], scope='conv4')
net = slim.conv2d(net, 1024, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
with slim.arg_scope(
[slim.conv2d],
weights_initializer=trunc_normal(0.005),
biases_initializer=tf.constant_initializer(0.1)):
net = slim.conv2d(net, 3072, [6, 6], padding='VALID', scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(
end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(
net,
num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
biases_initializer=tf.zeros_initializer(),
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
overfeat.default_image_size = 231
# ----- end of file: slim/nets/overfeat.py -----
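# Illustrative usage sketch (an assumption, not part of the original file) for
# the overfeat model defined above; classification mode expects 231x231 input:
def _example_overfeat():  # hypothetical helper, for illustration
  images = tf.placeholder(tf.float32, [1, 231, 231, 3])
  with slim.arg_scope(overfeat_arg_scope()):
    return overfeat(images, num_classes=1000, is_training=False)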
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import i3d_utils
from nets import s3dg
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
conv3d_spatiotemporal = i3d_utils.conv3d_spatiotemporal
def i3d_arg_scope(weight_decay=1e-7,
batch_norm_decay=0.999,
batch_norm_epsilon=0.001,
use_renorm=False,
separable_conv3d=False):
"""Defines default arg_scope for I3D.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
use_renorm: Whether to use batch renormalization or not.
separable_conv3d: Whether to use separable 3d Convs.
Returns:
sc: An arg_scope to use for the models.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
# Turns off fused batch norm.
'fused': False,
'renorm': use_renorm,
# collection containing the moving mean and moving variance.
'variables_collections': {
'beta': None,
'gamma': None,
'moving_mean': ['moving_vars'],
'moving_variance': ['moving_vars'],
}
}
with slim.arg_scope(
[slim.conv3d, conv3d_spatiotemporal],
weights_regularizer=slim.l2_regularizer(weight_decay),
activation_fn=tf.nn.relu,
normalizer_fn=slim.batch_norm,
normalizer_params=batch_norm_params):
with slim.arg_scope(
[conv3d_spatiotemporal], separable=separable_conv3d) as sc:
return sc
def i3d_base(inputs, final_endpoint='Mixed_5c',
scope='InceptionV1'):
"""Defines the I3D base architecture.
Note that we use the names as defined in Inception V1 to facilitate checkpoint
conversion from an image-trained Inception V1 checkpoint to I3D checkpoint.
Args:
inputs: A 5-D float tensor of size [batch_size, num_frames, height, width,
channels].
final_endpoint: Specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_7x7', 'MaxPool_2a_3x3', 'Conv2d_2b_1x1',
'Conv2d_2c_3x3', 'MaxPool_3a_3x3', 'Mixed_3b', 'Mixed_3c',
'MaxPool_4a_3x3', 'Mixed_4b', 'Mixed_4c', 'Mixed_4d', 'Mixed_4e',
'Mixed_4f', 'MaxPool_5a_2x2', 'Mixed_5b', 'Mixed_5c']
scope: Optional variable_scope.
Returns:
A dictionary from components of the network to the corresponding activation.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values.
"""
return s3dg.s3dg_base(
inputs,
first_temporal_kernel_size=7,
temporal_conv_startat='Conv2d_2c_3x3',
gating_startat=None,
final_endpoint=final_endpoint,
min_depth=16,
depth_multiplier=1.0,
data_format='NDHWC',
scope=scope)
def i3d(inputs,
num_classes=1000,
dropout_keep_prob=0.8,
is_training=True,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='InceptionV1'):
"""Defines the I3D architecture.
The default image size used to train this network is 224x224.
Args:
inputs: A 5-D float tensor of size [batch_size, num_frames, height, width,
channels].
num_classes: number of predicted classes.
dropout_keep_prob: the percentage of activation values that are retained.
    is_training: whether the model is being trained or not.
    prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape [B, C]; if False, logits is of
      shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
Returns:
logits: the pre-softmax activations, a tensor of size
[batch_size, num_classes]
end_points: a dictionary from components of the network to the corresponding
activation.
"""
# Final pooling and prediction
with tf.variable_scope(
scope, 'InceptionV1', [inputs, num_classes], reuse=reuse) as scope:
with slim.arg_scope(
[slim.batch_norm, slim.dropout], is_training=is_training):
net, end_points = i3d_base(inputs, scope=scope)
with tf.variable_scope('Logits'):
kernel_size = i3d_utils.reduced_kernel_size_3d(net, [2, 7, 7])
net = slim.avg_pool3d(
net, kernel_size, stride=1, scope='AvgPool_0a_7x7')
net = slim.dropout(net, dropout_keep_prob, scope='Dropout_0b')
logits = slim.conv3d(
net,
num_classes, [1, 1, 1],
activation_fn=None,
normalizer_fn=None,
scope='Conv2d_0c_1x1')
# Temporal average pooling.
logits = tf.reduce_mean(input_tensor=logits, axis=1)
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
i3d.default_image_size = 224
# ----- end of file: slim/nets/i3d.py -----
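# Illustrative usage sketch (an assumption, not part of the original file) for
# the i3d model defined above: video clips are 5-D [batch, frames, h, w, 3].
def _example_i3d():  # hypothetical helper, for illustration
  clip = tf.placeholder(tf.float32, [1, 64, 224, 224, 3])
  with slim.arg_scope(i3d_arg_scope()):
    return i3d(clip, num_classes=400, is_training=False)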
"""Contains a variant of the CIFAR-10 model definition."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
stddev=stddev)
def cifarnet(images, num_classes=10, is_training=False,
dropout_keep_prob=0.5,
prediction_fn=slim.softmax,
scope='CifarNet'):
"""Creates a variant of the CifarNet model.
Note that since the output is a set of 'logits', the values fall in the
interval of (-infinity, infinity). Consequently, to convert the outputs to a
  probability distribution over the classes, one will need to convert them
using the softmax function:
logits = cifarnet.cifarnet(images, is_training=False)
probabilities = tf.nn.softmax(logits)
predictions = tf.argmax(logits, 1)
Args:
images: A batch of `Tensors` of size [batch_size, height, width, channels].
num_classes: the number of classes in the dataset. If 0 or None, the logits
layer is omitted and the input features to the logits layer are returned
instead.
is_training: specifies whether or not we're currently training the model.
This variable will determine the behaviour of the dropout layer.
dropout_keep_prob: the percentage of activation values that are retained.
prediction_fn: a function to get predictions out of logits.
scope: Optional variable_scope.
Returns:
net: a 2D Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the input to the logits layer if num_classes
is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
"""
end_points = {}
with tf.variable_scope(scope, 'CifarNet', [images]):
net = slim.conv2d(images, 64, [5, 5], scope='conv1')
end_points['conv1'] = net
net = slim.max_pool2d(net, [2, 2], 2, scope='pool1')
end_points['pool1'] = net
net = tf.nn.lrn(net, 4, bias=1.0, alpha=0.001/9.0, beta=0.75, name='norm1')
net = slim.conv2d(net, 64, [5, 5], scope='conv2')
end_points['conv2'] = net
net = tf.nn.lrn(net, 4, bias=1.0, alpha=0.001/9.0, beta=0.75, name='norm2')
net = slim.max_pool2d(net, [2, 2], 2, scope='pool2')
end_points['pool2'] = net
net = slim.flatten(net)
end_points['Flatten'] = net
net = slim.fully_connected(net, 384, scope='fc3')
end_points['fc3'] = net
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout3')
net = slim.fully_connected(net, 192, scope='fc4')
end_points['fc4'] = net
if not num_classes:
return net, end_points
logits = slim.fully_connected(
net,
num_classes,
biases_initializer=tf.zeros_initializer(),
weights_initializer=trunc_normal(1 / 192.0),
weights_regularizer=None,
activation_fn=None,
scope='logits')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
cifarnet.default_image_size = 32
def cifarnet_arg_scope(weight_decay=0.004):
"""Defines the default cifarnet argument scope.
Args:
weight_decay: The weight decay to use for regularizing the model.
Returns:
An `arg_scope` to use for the inception v3 model.
"""
with slim.arg_scope(
[slim.conv2d],
weights_initializer=tf.truncated_normal_initializer(
stddev=5e-2),
activation_fn=tf.nn.relu):
with slim.arg_scope(
[slim.fully_connected],
biases_initializer=tf.constant_initializer(0.1),
weights_initializer=trunc_normal(0.04),
weights_regularizer=slim.l2_regularizer(weight_decay),
activation_fn=tf.nn.relu) as sc:
      return sc
# ----- end of file: slim/nets/cifarnet.py -----
"""Defines the CycleGAN generator and discriminator networks."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.python.framework import tensor_util
def cyclegan_arg_scope(instance_norm_center=True,
instance_norm_scale=True,
instance_norm_epsilon=0.001,
weights_init_stddev=0.02,
weight_decay=0.0):
"""Returns a default argument scope for all generators and discriminators.
Args:
instance_norm_center: Whether instance normalization applies centering.
instance_norm_scale: Whether instance normalization applies scaling.
instance_norm_epsilon: Small float added to the variance in the instance
normalization to avoid dividing by zero.
weights_init_stddev: Standard deviation of the random values to initialize
the convolution kernels with.
weight_decay: Magnitude of weight decay applied to all convolution kernel
variables of the generator.
Returns:
An arg-scope.
"""
instance_norm_params = {
'center': instance_norm_center,
'scale': instance_norm_scale,
'epsilon': instance_norm_epsilon,
}
weights_regularizer = None
if weight_decay and weight_decay > 0.0:
weights_regularizer = slim.l2_regularizer(weight_decay)
with slim.arg_scope(
[slim.conv2d],
normalizer_fn=slim.instance_norm,
normalizer_params=instance_norm_params,
weights_initializer=tf.random_normal_initializer(
0, weights_init_stddev),
weights_regularizer=weights_regularizer) as sc:
return sc
def cyclegan_upsample(net, num_outputs, stride, method='conv2d_transpose',
pad_mode='REFLECT', align_corners=False):
"""Upsamples the given inputs.
Args:
net: A Tensor of size [batch_size, height, width, filters].
num_outputs: The number of output filters.
    stride: A list of 2 scalars or a 1x2 Tensor indicating the scale,
      relative to the inputs, of the output dimensions. For example, if the
      stride is [2, 3], then the output height and width will be twice and
      three times the input size.
method: The upsampling method: 'nn_upsample_conv', 'bilinear_upsample_conv',
or 'conv2d_transpose'.
pad_mode: mode for tf.pad, one of "CONSTANT", "REFLECT", or "SYMMETRIC".
align_corners: option for method, 'bilinear_upsample_conv'. If true, the
centers of the 4 corner pixels of the input and output tensors are
aligned, preserving the values at the corner pixels.
Returns:
A Tensor which was upsampled using the specified method.
Raises:
ValueError: if `method` is not recognized.
"""
with tf.variable_scope('upconv'):
net_shape = tf.shape(input=net)
height = net_shape[1]
width = net_shape[2]
# Reflection pad by 1 in spatial dimensions (axes 1, 2 = h, w) to make a 3x3
# 'valid' convolution produce an output with the same dimension as the
# input.
spatial_pad_1 = np.array([[0, 0], [1, 1], [1, 1], [0, 0]])
if method == 'nn_upsample_conv':
net = tf.image.resize(
net, [stride[0] * height, stride[1] * width],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
net = tf.pad(tensor=net, paddings=spatial_pad_1, mode=pad_mode)
net = slim.conv2d(net, num_outputs, kernel_size=[3, 3], padding='valid')
elif method == 'bilinear_upsample_conv':
net = tf.image.resize_bilinear(
net, [stride[0] * height, stride[1] * width],
align_corners=align_corners)
net = tf.pad(tensor=net, paddings=spatial_pad_1, mode=pad_mode)
net = slim.conv2d(net, num_outputs, kernel_size=[3, 3], padding='valid')
elif method == 'conv2d_transpose':
# This corrects 1 pixel offset for images with even width and height.
# conv2d is left aligned and conv2d_transpose is right aligned for even
# sized images (while doing 'SAME' padding).
# Note: This doesn't reflect actual model in paper.
net = slim.conv2d_transpose(
net, num_outputs, kernel_size=[3, 3], stride=stride, padding='valid')
net = net[:, 1:, 1:, :]
else:
raise ValueError('Unknown method: [%s]' % method)
return net
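# Illustrative sketch (not part of the original file): doubling the spatial
# resolution of a feature map with the nearest-neighbor variant. The output
# below has shape [1, 128, 128, 64].
def _example_upsample():  # hypothetical helper, for illustration
  feat = tf.placeholder(tf.float32, [1, 64, 64, 128])
  return cyclegan_upsample(feat, num_outputs=64, stride=[2, 2],
                           method='nn_upsample_conv')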
def _dynamic_or_static_shape(tensor):
shape = tf.shape(input=tensor)
static_shape = tensor_util.constant_value(shape)
return static_shape if static_shape is not None else shape
def cyclegan_generator_resnet(images,
arg_scope_fn=cyclegan_arg_scope,
num_resnet_blocks=6,
num_filters=64,
upsample_fn=cyclegan_upsample,
kernel_size=3,
tanh_linear_slope=0.0,
is_training=False):
"""Defines the cyclegan resnet network architecture.
As closely as possible following
https://github.com/junyanz/CycleGAN/blob/master/models/architectures.lua#L232
FYI: This network requires input height and width to be divisible by 4 in
order to generate an output with shape equal to input shape. Assertions will
catch this if input dimensions are known at graph construction time, but
there's no protection if unknown at graph construction time (you'll see an
error).
Args:
images: Input image tensor of shape [batch_size, h, w, 3].
arg_scope_fn: Function to create the global arg_scope for the network.
num_resnet_blocks: Number of ResNet blocks in the middle of the generator.
num_filters: Number of filters of the first hidden layer.
upsample_fn: Upsampling function for the decoder part of the generator.
kernel_size: Size w or list/tuple [h, w] of the filter kernels for all inner
layers.
tanh_linear_slope: Slope of the linear function to add to the tanh over the
logits.
is_training: Whether the network is created in training mode or inference
only mode. Not actually needed, just for compliance with other generator
network functions.
Returns:
A `Tensor` representing the model output and a dictionary of model end
points.
Raises:
ValueError: If the input height or width is known at graph construction time
and not a multiple of 4.
"""
  # Neither dropout nor batch norm is used -> is_training is not needed.
del is_training
end_points = {}
input_size = images.shape.as_list()
height, width = input_size[1], input_size[2]
if height and height % 4 != 0:
raise ValueError('The input height must be a multiple of 4.')
if width and width % 4 != 0:
raise ValueError('The input width must be a multiple of 4.')
num_outputs = input_size[3]
if not isinstance(kernel_size, (list, tuple)):
kernel_size = [kernel_size, kernel_size]
kernel_height = kernel_size[0]
kernel_width = kernel_size[1]
pad_top = (kernel_height - 1) // 2
pad_bottom = kernel_height // 2
pad_left = (kernel_width - 1) // 2
pad_right = kernel_width // 2
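  # For the default kernel_size=3 this gives pad_top = pad_bottom = pad_left =
  # pad_right = 1, so each REFLECT pad below restores exactly the spatial size
  # that a 'VALID' convolution with this kernel removes.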
paddings = np.array(
[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]],
dtype=np.int32)
spatial_pad_3 = np.array([[0, 0], [3, 3], [3, 3], [0, 0]])
with slim.arg_scope(arg_scope_fn()):
###########
# Encoder #
###########
with tf.variable_scope('input'):
# 7x7 input stage
net = tf.pad(tensor=images, paddings=spatial_pad_3, mode='REFLECT')
net = slim.conv2d(net, num_filters, kernel_size=[7, 7], padding='VALID')
end_points['encoder_0'] = net
with tf.variable_scope('encoder'):
with slim.arg_scope([slim.conv2d],
kernel_size=kernel_size,
stride=2,
activation_fn=tf.nn.relu,
padding='VALID'):
net = tf.pad(tensor=net, paddings=paddings, mode='REFLECT')
net = slim.conv2d(net, num_filters * 2)
end_points['encoder_1'] = net
net = tf.pad(tensor=net, paddings=paddings, mode='REFLECT')
net = slim.conv2d(net, num_filters * 4)
end_points['encoder_2'] = net
###################
# Residual Blocks #
###################
with tf.variable_scope('residual_blocks'):
with slim.arg_scope([slim.conv2d],
kernel_size=kernel_size,
stride=1,
activation_fn=tf.nn.relu,
padding='VALID'):
for block_id in xrange(num_resnet_blocks):
with tf.variable_scope('block_{}'.format(block_id)):
res_net = tf.pad(tensor=net, paddings=paddings, mode='REFLECT')
res_net = slim.conv2d(res_net, num_filters * 4)
res_net = tf.pad(tensor=res_net, paddings=paddings, mode='REFLECT')
res_net = slim.conv2d(res_net, num_filters * 4, activation_fn=None)
net += res_net
end_points['resnet_block_%d' % block_id] = net
###########
# Decoder #
###########
with tf.variable_scope('decoder'):
with slim.arg_scope([slim.conv2d],
kernel_size=kernel_size,
stride=1,
activation_fn=tf.nn.relu):
with tf.variable_scope('decoder1'):
net = upsample_fn(net, num_outputs=num_filters * 2, stride=[2, 2])
end_points['decoder1'] = net
with tf.variable_scope('decoder2'):
net = upsample_fn(net, num_outputs=num_filters, stride=[2, 2])
end_points['decoder2'] = net
with tf.variable_scope('output'):
net = tf.pad(tensor=net, paddings=spatial_pad_3, mode='REFLECT')
logits = slim.conv2d(
net,
num_outputs, [7, 7],
activation_fn=None,
normalizer_fn=None,
          padding='VALID')
logits = tf.reshape(logits, _dynamic_or_static_shape(images))
end_points['logits'] = logits
end_points['predictions'] = tf.tanh(logits) + logits * tanh_linear_slope
return end_points['predictions'], end_points | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/cyclegan.py | cyclegan.py |
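# Example usage (a minimal sketch; `cyclegan_generator_resnet` is the name
# this generator is assumed to be exported under -- treat the name as an
# assumption if the enclosing definition differs):
#
#   import tensorflow.compat.v1 as tf
#   images = tf.placeholder(tf.float32, [None, 128, 128, 3])  # H, W % 4 == 0
#   output, end_points = cyclegan_generator_resnet(images)
#   # `output` has the same shape as `images`; intermediates are exposed via
#   # `end_points`, e.g. end_points['encoder_2'] or end_points['resnet_block_0'].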
"""Contains the definition for inception v3 classification network."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import inception_utils
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def inception_v3_base(inputs,
final_endpoint='Mixed_7c',
min_depth=16,
depth_multiplier=1.0,
scope=None):
"""Inception model from http://arxiv.org/abs/1512.00567.
Constructs an Inception v3 network from inputs to the given final endpoint.
This method can construct the network up to the final inception block
Mixed_7c.
Note that the names of the layers in the paper do not correspond to the names
of the endpoints registered by this function although they build the same
network.
Here is a mapping from the old_names to the new names:
Old name | New name
=======================================
conv0 | Conv2d_1a_3x3
conv1 | Conv2d_2a_3x3
conv2 | Conv2d_2b_3x3
pool1 | MaxPool_3a_3x3
conv3 | Conv2d_3b_1x1
conv4 | Conv2d_4a_3x3
pool2 | MaxPool_5a_3x3
mixed_35x35x256a | Mixed_5b
mixed_35x35x288a | Mixed_5c
mixed_35x35x288b | Mixed_5d
mixed_17x17x768a | Mixed_6a
mixed_17x17x768b | Mixed_6b
mixed_17x17x768c | Mixed_6c
mixed_17x17x768d | Mixed_6d
mixed_17x17x768e | Mixed_6e
mixed_8x8x1280a | Mixed_7a
mixed_8x8x2048a | Mixed_7b
mixed_8x8x2048b | Mixed_7c
Args:
inputs: a tensor of size [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3', 'MaxPool_5a_3x3',
'Mixed_5b', 'Mixed_5c', 'Mixed_5d', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c',
'Mixed_6d', 'Mixed_6e', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c'].
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
scope: Optional variable_scope.
Returns:
tensor_out: output tensor corresponding to the final_endpoint.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or depth_multiplier <= 0
"""
# end_points will collect relevant activations for external use, for example
# summaries or losses.
end_points = {}
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
depth = lambda d: max(int(d * depth_multiplier), min_depth)
with tf.variable_scope(scope, 'InceptionV3', [inputs]):
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='VALID'):
# 299 x 299 x 3
end_point = 'Conv2d_1a_3x3'
net = slim.conv2d(inputs, depth(32), [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 149 x 149 x 32
end_point = 'Conv2d_2a_3x3'
net = slim.conv2d(net, depth(32), [3, 3], scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 147 x 147 x 32
end_point = 'Conv2d_2b_3x3'
net = slim.conv2d(net, depth(64), [3, 3], padding='SAME', scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 147 x 147 x 64
end_point = 'MaxPool_3a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 73 x 73 x 64
end_point = 'Conv2d_3b_1x1'
net = slim.conv2d(net, depth(80), [1, 1], scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 73 x 73 x 80.
end_point = 'Conv2d_4a_3x3'
net = slim.conv2d(net, depth(192), [3, 3], scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 71 x 71 x 192.
end_point = 'MaxPool_5a_3x3'
net = slim.max_pool2d(net, [3, 3], stride=2, scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# 35 x 35 x 192.
# Inception blocks
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# mixed: 35 x 35 x 256.
end_point = 'Mixed_5b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(64), [5, 5],
scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(32), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_1: 35 x 35 x 288.
end_point = 'Mixed_5c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0b_1x1')
branch_1 = slim.conv2d(branch_1, depth(64), [5, 5],
scope='Conv_1_0c_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(64), [1, 1],
scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(64), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_2: 35 x 35 x 288.
end_point = 'Mixed_5d'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(64), [5, 5],
scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(64), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_3: 17 x 17 x 768.
end_point = 'Mixed_6a'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(384), [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(96), [3, 3],
scope='Conv2d_0b_3x3')
branch_1 = slim.conv2d(branch_1, depth(96), [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_1x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed4: 17 x 17 x 768.
end_point = 'Mixed_6b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(128), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(128), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(128), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(128), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(128), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(128), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_5: 17 x 17 x 768.
end_point = 'Mixed_6c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(160), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(160), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_6: 17 x 17 x 768.
end_point = 'Mixed_6d'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(160), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(160), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_7: 17 x 17 x 768.
end_point = 'Mixed_6e'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(192), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(192), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(192), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_8: 8 x 8 x 1280.
end_point = 'Mixed_7a'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
branch_0 = slim.conv2d(branch_0, depth(320), [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(192), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
branch_1 = slim.conv2d(branch_1, depth(192), [3, 3], stride=2,
padding='VALID', scope='Conv2d_1a_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
scope='MaxPool_1a_3x3')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_9: 8 x 8 x 2048.
end_point = 'Mixed_7b'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(320), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(384), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = tf.concat(axis=3, values=[
slim.conv2d(branch_1, depth(384), [1, 3], scope='Conv2d_0b_1x3'),
slim.conv2d(branch_1, depth(384), [3, 1], scope='Conv2d_0b_3x1')])
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(448), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(
branch_2, depth(384), [3, 3], scope='Conv2d_0b_3x3')
branch_2 = tf.concat(axis=3, values=[
slim.conv2d(branch_2, depth(384), [1, 3], scope='Conv2d_0c_1x3'),
slim.conv2d(branch_2, depth(384), [3, 1], scope='Conv2d_0d_3x1')])
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(192), [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
# mixed_10: 8 x 8 x 2048.
end_point = 'Mixed_7c'
with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(320), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(384), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = tf.concat(axis=3, values=[
slim.conv2d(branch_1, depth(384), [1, 3], scope='Conv2d_0b_1x3'),
slim.conv2d(branch_1, depth(384), [3, 1], scope='Conv2d_0c_3x1')])
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(448), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(
branch_2, depth(384), [3, 3], scope='Conv2d_0b_3x3')
branch_2 = tf.concat(axis=3, values=[
slim.conv2d(branch_2, depth(384), [1, 3], scope='Conv2d_0c_1x3'),
slim.conv2d(branch_2, depth(384), [3, 1], scope='Conv2d_0d_3x1')])
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(
branch_3, depth(192), [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
end_points[end_point] = net
if end_point == final_endpoint: return net, end_points
raise ValueError('Unknown final endpoint %s' % final_endpoint)
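# Example: constructing the network only up to an intermediate endpoint (a
# minimal sketch; the shapes follow the layer comments above for 299x299
# inputs):
#
#   inputs = tf.placeholder(tf.float32, [None, 299, 299, 3])
#   net, end_points = inception_v3_base(inputs, final_endpoint='Mixed_6e')
#   # `net` is the 17x17x768 'Mixed_6e' activation; all earlier endpoints are
#   # also available, e.g. end_points['Conv2d_1a_3x3'] (149x149x32).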
def inception_v3(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.8,
min_depth=16,
depth_multiplier=1.0,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
create_aux_logits=True,
scope='InceptionV3',
global_pool=False):
"""Inception model from http://arxiv.org/abs/1512.00567.
"Rethinking the Inception Architecture for Computer Vision"
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens,
Zbigniew Wojna.
With the default arguments this method constructs the exact model defined in
the paper. However, one can experiment with variations of the inception_v3
network by changing arguments dropout_keep_prob, min_depth and
depth_multiplier.
The default image size used to train this network is 299x299.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
is_training: whether is training or not.
dropout_keep_prob: the percentage of activation values that are retained.
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
prediction_fn: a function to get predictions out of logits.
spatial_squeeze: if True, logits is of shape [B, C], if false logits is of
shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
create_aux_logits: Whether to create the auxiliary logits.
scope: Optional variable_scope.
global_pool: Optional boolean flag to control the avgpooling before the
logits layer. If false or unset, pooling is done with a fixed window
that reduces default-sized inputs to 1x1, while larger inputs lead to
larger outputs. If true, any input size is pooled down to 1x1.
Returns:
net: a Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: if 'depth_multiplier' is less than or equal to zero.
"""
if depth_multiplier <= 0:
raise ValueError('depth_multiplier is not greater than zero.')
depth = lambda d: max(int(d * depth_multiplier), min_depth)
with tf.variable_scope(
scope, 'InceptionV3', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v3_base(
inputs, scope=scope, min_depth=min_depth,
depth_multiplier=depth_multiplier)
# Auxiliary Head logits
if create_aux_logits and num_classes:
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
aux_logits = end_points['Mixed_6e']
with tf.variable_scope('AuxLogits'):
aux_logits = slim.avg_pool2d(
aux_logits, [5, 5], stride=3, padding='VALID',
scope='AvgPool_1a_5x5')
aux_logits = slim.conv2d(aux_logits, depth(128), [1, 1],
scope='Conv2d_1b_1x1')
# Shape of feature map before the final layer.
kernel_size = _reduced_kernel_size_for_small_input(
aux_logits, [5, 5])
aux_logits = slim.conv2d(
aux_logits, depth(768), kernel_size,
weights_initializer=trunc_normal(0.01),
padding='VALID', scope='Conv2d_2a_{}x{}'.format(*kernel_size))
aux_logits = slim.conv2d(
aux_logits, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, weights_initializer=trunc_normal(0.001),
scope='Conv2d_2b_1x1')
if spatial_squeeze:
aux_logits = tf.squeeze(aux_logits, [1, 2], name='SpatialSqueeze')
end_points['AuxLogits'] = aux_logits
# Final pooling and prediction
with tf.variable_scope('Logits'):
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='GlobalPool')
end_points['global_pool'] = net
else:
# Pooling with a fixed kernel size.
kernel_size = _reduced_kernel_size_for_small_input(net, [8, 8])
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a_{}x{}'.format(*kernel_size))
end_points['AvgPool_1a'] = net
if not num_classes:
return net, end_points
# 1 x 1 x 2048
net = slim.dropout(net, keep_prob=dropout_keep_prob, scope='Dropout_1b')
end_points['PreLogits'] = net
# 2048
logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='Conv2d_1c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
# 1000
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
inception_v3.default_image_size = 299
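# Example usage (a minimal sketch, using the arg scope aliased at the bottom
# of this file):
#
#   inputs = tf.placeholder(tf.float32, [None, 299, 299, 3])
#   with slim.arg_scope(inception_v3_arg_scope()):
#     logits, end_points = inception_v3(inputs, num_classes=1000,
#                                       is_training=False)
#   # Pass num_classes=0 instead to obtain the 1x1x2048 features that would
#   # otherwise feed the logits layer.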
def _reduced_kernel_size_for_small_input(input_tensor, kernel_size):
"""Define kernel size which is automatically reduced for small input.
  If the shape of the input images is unknown at graph construction time, this
  function assumes that the input images are large enough.
Args:
input_tensor: input tensor of size [batch_size, height, width, channels].
kernel_size: desired kernel size of length 2: [kernel_height, kernel_width]
Returns:
    a list with the kernel size.
TODO(jrru): Make this function work with unknown shapes. Theoretically, this
can be done with the code below. Problems are two-fold: (1) If the shape was
known, it will be lost. (2) inception.slim.ops._two_element_tuple cannot
handle tensors that define the kernel size.
shape = tf.shape(input_tensor)
  return tf.stack([tf.minimum(shape[1], kernel_size[0]),
                   tf.minimum(shape[2], kernel_size[1])])
"""
shape = input_tensor.get_shape().as_list()
if shape[1] is None or shape[2] is None:
kernel_size_out = kernel_size
else:
kernel_size_out = [min(shape[1], kernel_size[0]),
min(shape[2], kernel_size[1])]
return kernel_size_out
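# For example, a feature map of shape [None, 5, 5, 2048] with a desired
# kernel of [8, 8] yields a reduced kernel of [5, 5]; if the spatial shape is
# unknown at graph construction time, the desired [8, 8] is returned as-is.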
inception_v3_arg_scope = inception_utils.inception_arg_scope | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/inception_v3.py | inception_v3.py |
"""Contains a variant of the LeNet model definition."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
def lenet(images, num_classes=10, is_training=False,
dropout_keep_prob=0.5,
prediction_fn=slim.softmax,
scope='LeNet'):
"""Creates a variant of the LeNet model.
Note that since the output is a set of 'logits', the values fall in the
interval of (-infinity, infinity). Consequently, to convert the outputs to a
probability distribution over the characters, one will need to convert them
using the softmax function:
        logits, end_points = lenet.lenet(images, is_training=False)
probabilities = tf.nn.softmax(logits)
predictions = tf.argmax(logits, 1)
Args:
images: A batch of `Tensors` of size [batch_size, height, width, channels].
num_classes: the number of classes in the dataset. If 0 or None, the logits
layer is omitted and the input features to the logits layer are returned
instead.
is_training: specifies whether or not we're currently training the model.
This variable will determine the behaviour of the dropout layer.
dropout_keep_prob: the percentage of activation values that are retained.
prediction_fn: a function to get predictions out of logits.
scope: Optional variable_scope.
Returns:
net: a 2D Tensor with the logits (pre-softmax activations) if num_classes
      is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
"""
end_points = {}
with tf.variable_scope(scope, 'LeNet', [images]):
net = end_points['conv1'] = slim.conv2d(images, 32, [5, 5], scope='conv1')
net = end_points['pool1'] = slim.max_pool2d(net, [2, 2], 2, scope='pool1')
net = end_points['conv2'] = slim.conv2d(net, 64, [5, 5], scope='conv2')
net = end_points['pool2'] = slim.max_pool2d(net, [2, 2], 2, scope='pool2')
net = slim.flatten(net)
end_points['Flatten'] = net
net = end_points['fc3'] = slim.fully_connected(net, 1024, scope='fc3')
if not num_classes:
return net, end_points
net = end_points['dropout3'] = slim.dropout(
net, dropout_keep_prob, is_training=is_training, scope='dropout3')
logits = end_points['Logits'] = slim.fully_connected(
net, num_classes, activation_fn=None, scope='fc4')
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
lenet.default_image_size = 28
def lenet_arg_scope(weight_decay=0.0):
"""Defines the default lenet argument scope.
Args:
weight_decay: The weight decay to use for regularizing the model.
Returns:
    An `arg_scope` to use for the lenet model.
"""
with slim.arg_scope(
[slim.conv2d, slim.fully_connected],
weights_regularizer=slim.l2_regularizer(weight_decay),
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
activation_fn=tf.nn.relu) as sc:
return sc | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/lenet.py | lenet.py |
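# Example usage (a minimal sketch, combining the arg scope with the model):
#
#   images = tf.placeholder(tf.float32, [None, 28, 28, 1])
#   with slim.arg_scope(lenet_arg_scope(weight_decay=0.0004)):
#     logits, end_points = lenet(images, num_classes=10, is_training=True)
#   # Class probabilities are exposed as end_points['Predictions'].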
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import resnet_utils
resnet_arg_scope = resnet_utils.resnet_arg_scope
class NoOpScope(object):
"""No-op context manager."""
def __enter__(self):
return None
def __exit__(self, exc_type, exc_value, traceback):
return False
@slim.add_arg_scope
def bottleneck(inputs,
depth,
depth_bottleneck,
stride,
rate=1,
outputs_collections=None,
scope=None,
use_bounded_activations=False):
"""Bottleneck residual unit variant with BN after convolutions.
This is the original residual unit proposed in [1]. See Fig. 1(a) of [2] for
its definition. Note that we use here the bottleneck variant which has an
extra bottleneck layer.
When putting together two consecutive ResNet blocks that use this unit, one
should use stride = 2 in the last unit of the first block.
Args:
inputs: A tensor of size [batch, height, width, channels].
depth: The depth of the ResNet unit output.
depth_bottleneck: The depth of the bottleneck layers.
stride: The ResNet unit's stride. Determines the amount of downsampling of
the units output compared to its input.
rate: An integer, rate for atrous convolution.
outputs_collections: Collection to add the ResNet unit output.
scope: Optional variable_scope.
use_bounded_activations: Whether or not to use bounded activations. Bounded
activations better lend themselves to quantized inference.
Returns:
The ResNet unit's output.
"""
with tf.variable_scope(scope, 'bottleneck_v1', [inputs]) as sc:
depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
if depth == depth_in:
shortcut = resnet_utils.subsample(inputs, stride, 'shortcut')
else:
shortcut = slim.conv2d(
inputs,
depth, [1, 1],
stride=stride,
activation_fn=tf.nn.relu6 if use_bounded_activations else None,
scope='shortcut')
residual = slim.conv2d(inputs, depth_bottleneck, [1, 1], stride=1,
scope='conv1')
residual = resnet_utils.conv2d_same(residual, depth_bottleneck, 3, stride,
rate=rate, scope='conv2')
residual = slim.conv2d(residual, depth, [1, 1], stride=1,
activation_fn=None, scope='conv3')
if use_bounded_activations:
# Use clip_by_value to simulate bandpass activation.
residual = tf.clip_by_value(residual, -6.0, 6.0)
output = tf.nn.relu6(shortcut + residual)
else:
output = tf.nn.relu(shortcut + residual)
return slim.utils.collect_named_outputs(outputs_collections,
sc.name,
output)
def resnet_v1(inputs,
blocks,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
include_root_block=True,
spatial_squeeze=True,
store_non_strided_activations=False,
reuse=None,
scope=None):
"""Generator for v1 ResNet models.
This function generates a family of ResNet v1 models. See the resnet_v1_*()
methods for specific model instantiations, obtained by selecting different
block instantiations that produce ResNets of various depths.
Training for image classification on Imagenet is usually done with [224, 224]
inputs, resulting in [7, 7] feature maps at the output of the last ResNet
block for the ResNets defined in [1] that have nominal stride equal to 32.
However, for dense prediction tasks we advise that one uses inputs with
spatial dimensions that are multiples of 32 plus 1, e.g., [321, 321]. In
this case the feature maps at the ResNet output will have spatial shape
[(height - 1) / output_stride + 1, (width - 1) / output_stride + 1]
and corners exactly aligned with the input image corners, which greatly
facilitates alignment of the features to the image. Using as input [225, 225]
images results in [8, 8] feature maps at the output of the last ResNet block.
For dense prediction tasks, the ResNet needs to run in fully-convolutional
(FCN) mode and global_pool needs to be set to False. The ResNets in [1, 2] all
have nominal stride equal to 32 and a good choice in FCN mode is to use
output_stride=16 in order to increase the density of the computed features at
small computational and memory overhead, cf. http://arxiv.org/abs/1606.00915.
Args:
inputs: A tensor of size [batch, height_in, width_in, channels].
blocks: A list of length equal to the number of ResNet blocks. Each element
is a resnet_utils.Block object describing the units in the block.
num_classes: Number of predicted classes for classification tasks.
If 0 or None, we return the features before the logit layer.
is_training: whether batch_norm layers are in training mode. If this is set
to None, the callers can specify slim.batch_norm's is_training parameter
from an outer slim.arg_scope.
global_pool: If True, we perform global average pooling before computing the
logits. Set to True for image classification, False for dense prediction.
output_stride: If None, then the output will be computed at the nominal
network stride. If output_stride is not None, it specifies the requested
ratio of input to output spatial resolution.
include_root_block: If True, include the initial convolution followed by
max-pooling, if False excludes it.
spatial_squeeze: if True, logits is of shape [B, C], if false logits is
of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
To use this parameter, the input images must be smaller than 300x300
pixels, in which case the output logit layer does not contain spatial
information and can be removed.
store_non_strided_activations: If True, we compute non-strided (undecimated)
activations at the last unit of each block and store them in the
`outputs_collections` before subsampling them. This gives us access to
higher resolution intermediate activations which are useful in some
      dense prediction problems, but increases the computation and memory cost
      at the last unit of each block by 4x.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
Returns:
net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
If global_pool is False, then height_out and width_out are reduced by a
factor of output_stride compared to the respective height_in and width_in,
else both height_out and width_out equal one. If num_classes is 0 or None,
then net is the output of the last ResNet block, potentially after global
      average pooling. If num_classes is a non-zero integer, net contains the
pre-softmax activations.
end_points: A dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: If the target output_stride is not valid.
"""
with tf.variable_scope(
scope, 'resnet_v1', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
with slim.arg_scope([slim.conv2d, bottleneck,
resnet_utils.stack_blocks_dense],
outputs_collections=end_points_collection):
with (slim.arg_scope([slim.batch_norm], is_training=is_training)
if is_training is not None else NoOpScope()):
net = inputs
if include_root_block:
if output_stride is not None:
if output_stride % 4 != 0:
raise ValueError('The output_stride needs to be a multiple of 4.')
output_stride /= 4
net = resnet_utils.conv2d_same(net, 64, 7, stride=2, scope='conv1')
net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool1')
net = resnet_utils.stack_blocks_dense(net, blocks, output_stride,
store_non_strided_activations)
# Convert end_points_collection into a dictionary of end_points.
end_points = slim.utils.convert_collection_to_dict(
end_points_collection)
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], name='pool5', keepdims=True)
end_points['global_pool'] = net
if num_classes:
net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='logits')
end_points[sc.name + '/logits'] = net
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='SpatialSqueeze')
end_points[sc.name + '/spatial_squeeze'] = net
end_points['predictions'] = slim.softmax(net, scope='predictions')
return net, end_points
resnet_v1.default_image_size = 224
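# Example: fully-convolutional (dense prediction) mode, as described in the
# docstring above (a minimal sketch using the resnet_v1_50 instantiation
# defined below):
#
#   inputs = tf.placeholder(tf.float32, [None, 321, 321, 3])
#   with slim.arg_scope(resnet_arg_scope()):
#     net, end_points = resnet_v1_50(inputs, num_classes=None,
#                                    global_pool=False, output_stride=16)
#   # `net` has spatial shape 21x21, i.e. (321 - 1) / 16 + 1 per side.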
def resnet_v1_block(scope, base_depth, num_units, stride):
"""Helper function for creating a resnet_v1 bottleneck block.
Args:
scope: The scope of the block.
base_depth: The depth of the bottleneck layer for each unit.
num_units: The number of units in the block.
stride: The stride of the block, implemented as a stride in the last unit.
All other units have stride=1.
Returns:
A resnet_v1 bottleneck block.
"""
return resnet_utils.Block(scope, bottleneck, [{
'depth': base_depth * 4,
'depth_bottleneck': base_depth,
'stride': 1
}] * (num_units - 1) + [{
'depth': base_depth * 4,
'depth_bottleneck': base_depth,
'stride': stride
}])
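# For example, resnet_v1_block('block1', base_depth=64, num_units=3, stride=2)
# yields three bottleneck units of output depth 256 (= 64 * 4): the first two
# with stride 1 and the last with stride 2, matching the convention noted in
# `bottleneck` that downsampling happens in the final unit of a block.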
def resnet_v1_50(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
store_non_strided_activations=False,
min_base_depth=8,
depth_multiplier=1,
reuse=None,
scope='resnet_v1_50'):
"""ResNet-50 model of [1]. See resnet_v1() for arg and return description."""
depth_func = lambda d: max(int(d * depth_multiplier), min_base_depth)
blocks = [
resnet_v1_block('block1', base_depth=depth_func(64), num_units=3,
stride=2),
resnet_v1_block('block2', base_depth=depth_func(128), num_units=4,
stride=2),
resnet_v1_block('block3', base_depth=depth_func(256), num_units=6,
stride=2),
resnet_v1_block('block4', base_depth=depth_func(512), num_units=3,
stride=1),
]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
store_non_strided_activations=store_non_strided_activations,
reuse=reuse, scope=scope)
resnet_v1_50.default_image_size = resnet_v1.default_image_size
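# Example: ImageNet-style classification with ResNet-50 (a minimal sketch):
#
#   inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   with slim.arg_scope(resnet_arg_scope()):
#     logits, end_points = resnet_v1_50(inputs, num_classes=1000,
#                                       is_training=False)
#   # With the default spatial_squeeze=True, `logits` has shape [None, 1000].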
def resnet_v1_101(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
store_non_strided_activations=False,
min_base_depth=8,
depth_multiplier=1,
reuse=None,
scope='resnet_v1_101'):
"""ResNet-101 model of [1]. See resnet_v1() for arg and return description."""
depth_func = lambda d: max(int(d * depth_multiplier), min_base_depth)
blocks = [
resnet_v1_block('block1', base_depth=depth_func(64), num_units=3,
stride=2),
resnet_v1_block('block2', base_depth=depth_func(128), num_units=4,
stride=2),
resnet_v1_block('block3', base_depth=depth_func(256), num_units=23,
stride=2),
resnet_v1_block('block4', base_depth=depth_func(512), num_units=3,
stride=1),
]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
store_non_strided_activations=store_non_strided_activations,
reuse=reuse, scope=scope)
resnet_v1_101.default_image_size = resnet_v1.default_image_size
def resnet_v1_152(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
store_non_strided_activations=False,
spatial_squeeze=True,
min_base_depth=8,
depth_multiplier=1,
reuse=None,
scope='resnet_v1_152'):
"""ResNet-152 model of [1]. See resnet_v1() for arg and return description."""
depth_func = lambda d: max(int(d * depth_multiplier), min_base_depth)
blocks = [
resnet_v1_block('block1', base_depth=depth_func(64), num_units=3,
stride=2),
resnet_v1_block('block2', base_depth=depth_func(128), num_units=8,
stride=2),
resnet_v1_block('block3', base_depth=depth_func(256), num_units=36,
stride=2),
resnet_v1_block('block4', base_depth=depth_func(512), num_units=3,
stride=1),
]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
store_non_strided_activations=store_non_strided_activations,
reuse=reuse, scope=scope)
resnet_v1_152.default_image_size = resnet_v1.default_image_size
def resnet_v1_200(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
store_non_strided_activations=False,
spatial_squeeze=True,
min_base_depth=8,
depth_multiplier=1,
reuse=None,
scope='resnet_v1_200'):
"""ResNet-200 model of [2]. See resnet_v1() for arg and return description."""
depth_func = lambda d: max(int(d * depth_multiplier), min_base_depth)
blocks = [
resnet_v1_block('block1', base_depth=depth_func(64), num_units=3,
stride=2),
resnet_v1_block('block2', base_depth=depth_func(128), num_units=24,
stride=2),
resnet_v1_block('block3', base_depth=depth_func(256), num_units=36,
stride=2),
resnet_v1_block('block4', base_depth=depth_func(512), num_units=3,
stride=1),
]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
store_non_strided_activations=store_non_strided_activations,
reuse=reuse, scope=scope)
resnet_v1_200.default_image_size = resnet_v1.default_image_size | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/resnet_v1.py | resnet_v1.py |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
# pylint: disable=g-long-lambda
trunc_normal = lambda stddev: tf.truncated_normal_initializer(
0.0, stddev)
def alexnet_v2_arg_scope(weight_decay=0.0005):
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
biases_initializer=tf.constant_initializer(0.1),
weights_regularizer=slim.l2_regularizer(weight_decay)):
with slim.arg_scope([slim.conv2d], padding='SAME'):
with slim.arg_scope([slim.max_pool2d], padding='VALID') as arg_sc:
return arg_sc
def alexnet_v2(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
scope='alexnet_v2',
global_pool=False):
"""AlexNet version 2.
Described in: http://arxiv.org/pdf/1404.5997v2.pdf
Parameters from:
github.com/akrizhevsky/cuda-convnet2/blob/master/layers/
layers-imagenet-1gpu.cfg
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224 or set
global_pool=True. To use in fully convolutional mode, set
spatial_squeeze to false.
  The LRN layers have been removed and the initializers changed from
  random_normal_initializer to xavier_initializer.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: the number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not to squeeze the spatial dimensions of the
logits. Useful to remove unnecessary dimensions for classification.
scope: Optional scope for the variables.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original AlexNet.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0
or None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'alexnet_v2', [inputs]) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=[end_points_collection]):
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID',
scope='conv1')
net = slim.max_pool2d(net, [3, 3], 2, scope='pool1')
net = slim.conv2d(net, 192, [5, 5], scope='conv2')
net = slim.max_pool2d(net, [3, 3], 2, scope='pool2')
net = slim.conv2d(net, 384, [3, 3], scope='conv3')
net = slim.conv2d(net, 384, [3, 3], scope='conv4')
net = slim.conv2d(net, 256, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [3, 3], 2, scope='pool5')
# Use conv2d instead of fully_connected layers.
with slim.arg_scope(
[slim.conv2d],
weights_initializer=trunc_normal(0.005),
biases_initializer=tf.constant_initializer(0.1)):
net = slim.conv2d(net, 4096, [5, 5], padding='VALID',
scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(
end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(
net,
num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
biases_initializer=tf.zeros_initializer(),
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
alexnet_v2.default_image_size = 224 | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/alexnet.py | alexnet.py |
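# Example usage (a minimal sketch):
#
#   images = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   with slim.arg_scope(alexnet_v2_arg_scope()):
#     logits, end_points = alexnet_v2(images, num_classes=1000,
#                                     is_training=False)
#   # For inputs of other sizes, set global_pool=True so the classification
#   # layer always sees a 1x1 spatial input.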
"""DCGAN generator and discriminator from https://arxiv.org/abs/1511.06434."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from math import log
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow.compat.v1 as tf
import tf_slim as slim
def _validate_image_inputs(inputs):
inputs.get_shape().assert_has_rank(4)
inputs.get_shape()[1:3].assert_is_fully_defined()
if inputs.get_shape()[1] != inputs.get_shape()[2]:
raise ValueError('Input tensor does not have equal width and height: ',
inputs.get_shape()[1:3])
width = inputs.get_shape().as_list()[1]
if log(width, 2) != int(log(width, 2)):
raise ValueError('Input tensor `width` is not a power of 2: ', width)
# TODO(joelshor): Use fused batch norm by default. Investigate why some GAN
# setups need the gradient of gradient FusedBatchNormGrad.
def discriminator(inputs,
depth=64,
is_training=True,
reuse=None,
scope='Discriminator',
fused_batch_norm=False):
"""Discriminator network for DCGAN.
Construct discriminator network from inputs to the final endpoint.
Args:
inputs: A tensor of size [batch_size, height, width, channels]. Must be
floating point.
depth: Number of channels in first convolution layer.
is_training: Whether the network is for training or not.
reuse: Whether or not the network variables should be reused. `scope`
must be given to be reused.
scope: Optional variable_scope.
fused_batch_norm: If `True`, use a faster, fused implementation of
batch norm.
Returns:
logits: The pre-softmax activations, a tensor of size [batch_size, 1]
end_points: a dictionary from components of the network to their activation.
Raises:
ValueError: If the input image shape is not 4-dimensional, if the spatial
dimensions aren't defined at graph construction time, if the spatial
dimensions aren't square, or if the spatial dimensions aren't a power of
two.
"""
normalizer_fn = slim.batch_norm
normalizer_fn_args = {
'is_training': is_training,
'zero_debias_moving_mean': True,
'fused': fused_batch_norm,
}
_validate_image_inputs(inputs)
inp_shape = inputs.get_shape().as_list()[1]
end_points = {}
with tf.variable_scope(
scope, values=[inputs], reuse=reuse) as scope:
with slim.arg_scope([normalizer_fn], **normalizer_fn_args):
with slim.arg_scope([slim.conv2d],
stride=2,
kernel_size=4,
activation_fn=tf.nn.leaky_relu):
net = inputs
for i in xrange(int(log(inp_shape, 2))):
scope = 'conv%i' % (i + 1)
current_depth = depth * 2**i
normalizer_fn_ = None if i == 0 else normalizer_fn
net = slim.conv2d(
net, current_depth, normalizer_fn=normalizer_fn_, scope=scope)
end_points[scope] = net
logits = slim.conv2d(net, 1, kernel_size=1, stride=1, padding='VALID',
normalizer_fn=None, activation_fn=None)
logits = tf.reshape(logits, [-1, 1])
end_points['logits'] = logits
return logits, end_points
# TODO(joelshor): Use fused batch norm by default. Investigate why some GAN
# setups need the gradient of gradient FusedBatchNormGrad.
def generator(inputs,
depth=64,
final_size=32,
num_outputs=3,
is_training=True,
reuse=None,
scope='Generator',
fused_batch_norm=False):
"""Generator network for DCGAN.
Construct generator network from inputs to the final endpoint.
Args:
    inputs: A tensor of shape [batch_size, N], for any N.
depth: Number of channels in last deconvolution layer.
final_size: The shape of the final output.
num_outputs: Number of output features. For images, this is the number of
channels.
is_training: whether is training or not.
    reuse: Whether or not the network variables should be reused. `scope`
      must be given to be reused.
scope: Optional variable_scope.
fused_batch_norm: If `True`, use a faster, fused implementation of
batch norm.
Returns:
    logits: the pre-softmax activations, a tensor of size
      [batch_size, final_size, final_size, num_outputs]
end_points: a dictionary from components of the network to their activation.
Raises:
ValueError: If `inputs` is not 2-dimensional.
ValueError: If `final_size` isn't a power of 2 or is less than 8.
"""
normalizer_fn = slim.batch_norm
normalizer_fn_args = {
'is_training': is_training,
'zero_debias_moving_mean': True,
'fused': fused_batch_norm,
}
inputs.get_shape().assert_has_rank(2)
if log(final_size, 2) != int(log(final_size, 2)):
raise ValueError('`final_size` (%i) must be a power of 2.' % final_size)
if final_size < 8:
    raise ValueError('`final_size` (%i) must be at least 8.' % final_size)
end_points = {}
num_layers = int(log(final_size, 2)) - 1
with tf.variable_scope(
scope, values=[inputs], reuse=reuse) as scope:
with slim.arg_scope([normalizer_fn], **normalizer_fn_args):
with slim.arg_scope([slim.conv2d_transpose],
normalizer_fn=normalizer_fn,
stride=2,
kernel_size=4):
net = tf.expand_dims(tf.expand_dims(inputs, 1), 1)
# First upscaling is different because it takes the input vector.
current_depth = depth * 2 ** (num_layers - 1)
scope = 'deconv1'
net = slim.conv2d_transpose(
net, current_depth, stride=1, padding='VALID', scope=scope)
end_points[scope] = net
for i in xrange(2, num_layers):
scope = 'deconv%i' % (i)
current_depth = depth * 2 ** (num_layers - i)
net = slim.conv2d_transpose(net, current_depth, scope=scope)
end_points[scope] = net
# Last layer has different normalizer and activation.
scope = 'deconv%i' % (num_layers)
net = slim.conv2d_transpose(
net, depth, normalizer_fn=None, activation_fn=None, scope=scope)
end_points[scope] = net
# Convert to proper channels.
scope = 'logits'
logits = slim.conv2d(
net,
num_outputs,
normalizer_fn=None,
activation_fn=None,
kernel_size=1,
stride=1,
padding='VALID',
scope=scope)
end_points[scope] = logits
logits.get_shape().assert_has_rank(4)
logits.get_shape().assert_is_compatible_with(
[None, final_size, final_size, num_outputs])
return logits, end_points | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/dcgan.py | dcgan.py |
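# Example: pairing the generator and discriminator (a minimal sketch):
#
#   noise = tf.random.normal([16, 64])
#   fake_images, _ = generator(noise, final_size=32, num_outputs=3)
#   fake_logits, _ = discriminator(fake_images)
#   # `final_size` must be a power of 2 and at least 8; discriminator inputs
#   # must be square with a power-of-2 side length, per the validations above.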
# Tensorflow mandates these.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import namedtuple
import functools
import tensorflow.compat.v1 as tf
import tf_slim as slim
# Conv and DepthSepConv namedtuple define layers of the MobileNet architecture
# Conv defines 3x3 convolution layers
# DepthSepConv defines 3x3 depthwise convolution followed by 1x1 convolution.
# stride is the stride of the convolution
# depth is the number of channels or filters in a layer
Conv = namedtuple('Conv', ['kernel', 'stride', 'depth'])
DepthSepConv = namedtuple('DepthSepConv', ['kernel', 'stride', 'depth'])
# MOBILENETV1_CONV_DEFS specifies the MobileNet body
MOBILENETV1_CONV_DEFS = [
Conv(kernel=[3, 3], stride=2, depth=32),
DepthSepConv(kernel=[3, 3], stride=1, depth=64),
DepthSepConv(kernel=[3, 3], stride=2, depth=128),
DepthSepConv(kernel=[3, 3], stride=1, depth=128),
DepthSepConv(kernel=[3, 3], stride=2, depth=256),
DepthSepConv(kernel=[3, 3], stride=1, depth=256),
DepthSepConv(kernel=[3, 3], stride=2, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=1, depth=512),
DepthSepConv(kernel=[3, 3], stride=2, depth=1024),
DepthSepConv(kernel=[3, 3], stride=1, depth=1024)
]
def _fixed_padding(inputs, kernel_size, rate=1):
"""Pads the input along the spatial dimensions independently of input size.
Pads the input such that if it was used in a convolution with 'VALID' padding,
the output would have the same dimensions as if the unpadded input was used
in a convolution with 'SAME' padding.
Args:
inputs: A tensor of size [batch, height_in, width_in, channels].
kernel_size: The kernel to be used in the conv2d or max_pool2d operation.
rate: An integer, rate for atrous convolution.
Returns:
output: A tensor of size [batch, height_out, width_out, channels] with the
input, either intact (if kernel_size == 1) or padded (if kernel_size > 1).
"""
kernel_size_effective = [kernel_size[0] + (kernel_size[0] - 1) * (rate - 1),
kernel_size[1] + (kernel_size[1] - 1) * (rate - 1)]
pad_total = [kernel_size_effective[0] - 1, kernel_size_effective[1] - 1]
pad_beg = [pad_total[0] // 2, pad_total[1] // 2]
pad_end = [pad_total[0] - pad_beg[0], pad_total[1] - pad_beg[1]]
padded_inputs = tf.pad(
tensor=inputs,
paddings=[[0, 0], [pad_beg[0], pad_end[0]], [pad_beg[1], pad_end[1]],
[0, 0]])
return padded_inputs
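# For example, with kernel_size=[3, 3] and rate=1, _fixed_padding adds one
# pixel on each spatial side, so a following 3x3 'VALID' convolution (stride 1
# or 2) produces the same output size as 'SAME' padding would -- but with a
# padding amount that does not depend on the input dimensions.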
def mobilenet_v1_base(inputs,
final_endpoint='Conv2d_13_pointwise',
min_depth=8,
depth_multiplier=1.0,
conv_defs=None,
output_stride=None,
use_explicit_padding=False,
scope=None):
"""Mobilenet v1.
Constructs a Mobilenet v1 network from inputs to the given final endpoint.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
final_endpoint: specifies the endpoint to construct the network up to. It
can be one of ['Conv2d_0', 'Conv2d_1_pointwise', 'Conv2d_2_pointwise',
      'Conv2d_3_pointwise', 'Conv2d_4_pointwise', 'Conv2d_5_pointwise',
'Conv2d_6_pointwise', 'Conv2d_7_pointwise', 'Conv2d_8_pointwise',
'Conv2d_9_pointwise', 'Conv2d_10_pointwise', 'Conv2d_11_pointwise',
'Conv2d_12_pointwise', 'Conv2d_13_pointwise'].
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
conv_defs: A list of ConvDef namedtuples specifying the net architecture.
output_stride: An integer that specifies the requested ratio of input to
output spatial resolution. If not None, then we invoke atrous convolution
if necessary to prevent the network from reducing the spatial resolution
of the activation maps. Allowed values are 8 (accurate fully convolutional
mode), 16 (fast fully convolutional mode), 32 (classification mode).
use_explicit_padding: Use 'VALID' padding for convolutions, but prepad
inputs so that the output dimensions are the same as if 'SAME' padding
were used.
scope: Optional variable_scope.
Returns:
tensor_out: output tensor corresponding to the final_endpoint.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: if final_endpoint is not set to one of the predefined values,
or depth_multiplier <= 0, or the target output_stride is not
allowed.
"""
  # Used to find thinned depths for each layer.
  depth = lambda d: max(int(d * depth_multiplier), min_depth)
  end_points = {}
  if depth_multiplier <= 0:
    raise ValueError('depth_multiplier is not greater than zero.')
if conv_defs is None:
conv_defs = MOBILENETV1_CONV_DEFS
if output_stride is not None and output_stride not in [8, 16, 32]:
raise ValueError('Only allowed output_stride values are 8, 16, 32.')
padding = 'SAME'
if use_explicit_padding:
padding = 'VALID'
with tf.variable_scope(scope, 'MobilenetV1', [inputs]):
with slim.arg_scope([slim.conv2d, slim.separable_conv2d], padding=padding):
# The current_stride variable keeps track of the output stride of the
# activations, i.e., the running product of convolution strides up to the
# current network layer. This allows us to invoke atrous convolution
# whenever applying the next convolution would result in the activations
# having output stride larger than the target output_stride.
current_stride = 1
# The atrous convolution rate parameter.
rate = 1
net = inputs
for i, conv_def in enumerate(conv_defs):
end_point_base = 'Conv2d_%d' % i
if output_stride is not None and current_stride == output_stride:
# If we have reached the target output_stride, then we need to employ
# atrous convolution with stride=1 and multiply the atrous rate by the
# current unit's stride for use in subsequent layers.
layer_stride = 1
layer_rate = rate
rate *= conv_def.stride
else:
layer_stride = conv_def.stride
layer_rate = 1
current_stride *= conv_def.stride
if isinstance(conv_def, Conv):
end_point = end_point_base
if use_explicit_padding:
net = _fixed_padding(net, conv_def.kernel)
net = slim.conv2d(net, depth(conv_def.depth), conv_def.kernel,
stride=conv_def.stride,
scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
elif isinstance(conv_def, DepthSepConv):
end_point = end_point_base + '_depthwise'
          # By passing filters=None, separable_conv2d produces only a
          # depthwise convolution layer.
if use_explicit_padding:
net = _fixed_padding(net, conv_def.kernel, layer_rate)
net = slim.separable_conv2d(net, None, conv_def.kernel,
depth_multiplier=1,
stride=layer_stride,
rate=layer_rate,
scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
end_point = end_point_base + '_pointwise'
net = slim.conv2d(net, depth(conv_def.depth), [1, 1],
stride=1,
scope=end_point)
end_points[end_point] = net
if end_point == final_endpoint:
return net, end_points
else:
          # Conv/DepthSepConv namedtuples do not define an `ltype` field, so
          # report the namedtuple's type name instead.
          raise ValueError('Unknown convolution type %s for layer %d'
                           % (type(conv_def).__name__, i))
raise ValueError('Unknown final endpoint %s' % final_endpoint)
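# Example: extracting dense features at a reduced output stride (a minimal
# sketch; atrous convolution keeps the spatial resolution at 1/16 of the
# input instead of the nominal 1/32):
#
#   inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   net, end_points = mobilenet_v1_base(inputs, output_stride=16)
#   # `net` ('Conv2d_13_pointwise') then has spatial shape 14x14.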
def mobilenet_v1(inputs,
num_classes=1000,
dropout_keep_prob=0.999,
is_training=True,
min_depth=8,
depth_multiplier=1.0,
conv_defs=None,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='MobilenetV1',
global_pool=False):
"""Mobilenet v1 model for classification.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
dropout_keep_prob: the percentage of activation values that are retained.
is_training: whether is training or not.
min_depth: Minimum depth value (number of channels) for all convolution ops.
Enforced when depth_multiplier < 1, and not an active constraint when
depth_multiplier >= 1.
depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
conv_defs: A list of ConvDef namedtuples specifying the net architecture.
prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape [B, C]; if False, logits is
of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
global_pool: Optional boolean flag to control the avgpooling before the
logits layer. If false or unset, pooling is done with a fixed window
that reduces default-sized inputs to 1x1, while larger inputs lead to
larger outputs. If true, any input size is pooled down to 1x1.
Returns:
net: a 2D Tensor with the logits (pre-softmax activations) if num_classes
is a non-zero integer, or the non-dropped-out input to the logits layer
if num_classes is 0 or None.
end_points: a dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: Input rank is invalid.
"""
input_shape = inputs.get_shape().as_list()
if len(input_shape) != 4:
raise ValueError('Invalid input tensor rank, expected 4, was: %d' %
len(input_shape))
with tf.variable_scope(
scope, 'MobilenetV1', [inputs], reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = mobilenet_v1_base(inputs, scope=scope,
min_depth=min_depth,
depth_multiplier=depth_multiplier,
conv_defs=conv_defs)
with tf.variable_scope('Logits'):
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
else:
# Pooling with a fixed kernel size.
kernel_size = _reduced_kernel_size_for_small_input(net, [7, 7])
net = slim.avg_pool2d(net, kernel_size, padding='VALID',
scope='AvgPool_1a')
end_points['AvgPool_1a'] = net
if not num_classes:
return net, end_points
# 1 x 1 x 1024
net = slim.dropout(net, keep_prob=dropout_keep_prob, scope='Dropout_1b')
logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='Conv2d_1c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
if prediction_fn:
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
mobilenet_v1.default_image_size = 224
def wrapped_partial(func, *args, **kwargs):
partial_func = functools.partial(func, *args, **kwargs)
functools.update_wrapper(partial_func, func)
return partial_func
mobilenet_v1_075 = wrapped_partial(mobilenet_v1, depth_multiplier=0.75)
mobilenet_v1_050 = wrapped_partial(mobilenet_v1, depth_multiplier=0.50)
mobilenet_v1_025 = wrapped_partial(mobilenet_v1, depth_multiplier=0.25)
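
# Editorial usage sketch (not part of the original file): classification with
# the 0.50 depth-multiplier variant under the default arg scope defined later
# in this file. The input shape and num_classes are illustrative assumptions.
def _example_mobilenet_v1_classifier():
  images = tf.placeholder(tf.float32, [None, 224, 224, 3])
  with slim.arg_scope(mobilenet_v1_arg_scope(is_training=False)):
    logits, end_points = mobilenet_v1_050(
        images, num_classes=1001, is_training=False)
  return logits, end_points
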
def _reduced_kernel_size_for_small_input(input_tensor, kernel_size):
"""Define kernel size which is automatically reduced for small input.
If the shape of the input images is unknown at graph construction time this
function assumes that the input images are large enough.
Args:
input_tensor: input tensor of size [batch_size, height, width, channels].
kernel_size: desired kernel size of length 2: [kernel_height, kernel_width]
Returns:
a tensor with the kernel size.
"""
shape = input_tensor.get_shape().as_list()
if shape[1] is None or shape[2] is None:
kernel_size_out = kernel_size
else:
kernel_size_out = [min(shape[1], kernel_size[0]),
min(shape[2], kernel_size[1])]
return kernel_size_out
def mobilenet_v1_arg_scope(
is_training=True,
weight_decay=0.00004,
stddev=0.09,
regularize_depthwise=False,
batch_norm_decay=0.9997,
batch_norm_epsilon=0.001,
batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS,
normalizer_fn=slim.batch_norm):
"""Defines the default MobilenetV1 arg scope.
Args:
is_training: Whether or not we're training the model. If this is set to
None, the parameter is not added to the batch_norm arg_scope.
weight_decay: The weight decay to use for regularizing the model.
    stddev: The standard deviation of the truncated normal weight initializer.
    regularize_depthwise: Whether or not to apply regularization on depthwise
      weights.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
batch_norm_updates_collections: Collection for the update ops for
batch norm.
normalizer_fn: Normalization function to apply after convolution.
Returns:
An `arg_scope` to use for the mobilenet v1 model.
"""
batch_norm_params = {
'center': True,
'scale': True,
'decay': batch_norm_decay,
'epsilon': batch_norm_epsilon,
'updates_collections': batch_norm_updates_collections,
}
if is_training is not None:
batch_norm_params['is_training'] = is_training
# Set weight_decay for weights in Conv and DepthSepConv layers.
weights_init = tf.truncated_normal_initializer(stddev=stddev)
regularizer = slim.l2_regularizer(weight_decay)
if regularize_depthwise:
depthwise_regularizer = regularizer
else:
depthwise_regularizer = None
with slim.arg_scope([slim.conv2d, slim.separable_conv2d],
weights_initializer=weights_init,
activation_fn=tf.nn.relu6, normalizer_fn=normalizer_fn):
with slim.arg_scope([slim.batch_norm], **batch_norm_params):
with slim.arg_scope([slim.conv2d], weights_regularizer=regularizer):
with slim.arg_scope([slim.separable_conv2d],
weights_regularizer=depthwise_regularizer) as sc:
          return sc

# ---- File boundary: end of slim/nets/mobilenet_v1.py ----
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets import resnet_utils
resnet_arg_scope = resnet_utils.resnet_arg_scope
@slim.add_arg_scope
def bottleneck(inputs, depth, depth_bottleneck, stride, rate=1,
outputs_collections=None, scope=None):
"""Bottleneck residual unit variant with BN before convolutions.
  This is the full preactivation residual unit variant proposed in [2]. See
  Fig. 1(b) of [2] for its definition. Note that here we use the bottleneck
  variant, which has an extra bottleneck layer.
When putting together two consecutive ResNet blocks that use this unit, one
should use stride = 2 in the last unit of the first block.
Args:
inputs: A tensor of size [batch, height, width, channels].
depth: The depth of the ResNet unit output.
depth_bottleneck: The depth of the bottleneck layers.
stride: The ResNet unit's stride. Determines the amount of downsampling of
the units output compared to its input.
rate: An integer, rate for atrous convolution.
outputs_collections: Collection to add the ResNet unit output.
scope: Optional variable_scope.
Returns:
The ResNet unit's output.
"""
with tf.variable_scope(scope, 'bottleneck_v2', [inputs]) as sc:
depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
preact = slim.batch_norm(inputs, activation_fn=tf.nn.relu, scope='preact')
if depth == depth_in:
shortcut = resnet_utils.subsample(inputs, stride, 'shortcut')
else:
shortcut = slim.conv2d(preact, depth, [1, 1], stride=stride,
normalizer_fn=None, activation_fn=None,
scope='shortcut')
residual = slim.conv2d(preact, depth_bottleneck, [1, 1], stride=1,
scope='conv1')
residual = resnet_utils.conv2d_same(residual, depth_bottleneck, 3, stride,
rate=rate, scope='conv2')
residual = slim.conv2d(residual, depth, [1, 1], stride=1,
normalizer_fn=None, activation_fn=None,
scope='conv3')
output = shortcut + residual
return slim.utils.collect_named_outputs(outputs_collections,
sc.name,
output)
def resnet_v2(inputs,
blocks,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
include_root_block=True,
spatial_squeeze=True,
reuse=None,
scope=None):
"""Generator for v2 (preactivation) ResNet models.
This function generates a family of ResNet v2 models. See the resnet_v2_*()
methods for specific model instantiations, obtained by selecting different
block instantiations that produce ResNets of various depths.
Training for image classification on Imagenet is usually done with [224, 224]
inputs, resulting in [7, 7] feature maps at the output of the last ResNet
block for the ResNets defined in [1] that have nominal stride equal to 32.
However, for dense prediction tasks we advise that one uses inputs with
spatial dimensions that are multiples of 32 plus 1, e.g., [321, 321]. In
this case the feature maps at the ResNet output will have spatial shape
[(height - 1) / output_stride + 1, (width - 1) / output_stride + 1]
and corners exactly aligned with the input image corners, which greatly
facilitates alignment of the features to the image. Using as input [225, 225]
images results in [8, 8] feature maps at the output of the last ResNet block.
For dense prediction tasks, the ResNet needs to run in fully-convolutional
(FCN) mode and global_pool needs to be set to False. The ResNets in [1, 2] all
have nominal stride equal to 32 and a good choice in FCN mode is to use
output_stride=16 in order to increase the density of the computed features at
small computational and memory overhead, cf. http://arxiv.org/abs/1606.00915.
Args:
inputs: A tensor of size [batch, height_in, width_in, channels].
blocks: A list of length equal to the number of ResNet blocks. Each element
is a resnet_utils.Block object describing the units in the block.
num_classes: Number of predicted classes for classification tasks.
If 0 or None, we return the features before the logit layer.
is_training: whether batch_norm layers are in training mode.
global_pool: If True, we perform global average pooling before computing the
logits. Set to True for image classification, False for dense prediction.
output_stride: If None, then the output will be computed at the nominal
network stride. If output_stride is not None, it specifies the requested
ratio of input to output spatial resolution.
include_root_block: If True, include the initial convolution followed by
max-pooling, if False excludes it. If excluded, `inputs` should be the
results of an activation-less convolution.
spatial_squeeze: if True, logits is of shape [B, C], if false logits is
of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
To use this parameter, the input images must be smaller than 300x300
pixels, in which case the output logit layer does not contain spatial
information and can be removed.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
Returns:
net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
If global_pool is False, then height_out and width_out are reduced by a
factor of output_stride compared to the respective height_in and width_in,
else both height_out and width_out equal one. If num_classes is 0 or None,
then net is the output of the last ResNet block, potentially after global
average pooling. If num_classes is a non-zero integer, net contains the
pre-softmax activations.
end_points: A dictionary from components of the network to the corresponding
activation.
Raises:
ValueError: If the target output_stride is not valid.
"""
with tf.variable_scope(
scope, 'resnet_v2', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
with slim.arg_scope([slim.conv2d, bottleneck,
resnet_utils.stack_blocks_dense],
outputs_collections=end_points_collection):
with slim.arg_scope([slim.batch_norm], is_training=is_training):
net = inputs
if include_root_block:
if output_stride is not None:
if output_stride % 4 != 0:
raise ValueError('The output_stride needs to be a multiple of 4.')
output_stride /= 4
# We do not include batch normalization or activation functions in
# conv1 because the first ResNet unit will perform these. Cf.
# Appendix of [2].
with slim.arg_scope([slim.conv2d],
activation_fn=None, normalizer_fn=None):
net = resnet_utils.conv2d_same(net, 64, 7, stride=2, scope='conv1')
net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool1')
net = resnet_utils.stack_blocks_dense(net, blocks, output_stride)
# This is needed because the pre-activation variant does not have batch
# normalization or activation functions in the residual unit output. See
# Appendix of [2].
net = slim.batch_norm(net, activation_fn=tf.nn.relu, scope='postnorm')
# Convert end_points_collection into a dictionary of end_points.
end_points = slim.utils.convert_collection_to_dict(
end_points_collection)
if global_pool:
# Global average pooling.
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], name='pool5', keepdims=True)
end_points['global_pool'] = net
if num_classes:
net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, scope='logits')
end_points[sc.name + '/logits'] = net
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='SpatialSqueeze')
end_points[sc.name + '/spatial_squeeze'] = net
end_points['predictions'] = slim.softmax(net, scope='predictions')
return net, end_points
resnet_v2.default_image_size = 224
def resnet_v2_block(scope, base_depth, num_units, stride):
"""Helper function for creating a resnet_v2 bottleneck block.
Args:
scope: The scope of the block.
base_depth: The depth of the bottleneck layer for each unit.
num_units: The number of units in the block.
stride: The stride of the block, implemented as a stride in the last unit.
All other units have stride=1.
Returns:
A resnet_v2 bottleneck block.
"""
return resnet_utils.Block(scope, bottleneck, [{
'depth': base_depth * 4,
'depth_bottleneck': base_depth,
'stride': 1
}] * (num_units - 1) + [{
'depth': base_depth * 4,
'depth_bottleneck': base_depth,
'stride': stride
}])
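
# Editorial usage sketch (not part of the original file): a small custom
# ResNet v2 assembled from resnet_v2_block; depths, unit counts, and the
# input shape are illustrative assumptions.
def _example_tiny_resnet_v2():
  images = tf.placeholder(tf.float32, [1, 224, 224, 3])
  blocks = [
      resnet_v2_block('block1', base_depth=16, num_units=2, stride=2),
      resnet_v2_block('block2', base_depth=32, num_units=2, stride=1),
  ]
  with slim.arg_scope(resnet_arg_scope()):
    net, end_points = resnet_v2(images, blocks, num_classes=10,
                                is_training=False, scope='tiny_resnet_v2')
  return net, end_points
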
def resnet_v2_50(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_50'):
"""ResNet-50 model of [1]. See resnet_v2() for arg and return description."""
blocks = [
resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v2_block('block2', base_depth=128, num_units=4, stride=2),
resnet_v2_block('block3', base_depth=256, num_units=6, stride=2),
resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
return resnet_v2(inputs, blocks, num_classes, is_training=is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
reuse=reuse, scope=scope)
resnet_v2_50.default_image_size = resnet_v2.default_image_size
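
# Editorial usage sketch (not part of the original file): ResNet-50 v2 in
# dense-prediction (FCN) mode, following the advice in the resnet_v2
# docstring; the 321x321 input is a "multiple of 32 plus 1".
def _example_resnet_v2_50_dense():
  images = tf.placeholder(tf.float32, [1, 321, 321, 3])
  with slim.arg_scope(resnet_arg_scope()):
    net, _ = resnet_v2_50(images, num_classes=None, global_pool=False,
                          output_stride=16)
  # net has spatial shape (321 - 1) / 16 + 1 = 21, i.e. [1, 21, 21, 2048].
  return net
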
def resnet_v2_101(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_101'):
"""ResNet-101 model of [1]. See resnet_v2() for arg and return description."""
blocks = [
resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v2_block('block2', base_depth=128, num_units=4, stride=2),
resnet_v2_block('block3', base_depth=256, num_units=23, stride=2),
resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
return resnet_v2(inputs, blocks, num_classes, is_training=is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
reuse=reuse, scope=scope)
resnet_v2_101.default_image_size = resnet_v2.default_image_size
def resnet_v2_152(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_152'):
"""ResNet-152 model of [1]. See resnet_v2() for arg and return description."""
blocks = [
resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v2_block('block2', base_depth=128, num_units=8, stride=2),
resnet_v2_block('block3', base_depth=256, num_units=36, stride=2),
resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
return resnet_v2(inputs, blocks, num_classes, is_training=is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
reuse=reuse, scope=scope)
resnet_v2_152.default_image_size = resnet_v2.default_image_size
def resnet_v2_200(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
spatial_squeeze=True,
reuse=None,
scope='resnet_v2_200'):
"""ResNet-200 model of [2]. See resnet_v2() for arg and return description."""
blocks = [
resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v2_block('block2', base_depth=128, num_units=24, stride=2),
resnet_v2_block('block3', base_depth=256, num_units=36, stride=2),
resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
return resnet_v2(inputs, blocks, num_classes, is_training=is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, spatial_squeeze=spatial_squeeze,
reuse=reuse, scope=scope)
resnet_v2_200.default_image_size = resnet_v2.default_image_size

# ---- File boundary: end of slim/nets/resnet_v2.py ----
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
def vgg_arg_scope(weight_decay=0.0005):
"""Defines the VGG arg scope.
Args:
weight_decay: The l2 regularization coefficient.
Returns:
An arg_scope.
"""
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(weight_decay),
biases_initializer=tf.zeros_initializer()):
with slim.arg_scope([slim.conv2d], padding='SAME') as arg_sc:
return arg_sc
def vgg_a(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
reuse=None,
scope='vgg_a',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 11-Layers version A Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not to squeeze the spatial dimensions of the
      outputs. Useful to remove unnecessary dimensions for classification.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the input to the logits layer (if num_classes is 0 or None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(scope, 'vgg_a', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 1, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 1, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 2, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 2, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 2, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_a.default_image_size = 224
def vgg_16(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
reuse=None,
scope='vgg_16',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 16-Layers version D Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not to squeeze the spatial dimensions of the
      outputs. Useful to remove unnecessary dimensions for classification.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the input to the logits layer (if num_classes is 0 or None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(
scope, 'vgg_16', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_16.default_image_size = 224
def vgg_19(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=True,
reuse=None,
scope='vgg_19',
fc_conv_padding='VALID',
global_pool=False):
"""Oxford Net VGG 19-Layers version E Example.
Note: All the fully_connected layers have been transformed to conv2d layers.
To use in classification mode, resize input to 224x224.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer is
omitted and the input features to the logits layer are returned instead.
is_training: whether or not the model is being trained.
dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.
    spatial_squeeze: whether or not to squeeze the spatial dimensions of the
      outputs. Useful to remove unnecessary dimensions for classification.
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional scope for the variables.
fc_conv_padding: the type of padding to use for the fully connected layer
that is implemented as a convolutional layer. Use 'SAME' padding if you
are applying the network in a fully convolutional manner and want to
get a prediction map downsampled by a factor of 32 as an output.
Otherwise, the output prediction map will be (input / 32) - 6 in case of
'VALID' padding.
global_pool: Optional boolean flag. If True, the input to the classification
layer is avgpooled to size 1x1, for any input size. (This is not part
of the original VGG architecture.)
Returns:
net: the output of the logits layer (if num_classes is a non-zero integer),
or the non-dropped-out input to the logits layer (if num_classes is 0 or
None).
end_points: a dict of tensors with intermediate activations.
"""
with tf.variable_scope(
scope, 'vgg_19', [inputs], reuse=reuse) as sc:
end_points_collection = sc.original_name_scope + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d.
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 4, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 4, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 4, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 4096, [7, 7], padding=fc_conv_padding, scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if global_pool:
net = tf.reduce_mean(
input_tensor=net, axis=[1, 2], keepdims=True, name='global_pool')
end_points['global_pool'] = net
if num_classes:
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout7')
net = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
scope='fc8')
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
vgg_19.default_image_size = 224
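
# Editorial usage sketch (not part of the original file): VGG-16 applied as a
# fully convolutional network. With fc_conv_padding='SAME' the 7x7 fc6 layer
# preserves spatial extent, so a larger input yields a score map downsampled
# by 32. The input size and class count are illustrative assumptions.
def _example_vgg_16_fcn():
  images = tf.placeholder(tf.float32, [1, 512, 512, 3])
  with slim.arg_scope(vgg_arg_scope()):
    net, end_points = vgg_16(images, num_classes=21, is_training=False,
                             spatial_squeeze=False, fc_conv_padding='SAME')
  # net is a coarse [1, 16, 16, 21] prediction map (512 / 32 = 16).
  return net, end_points
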
# Alias
vgg_d = vgg_16
vgg_e = vgg_19

# ---- File boundary: end of slim/nets/vgg.py ----
"""Convolution blocks for mobilenet."""
import contextlib
import functools
import tensorflow.compat.v1 as tf
import tf_slim as slim
def _fixed_padding(inputs, kernel_size, rate=1):
"""Pads the input along the spatial dimensions independently of input size.
Pads the input such that if it was used in a convolution with 'VALID' padding,
the output would have the same dimensions as if the unpadded input was used
in a convolution with 'SAME' padding.
Args:
inputs: A tensor of size [batch, height_in, width_in, channels].
kernel_size: The kernel to be used in the conv2d or max_pool2d operation.
rate: An integer, rate for atrous convolution.
Returns:
output: A tensor of size [batch, height_out, width_out, channels] with the
input, either intact (if kernel_size == 1) or padded (if kernel_size > 1).
"""
  kernel_size_effective = [kernel_size[0] + (kernel_size[0] - 1) * (rate - 1),
                           kernel_size[1] + (kernel_size[1] - 1) * (rate - 1)]
pad_total = [kernel_size_effective[0] - 1, kernel_size_effective[1] - 1]
pad_beg = [pad_total[0] // 2, pad_total[1] // 2]
pad_end = [pad_total[0] - pad_beg[0], pad_total[1] - pad_beg[1]]
padded_inputs = tf.pad(inputs, [[0, 0], [pad_beg[0], pad_end[0]],
[pad_beg[1], pad_end[1]], [0, 0]])
return padded_inputs
def _make_divisible(v, divisor, min_value=None):
if min_value is None:
min_value = divisor
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_v < 0.9 * v:
new_v += divisor
return new_v
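
# Editorial examples (not in the original file): _make_divisible(37, 8) == 40
# and _make_divisible(30, 8) == 32 -- values round to the nearest multiple of
# the divisor, never dropping below 90% of the original value.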
def _split_divisible(num, num_ways, divisible_by=8):
"""Evenly splits num, num_ways so each piece is a multiple of divisible_by."""
assert num % divisible_by == 0
assert num / num_ways >= divisible_by
# Note: want to round down, we adjust each split to match the total.
base = num // num_ways // divisible_by * divisible_by
result = []
accumulated = 0
for i in range(num_ways):
r = base
while accumulated + r < num * (i + 1) / num_ways:
r += divisible_by
result.append(r)
accumulated += r
assert accumulated == num
return result
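
# Editorial example (not in the original file): _split_divisible(56, 3)
# returns [24, 16, 16] -- three parts, each a multiple of 8, summing to 56.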
@contextlib.contextmanager
def _v1_compatible_scope_naming(scope):
"""v1 compatible scope naming."""
  if scope is None:  # Create uniquified separable blocks.
with tf.variable_scope(None, default_name='separable') as s, \
tf.name_scope(s.original_name_scope):
yield ''
else:
    # We use scope_depthwise, scope_pointwise for compatibility with V1
    # checkpoints, which provide numbered scopes.
scope += '_'
yield scope
@slim.add_arg_scope
def split_separable_conv2d(input_tensor,
num_outputs,
scope=None,
normalizer_fn=None,
stride=1,
rate=1,
endpoints=None,
use_explicit_padding=False):
"""Separable mobilenet V1 style convolution.
  Depthwise convolution, with default non-linearity, followed by a 1x1
  pointwise convolution. This is similar to slim.separable_conv2d, but
  differs in that it applies batch normalization and non-linearity to the
  depthwise convolution. This matches the basic building block of the
  MobileNet paper (https://arxiv.org/abs/1704.04861).
Args:
input_tensor: input
num_outputs: number of outputs
    scope: optional name of the scope. Note if provided it will use
      scope_depthwise for the depthwise op, and scope_pointwise for pointwise.
normalizer_fn: which normalizer function to use for depthwise/pointwise
stride: stride
rate: output rate (also known as dilation rate)
endpoints: optional, if provided, will export additional tensors to it.
use_explicit_padding: Use 'VALID' padding for convolutions, but prepad
inputs so that the output dimensions are the same as if 'SAME' padding
were used.
Returns:
    output tensor
"""
with _v1_compatible_scope_naming(scope) as scope:
dw_scope = scope + 'depthwise'
endpoints = endpoints if endpoints is not None else {}
kernel_size = [3, 3]
padding = 'SAME'
if use_explicit_padding:
padding = 'VALID'
input_tensor = _fixed_padding(input_tensor, kernel_size, rate)
net = slim.separable_conv2d(
input_tensor,
None,
kernel_size,
depth_multiplier=1,
stride=stride,
rate=rate,
normalizer_fn=normalizer_fn,
padding=padding,
scope=dw_scope)
endpoints[dw_scope] = net
pw_scope = scope + 'pointwise'
net = slim.conv2d(
net,
num_outputs, [1, 1],
stride=1,
normalizer_fn=normalizer_fn,
scope=pw_scope)
endpoints[pw_scope] = net
return net
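
# Editorial usage sketch (not part of the original file): a MobileNetV1-style
# separable block producing 64 channels at stride 2; the input shape is an
# illustrative assumption.
def _example_split_separable_conv2d():
  x = tf.placeholder(tf.float32, [1, 56, 56, 32])
  net = split_separable_conv2d(
      x, num_outputs=64, stride=2, normalizer_fn=slim.batch_norm)
  return net  # shape [1, 28, 28, 64]
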
def expand_input_by_factor(n, divisible_by=8):
return lambda num_inputs, **_: _make_divisible(num_inputs * n, divisible_by)
def split_conv(input_tensor,
num_outputs,
num_ways,
scope,
divisible_by=8,
**kwargs):
"""Creates a split convolution.
  Split convolution splits the input and output into 'num_ways' blocks of
  approximately the same size each, and only connects the i-th input block to
  the i-th output block.
Args:
input_tensor: input tensor
num_outputs: number of output filters
num_ways: num blocks to split by.
scope: scope for all the operators.
    divisible_by: make sure that every part is divisible by this.
**kwargs: will be passed directly into conv2d operator
Returns:
tensor
"""
b = input_tensor.get_shape().as_list()[3]
if num_ways == 1 or min(b // num_ways,
num_outputs // num_ways) < divisible_by:
# Don't do any splitting if we end up with less than 8 filters
# on either side.
return slim.conv2d(input_tensor, num_outputs, [1, 1], scope=scope, **kwargs)
outs = []
input_splits = _split_divisible(b, num_ways, divisible_by=divisible_by)
output_splits = _split_divisible(
num_outputs, num_ways, divisible_by=divisible_by)
inputs = tf.split(input_tensor, input_splits, axis=3, name='split_' + scope)
base = scope
for i, (input_tensor, out_size) in enumerate(zip(inputs, output_splits)):
scope = base + '_part_%d' % (i,)
n = slim.conv2d(input_tensor, out_size, [1, 1], scope=scope, **kwargs)
n = tf.identity(n, scope + '_output')
outs.append(n)
return tf.concat(outs, 3, name=scope + '_concat')
@slim.add_arg_scope
def expanded_conv(input_tensor,
num_outputs,
expansion_size=expand_input_by_factor(6),
stride=1,
rate=1,
kernel_size=(3, 3),
residual=True,
normalizer_fn=None,
split_projection=1,
split_expansion=1,
split_divisible_by=8,
expansion_transform=None,
depthwise_location='expansion',
depthwise_channel_multiplier=1,
endpoints=None,
use_explicit_padding=False,
padding='SAME',
inner_activation_fn=None,
depthwise_activation_fn=None,
project_activation_fn=tf.identity,
depthwise_fn=slim.separable_conv2d,
expansion_fn=split_conv,
projection_fn=split_conv,
scope=None):
"""Depthwise Convolution Block with expansion.
Builds a composite convolution that has the following structure
expansion (1x1) -> depthwise (kernel_size) -> projection (1x1)
Args:
input_tensor: input
num_outputs: number of outputs in the final layer.
expansion_size: the size of expansion, could be a constant or a callable.
If latter it will be provided 'num_inputs' as an input. For forward
compatibility it should accept arbitrary keyword arguments.
Default will expand the input by factor of 6.
stride: depthwise stride
rate: depthwise rate
kernel_size: depthwise kernel
residual: whether to include residual connection between input
and output.
normalizer_fn: batchnorm or otherwise
split_projection: how many ways to split projection operator
(that is conv expansion->bottleneck)
split_expansion: how many ways to split expansion op
(that is conv bottleneck->expansion) ops will keep depth divisible
by this value.
split_divisible_by: make sure every split group is divisible by this number.
expansion_transform: Optional function that takes expansion
as a single input and returns output.
    depthwise_location: where to put the depthwise convolution. Supported
      values: None, 'input', 'output', 'expansion'.
    depthwise_channel_multiplier: depthwise channel multiplier: each input
      channel will be replicated (with different filters) that many times. So
      if the input had c channels, the output will have
      c * depthwise_channel_multiplier channels.
endpoints: An optional dictionary into which intermediate endpoints are
placed. The keys "expansion_output", "depthwise_output",
"projection_output" and "expansion_transform" are always populated, even
if the corresponding functions are not invoked.
use_explicit_padding: Use 'VALID' padding for convolutions, but prepad
inputs so that the output dimensions are the same as if 'SAME' padding
were used.
padding: Padding type to use if `use_explicit_padding` is not set.
inner_activation_fn: activation function to use in all inner convolutions.
If none, will rely on slim default scopes.
    depthwise_activation_fn: activation function to use for depthwise only.
If not provided will rely on slim default scopes. If both
inner_activation_fn and depthwise_activation_fn are provided,
depthwise_activation_fn takes precedence over inner_activation_fn.
project_activation_fn: activation function for the project layer.
(note this layer is not affected by inner_activation_fn)
depthwise_fn: Depthwise convolution function.
expansion_fn: Expansion convolution function. If use custom function then
"split_expansion" and "split_divisible_by" will be ignored.
projection_fn: Projection convolution function. If use custom function then
"split_projection" and "split_divisible_by" will be ignored.
scope: optional scope.
Returns:
Tensor of depth num_outputs
Raises:
    TypeError: on an unknown depthwise_location, or when use_explicit_padding
      is combined with padding other than 'SAME'.
"""
conv_defaults = {}
dw_defaults = {}
if inner_activation_fn is not None:
conv_defaults['activation_fn'] = inner_activation_fn
dw_defaults['activation_fn'] = inner_activation_fn
if depthwise_activation_fn is not None:
dw_defaults['activation_fn'] = depthwise_activation_fn
# pylint: disable=g-backslash-continuation
with tf.variable_scope(scope, default_name='expanded_conv') as s, \
tf.name_scope(s.original_name_scope), \
slim.arg_scope((slim.conv2d,), **conv_defaults), \
slim.arg_scope((slim.separable_conv2d,), **dw_defaults):
prev_depth = input_tensor.get_shape().as_list()[3]
if depthwise_location not in [None, 'input', 'output', 'expansion']:
raise TypeError('%r is unknown value for depthwise_location' %
depthwise_location)
if use_explicit_padding:
if padding != 'SAME':
raise TypeError('`use_explicit_padding` should only be used with '
'"SAME" padding.')
padding = 'VALID'
depthwise_func = functools.partial(
depthwise_fn,
num_outputs=None,
kernel_size=kernel_size,
depth_multiplier=depthwise_channel_multiplier,
stride=stride,
rate=rate,
normalizer_fn=normalizer_fn,
padding=padding,
scope='depthwise')
# b1 -> b2 * r -> b2
# i -> (o * r) (bottleneck) -> o
input_tensor = tf.identity(input_tensor, 'input')
net = input_tensor
if depthwise_location == 'input':
if use_explicit_padding:
net = _fixed_padding(net, kernel_size, rate)
net = depthwise_func(net, activation_fn=None)
net = tf.identity(net, name='depthwise_output')
if endpoints is not None:
endpoints['depthwise_output'] = net
if callable(expansion_size):
inner_size = expansion_size(num_inputs=prev_depth)
else:
inner_size = expansion_size
if inner_size > net.shape[3]:
if expansion_fn == split_conv:
expansion_fn = functools.partial(
expansion_fn,
num_ways=split_expansion,
divisible_by=split_divisible_by,
stride=1)
net = expansion_fn(
net,
inner_size,
scope='expand',
normalizer_fn=normalizer_fn)
net = tf.identity(net, 'expansion_output')
if endpoints is not None:
endpoints['expansion_output'] = net
if depthwise_location == 'expansion':
if use_explicit_padding:
net = _fixed_padding(net, kernel_size, rate)
net = depthwise_func(net)
net = tf.identity(net, name='depthwise_output')
if endpoints is not None:
endpoints['depthwise_output'] = net
if expansion_transform:
net = expansion_transform(expansion_tensor=net, input_tensor=input_tensor)
# Note in contrast with expansion, we always have
# projection to produce the desired output size.
if projection_fn == split_conv:
projection_fn = functools.partial(
projection_fn,
num_ways=split_projection,
divisible_by=split_divisible_by,
stride=1)
net = projection_fn(
net,
num_outputs,
scope='project',
normalizer_fn=normalizer_fn,
activation_fn=project_activation_fn)
if endpoints is not None:
endpoints['projection_output'] = net
if depthwise_location == 'output':
if use_explicit_padding:
net = _fixed_padding(net, kernel_size, rate)
net = depthwise_func(net, activation_fn=None)
net = tf.identity(net, name='depthwise_output')
if endpoints is not None:
endpoints['depthwise_output'] = net
if callable(residual): # custom residual
net = residual(input_tensor=input_tensor, output_tensor=net)
elif (residual and
# stride check enforces that we don't add residuals when spatial
# dimensions are None
stride == 1 and
# Depth matches
net.get_shape().as_list()[3] ==
input_tensor.get_shape().as_list()[3]):
net += input_tensor
return tf.identity(net, name='output')
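
# Editorial usage sketch (not part of the original file): a single
# MobileNetV2-style inverted residual block (1x1 expand -> 3x3 depthwise ->
# 1x1 project). With stride=1 and matching input/output depths, the residual
# connection is added. Shapes are illustrative assumptions.
def _example_expanded_conv():
  x = tf.placeholder(tf.float32, [1, 28, 28, 32])
  net = expanded_conv(
      x, num_outputs=32, expansion_size=expand_input_by_factor(6),
      stride=1, normalizer_fn=slim.batch_norm)
  return net  # shape [1, 28, 28, 32], including the skip connection
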
@slim.add_arg_scope
def squeeze_excite(input_tensor,
divisible_by=8,
squeeze_factor=3,
inner_activation_fn=tf.nn.relu,
gating_fn=tf.sigmoid,
squeeze_input_tensor=None,
pool=None):
"""Squeeze excite block for Mobilenet V3.
If the squeeze_input_tensor - or the input_tensor if squeeze_input_tensor is
None - contains variable dimensions (Nonetype in tensor shape), perform
average pooling (as the first step in the squeeze operation) by calling
reduce_mean across the H/W of the input tensor.
Args:
input_tensor: input tensor to apply SE block to.
divisible_by: ensures all inner dimensions are divisible by this number.
squeeze_factor: the factor of squeezing in the inner fully connected layer
inner_activation_fn: non-linearity to be used in inner layer.
gating_fn: non-linearity to be used for final gating function
squeeze_input_tensor: custom tensor to use for computing gating activation.
If provided the result will be input_tensor * SE(squeeze_input_tensor)
instead of input_tensor * SE(input_tensor).
pool: if number is provided will average pool with that kernel size
to compute inner tensor, followed by bilinear upsampling.
Returns:
Gated input_tensor. (e.g. X * SE(X))
"""
with tf.variable_scope('squeeze_excite'):
if squeeze_input_tensor is None:
squeeze_input_tensor = input_tensor
input_size = input_tensor.shape.as_list()[1:3]
pool_height, pool_width = squeeze_input_tensor.shape.as_list()[1:3]
stride = 1
if pool is not None and pool_height >= pool:
pool_height, pool_width, stride = pool, pool, pool
input_channels = squeeze_input_tensor.shape.as_list()[3]
output_channels = input_tensor.shape.as_list()[3]
squeeze_channels = _make_divisible(
input_channels / squeeze_factor, divisor=divisible_by)
if pool is None:
pooled = tf.reduce_mean(squeeze_input_tensor, axis=[1, 2], keepdims=True)
else:
pooled = tf.nn.avg_pool(
squeeze_input_tensor, (1, pool_height, pool_width, 1),
strides=(1, stride, stride, 1),
padding='VALID')
squeeze = slim.conv2d(
pooled,
kernel_size=(1, 1),
num_outputs=squeeze_channels,
normalizer_fn=None,
activation_fn=inner_activation_fn)
excite_outputs = output_channels
excite = slim.conv2d(squeeze, num_outputs=excite_outputs,
kernel_size=[1, 1],
normalizer_fn=None,
activation_fn=gating_fn)
if pool is not None:
# Note: As of 03/20/2019 only BILINEAR (the default) with
# align_corners=True has gradients implemented in TPU.
excite = tf.image.resize_images(
excite, input_size,
align_corners=True)
result = input_tensor * excite
    return result

# ---- File boundary: end of slim/nets/mobilenet/conv_blocks.py ----
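
# Editorial usage sketch (not part of the original file): gates a feature map
# with squeeze_excite from the conv_blocks file that ends above. The default
# (pool=None) path uses reduce_mean and so tolerates unknown spatial
# dimensions. The input shape is an illustrative assumption.
def _example_squeeze_excite():
  import tensorflow.compat.v1 as tf
  from nets.mobilenet import conv_blocks as ops  # the file above
  x = tf.placeholder(tf.float32, [1, 14, 14, 96])
  gated = ops.squeeze_excite(x, squeeze_factor=4)
  return gated  # same shape as x: [1, 14, 14, 96]
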
"""Mobilenet Base Class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import contextlib
import copy
import os
import tensorflow.compat.v1 as tf
import tf_slim as slim
@slim.add_arg_scope
def apply_activation(x, name=None, activation_fn=None):
return activation_fn(x, name=name) if activation_fn else x
def _fixed_padding(inputs, kernel_size, rate=1):
"""Pads the input along the spatial dimensions independently of input size.
Pads the input such that if it was used in a convolution with 'VALID' padding,
the output would have the same dimensions as if the unpadded input was used
in a convolution with 'SAME' padding.
Args:
inputs: A tensor of size [batch, height_in, width_in, channels].
kernel_size: The kernel to be used in the conv2d or max_pool2d operation.
rate: An integer, rate for atrous convolution.
Returns:
output: A tensor of size [batch, height_out, width_out, channels] with the
input, either intact (if kernel_size == 1) or padded (if kernel_size > 1).
"""
  kernel_size_effective = [kernel_size[0] + (kernel_size[0] - 1) * (rate - 1),
                           kernel_size[1] + (kernel_size[1] - 1) * (rate - 1)]
pad_total = [kernel_size_effective[0] - 1, kernel_size_effective[1] - 1]
pad_beg = [pad_total[0] // 2, pad_total[1] // 2]
pad_end = [pad_total[0] - pad_beg[0], pad_total[1] - pad_beg[1]]
padded_inputs = tf.pad(
tensor=inputs,
paddings=[[0, 0], [pad_beg[0], pad_end[0]], [pad_beg[1], pad_end[1]],
[0, 0]])
return padded_inputs
def _make_divisible(v, divisor, min_value=None):
if min_value is None:
min_value = divisor
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_v < 0.9 * v:
new_v += divisor
return int(new_v)
@contextlib.contextmanager
def _set_arg_scope_defaults(defaults):
"""Sets arg scope defaults for all items present in defaults.
Args:
defaults: dictionary/list of pairs, containing a mapping from
function to a dictionary of default args.
Yields:
context manager where all defaults are set.
"""
if hasattr(defaults, 'items'):
items = list(defaults.items())
else:
items = defaults
if not items:
yield
else:
func, default_arg = items[0]
with slim.arg_scope(func, **default_arg):
with _set_arg_scope_defaults(items[1:]):
yield
@slim.add_arg_scope
def depth_multiplier(output_params,
multiplier,
divisible_by=8,
min_depth=8,
**unused_kwargs):
if 'num_outputs' not in output_params:
return
d = output_params['num_outputs']
output_params['num_outputs'] = _make_divisible(d * multiplier, divisible_by,
min_depth)
_Op = collections.namedtuple('Op', ['op', 'params', 'multiplier_func'])
def op(opfunc, multiplier_func=depth_multiplier, **params):
multiplier = params.pop('multiplier_transform', multiplier_func)
return _Op(opfunc, params=params, multiplier_func=multiplier)
class NoOpScope(object):
"""No-op context manager."""
def __enter__(self):
return None
def __exit__(self, exc_type, exc_value, traceback):
return False
def safe_arg_scope(funcs, **kwargs):
"""Returns `slim.arg_scope` with all None arguments removed.
Args:
funcs: Functions to pass to `arg_scope`.
**kwargs: Arguments to pass to `arg_scope`.
Returns:
arg_scope or No-op context manager.
Note: can be useful if None value should be interpreted as "do not overwrite
this parameter value".
"""
filtered_args = {name: value for name, value in kwargs.items()
if value is not None}
if filtered_args:
return slim.arg_scope(funcs, **filtered_args)
else:
return NoOpScope()
@slim.add_arg_scope
def mobilenet_base( # pylint: disable=invalid-name
inputs,
conv_defs,
multiplier=1.0,
final_endpoint=None,
output_stride=None,
use_explicit_padding=False,
scope=None,
is_training=False):
"""Mobilenet base network.
Constructs a network from inputs to the given final endpoint. By default
the network is constructed in inference mode. To create network
in training mode use:
with slim.arg_scope(mobilenet.training_scope()):
logits, endpoints = mobilenet_base(...)
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
conv_defs: A list of op(...) layers specifying the net architecture.
multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
    final_endpoint: The name of the last layer, for early termination. For
      V1-based networks the last layer is "layer_14"; for V2 it is "layer_20".
output_stride: An integer that specifies the requested ratio of input to
output spatial resolution. If not None, then we invoke atrous convolution
if necessary to prevent the network from reducing the spatial resolution
of the activation maps. Allowed values are 1 or any even number, excluding
zero. Typical values are 8 (accurate fully convolutional mode), 16
(fast fully convolutional mode), and 32 (classification mode).
      NOTE: output_stride relies on all subsequent operators to support
      dilation via the "rate" parameter. This might require wrapping non-conv
      operators for them to operate properly.
use_explicit_padding: Use 'VALID' padding for convolutions, but prepad
inputs so that the output dimensions are the same as if 'SAME' padding
were used.
scope: optional variable scope.
is_training: How to setup batch_norm and other ops. Note: most of the time
this does not need be set directly. Use mobilenet.training_scope() to set
up training instead. This parameter is here for backward compatibility
only. It is safe to set it to the value matching
training_scope(is_training=...). It is also safe to explicitly set
      it to False, even if there is an outer training_scope set to training.
(The network will be built in inference mode). If this is set to None,
no arg_scope is added for slim.batch_norm's is_training parameter.
Returns:
tensor_out: output tensor.
end_points: a set of activations for external use, for example summaries or
losses.
Raises:
ValueError: depth_multiplier <= 0, or the target output_stride is not
allowed.
"""
if multiplier <= 0:
raise ValueError('multiplier is not greater than zero.')
# Set conv defs defaults and overrides.
conv_defs_defaults = conv_defs.get('defaults', {})
conv_defs_overrides = conv_defs.get('overrides', {})
if use_explicit_padding:
conv_defs_overrides = copy.deepcopy(conv_defs_overrides)
conv_defs_overrides[
(slim.conv2d, slim.separable_conv2d)] = {'padding': 'VALID'}
if output_stride is not None:
if output_stride == 0 or (output_stride > 1 and output_stride % 2):
raise ValueError('Output stride must be None, 1 or a multiple of 2.')
# a) Set the tensorflow scope
# b) set padding to default: note we might consider removing this
# since it is also set by mobilenet_scope
# c) set all defaults
# d) set all extra overrides.
# pylint: disable=g-backslash-continuation
with _scope_all(scope, default_scope='Mobilenet'), \
safe_arg_scope([slim.batch_norm], is_training=is_training), \
_set_arg_scope_defaults(conv_defs_defaults), \
_set_arg_scope_defaults(conv_defs_overrides):
# The current_stride variable keeps track of the output stride of the
# activations, i.e., the running product of convolution strides up to the
# current network layer. This allows us to invoke atrous convolution
# whenever applying the next convolution would result in the activations
# having output stride larger than the target output_stride.
current_stride = 1
# The atrous convolution rate parameter.
rate = 1
net = inputs
# Insert default parameters before the base scope which includes
# any custom overrides set in mobilenet.
end_points = {}
scopes = {}
for i, opdef in enumerate(conv_defs['spec']):
params = dict(opdef.params)
opdef.multiplier_func(params, multiplier)
stride = params.get('stride', 1)
if output_stride is not None and current_stride == output_stride:
# If we have reached the target output_stride, then we need to employ
# atrous convolution with stride=1 and multiply the atrous rate by the
# current unit's stride for use in subsequent layers.
layer_stride = 1
layer_rate = rate
rate *= stride
else:
layer_stride = stride
layer_rate = 1
current_stride *= stride
# Update params.
params['stride'] = layer_stride
# Only insert rate to params if rate > 1 and kernel size is not [1, 1].
if layer_rate > 1:
if tuple(params.get('kernel_size', [])) != (1, 1):
# We will apply atrous rate in the following cases:
# 1) When kernel_size is not in params, the operation then uses
# default kernel size 3x3.
# 2) When kernel_size is in params, and if the kernel_size is not
# equal to (1, 1) (there is no need to apply atrous convolution to
# any 1x1 convolution).
params['rate'] = layer_rate
# Set padding
if use_explicit_padding:
if 'kernel_size' in params:
net = _fixed_padding(net, params['kernel_size'], layer_rate)
else:
params['use_explicit_padding'] = True
end_point = 'layer_%d' % (i + 1)
try:
net = opdef.op(net, **params)
except Exception:
print('Failed to create op %i: %r params: %r' % (i, opdef, params))
raise
end_points[end_point] = net
scope = os.path.dirname(net.name)
scopes[scope] = end_point
if final_endpoint is not None and end_point == final_endpoint:
break
# Add all tensors that end with 'output' to
# endpoints
for t in net.graph.get_operations():
scope = os.path.dirname(t.name)
bn = os.path.basename(t.name)
if scope in scopes and t.name.endswith('output'):
end_points[scopes[scope] + '/' + bn] = t.outputs[0]
return net, end_points
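
# Editorial usage sketch (not part of the original file): mobilenet_base with
# the MobileNetV2 conv_defs. The import of a sibling mobilenet_v2 module
# defining V2_DEF is an assumption about the surrounding package.
def _example_mobilenet_base():
  from nets.mobilenet import mobilenet_v2  # assumed sibling module
  images = tf.placeholder(tf.float32, [1, 224, 224, 3])
  net, end_points = mobilenet_base(
      images, conv_defs=mobilenet_v2.V2_DEF, output_stride=16)
  return net, end_points
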
@contextlib.contextmanager
def _scope_all(scope, default_scope=None):
with tf.variable_scope(scope, default_name=default_scope) as s,\
tf.name_scope(s.original_name_scope):
yield s
@slim.add_arg_scope
def mobilenet(inputs,
num_classes=1001,
prediction_fn=slim.softmax,
reuse=None,
scope='Mobilenet',
base_only=False,
use_reduce_mean_for_pooling=False,
**mobilenet_args):
"""Mobilenet model for classification, supports both V1 and V2.
Note: default mode is inference, use mobilenet.training_scope to create
training network.
Args:
inputs: a tensor of shape [batch_size, height, width, channels].
num_classes: number of predicted classes. If 0 or None, the logits layer
is omitted and the input features to the logits layer (before dropout)
are returned instead.
prediction_fn: a function to get predictions out of logits
(default softmax).
reuse: whether or not the network and its variables should be reused. To be
able to reuse 'scope' must be given.
scope: Optional variable_scope.
base_only: if True will only create the base of the network (no pooling
and no logits).
    use_reduce_mean_for_pooling: if True, use reduce_mean for pooling. If
      False, use the global_pool function, which provides some optimization.
**mobilenet_args: passed to mobilenet_base verbatim.
- conv_defs: list of conv defs
- multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model.
- output_stride: will ensure that the last layer has at most total stride.
If the architecture calls for more stride than that provided
(e.g. output_stride=16, but the architecture has 5 stride=2 operators),
it will replace output_stride with fractional convolutions using Atrous
Convolutions.
Returns:
logits: the pre-softmax activations, a tensor of size
[batch_size, num_classes]
end_points: a dictionary from components of the network to the corresponding
activation tensor.
Raises:
ValueError: Input rank is invalid.
"""
is_training = mobilenet_args.get('is_training', False)
input_shape = inputs.get_shape().as_list()
if len(input_shape) != 4:
raise ValueError('Expected rank 4 input, was: %d' % len(input_shape))
with tf.variable_scope(scope, 'Mobilenet', reuse=reuse) as scope:
inputs = tf.identity(inputs, 'input')
net, end_points = mobilenet_base(inputs, scope=scope, **mobilenet_args)
if base_only:
return net, end_points
net = tf.identity(net, name='embedding')
with tf.variable_scope('Logits'):
net = global_pool(net, use_reduce_mean_for_pooling)
end_points['global_pool'] = net
if not num_classes:
return net, end_points
net = slim.dropout(net, scope='Dropout', is_training=is_training)
# 1 x 1 x num_classes
# Note: legacy scope name.
logits = slim.conv2d(
net,
num_classes, [1, 1],
activation_fn=None,
normalizer_fn=None,
biases_initializer=tf.zeros_initializer(),
scope='Conv2d_1c_1x1')
logits = tf.squeeze(logits, [1, 2])
logits = tf.identity(logits, name='output')
end_points['Logits'] = logits
if prediction_fn:
end_points['Predictions'] = prediction_fn(logits, 'Predictions')
return logits, end_points
def global_pool(input_tensor,
use_reduce_mean_for_pooling=False,
pool_op=tf.nn.avg_pool2d):
"""Applies avg pool to produce 1x1 output.
  NOTE: This function is functionally equivalent to reduce_mean, but it has
  a baked-in average pool, which has better support across hardware.
Args:
input_tensor: input tensor
use_reduce_mean_for_pooling: if True use reduce_mean for pooling
pool_op: pooling op (avg pool is default)
Returns:
a tensor batch_size x 1 x 1 x depth.
"""
if use_reduce_mean_for_pooling:
return tf.reduce_mean(
input_tensor, [1, 2], keepdims=True, name='ReduceMean')
else:
shape = input_tensor.get_shape().as_list()
if shape[1] is None or shape[2] is None:
kernel_size = tf.convert_to_tensor(value=[
1,
tf.shape(input=input_tensor)[1],
tf.shape(input=input_tensor)[2], 1
])
else:
kernel_size = [1, shape[1], shape[2], 1]
output = pool_op(
input_tensor, ksize=kernel_size, strides=[1, 1, 1, 1], padding='VALID')
# Recover output shape, for unknown shape.
output.set_shape([None, 1, 1, None])
return output
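# Equivalence sketch (added for clarity, not part of the original file): for
# a statically shaped input both branches of global_pool produce the same
# values; only the underlying op differs.
#
#   x = tf.random.uniform([2, 7, 7, 32])
#   a = global_pool(x, use_reduce_mean_for_pooling=True)   # ReduceMean
#   b = global_pool(x, use_reduce_mean_for_pooling=False)  # AvgPool
#   # Both a and b have shape [2, 1, 1, 32] and (near) identical values.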
def training_scope(is_training=True,
weight_decay=0.00004,
stddev=0.09,
dropout_keep_prob=0.8,
bn_decay=0.997):
"""Defines Mobilenet training scope.
Usage:
with slim.arg_scope(mobilenet.training_scope()):
logits, endpoints = mobilenet_v2.mobilenet(input_tensor)
     # the network created will be trainable with dropout/batch norm
# initialized appropriately.
Args:
is_training: if set to False this will ensure that all customizations are
set to non-training mode. This might be helpful for code that is reused
across both training/evaluation, but most of the time training_scope with
      value False is not needed. If this is set to None, the parameter is not
      added to the batch_norm arg_scope.
weight_decay: The weight decay to use for regularizing the model.
stddev: Standard deviation for initialization, if negative uses xavier.
dropout_keep_prob: dropout keep probability (not set if equals to None).
bn_decay: decay for the batch norm moving averages (not set if equals to
None).
Returns:
An argument scope to use via arg_scope.
"""
# Note: do not introduce parameters that would change the inference
# model here (for example whether to use bias), modify conv_def instead.
batch_norm_params = {
'decay': bn_decay,
'is_training': is_training
}
  if stddev < 0:
    weight_initializer = slim.initializers.xavier_initializer()
  else:
    weight_initializer = tf.truncated_normal_initializer(
        stddev=stddev)
  # Set weight_decay for weights in Conv and FC layers.
  with slim.arg_scope(
      [slim.conv2d, slim.fully_connected, slim.separable_conv2d],
      weights_initializer=weight_initializer,
normalizer_fn=slim.batch_norm), \
slim.arg_scope([mobilenet_base, mobilenet], is_training=is_training),\
safe_arg_scope([slim.batch_norm], **batch_norm_params), \
safe_arg_scope([slim.dropout], is_training=is_training,
keep_prob=dropout_keep_prob), \
slim.arg_scope([slim.conv2d], \
weights_regularizer=slim.l2_regularizer(weight_decay)), \
slim.arg_scope([slim.separable_conv2d], weights_regularizer=None) as s:
    return s


# ==== slim/nets/mobilenet/mobilenet_v3.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import functools
import numpy as np
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets.mobilenet import conv_blocks as ops
from nets.mobilenet import mobilenet as lib
op = lib.op
expand_input = ops.expand_input_by_factor
# Squeeze-and-excite with all parameters filled in; we use a hard sigmoid
# for the gating function and ReLU for the inner activation function.
squeeze_excite = functools.partial(
ops.squeeze_excite, squeeze_factor=4,
inner_activation_fn=tf.nn.relu,
gating_fn=lambda x: tf.nn.relu6(x+3)*0.16667)
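# Note (added for clarity): the gating function above is the "hard sigmoid"
# relu6(x + 3) / 6, written with the constant 0.16667 ~= 1/6. It maps
# x <= -3 to 0, x >= 3 to 1, and is linear in between.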
# Wrap squeeze excite op as expansion_transform that takes
# both expansion and input tensor.
_se4 = lambda expansion_tensor, input_tensor: squeeze_excite(expansion_tensor)
def hard_swish(x):
with tf.name_scope('hard_swish'):
return x * tf.nn.relu6(x + np.float32(3)) * np.float32(1. / 6.)
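# Note (added for clarity): hard_swish(x) = x * relu6(x + 3) / 6 is the
# piecewise-linear approximation of swish used by MobilenetV3. Sanity check:
# hard_swish(-3) = 0, hard_swish(0) = 0, hard_swish(3) = 3, and for large x
# it approaches the identity.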
def reduce_to_1x1(input_tensor, default_size=7, **kwargs):
h, w = input_tensor.shape.as_list()[1:3]
if h is not None and w == h:
k = [h, h]
else:
k = [default_size, default_size]
return slim.avg_pool2d(input_tensor, kernel_size=k, **kwargs)
def mbv3_op(ef, n, k, s=1, act=tf.nn.relu, se=None, **kwargs):
"""Defines a single Mobilenet V3 convolution block.
Args:
ef: expansion factor
n: number of output channels
    k: kernel size of the depthwise convolution
s: stride
act: activation function in inner layers
se: squeeze excite function.
**kwargs: passed to expanded_conv
Returns:
An object (lib._Op) for inserting in conv_def, representing this operation.
"""
return op(
ops.expanded_conv,
expansion_size=expand_input(ef),
kernel_size=(k, k),
stride=s,
num_outputs=n,
inner_activation_fn=act,
expansion_transform=se,
**kwargs)
def mbv3_fused(ef, n, k, s=1, **kwargs):
"""Defines a single Mobilenet V3 convolution block.
Args:
ef: expansion factor
n: number of output channels
k: stride of depthwise
s: stride
**kwargs: will be passed to mbv3_op
Returns:
An object (lib._Op) for inserting in conv_def, representing this operation.
"""
expansion_fn = functools.partial(slim.conv2d, kernel_size=k, stride=s)
return mbv3_op(
ef,
n,
k=1,
s=s,
depthwise_location=None,
expansion_fn=expansion_fn,
**kwargs)
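# Note (added for clarity): compared to mbv3_op, mbv3_fused folds the
# expansion and the spatial convolution into a single k x k regular conv
# (depthwise_location=None), which maps better onto accelerators such as the
# EdgeTPU; see V3_EDGETPU below.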
mbv3_op_se = functools.partial(mbv3_op, se=_se4)
DEFAULTS = {
(ops.expanded_conv,):
dict(
normalizer_fn=slim.batch_norm,
residual=True),
(slim.conv2d, slim.fully_connected, slim.separable_conv2d): {
'normalizer_fn': slim.batch_norm,
'activation_fn': tf.nn.relu,
},
(slim.batch_norm,): {
'center': True,
'scale': True
},
}
DEFAULTS_GROUP_NORM = {
(ops.expanded_conv,): dict(normalizer_fn=slim.group_norm, residual=True),
(slim.conv2d, slim.fully_connected, slim.separable_conv2d): {
'normalizer_fn': slim.group_norm,
'activation_fn': tf.nn.relu,
},
(slim.group_norm,): {
'groups': 8
},
}
# Compatible checkpoint: http://mldash/5511169891790690458#scalars
V3_LARGE = dict(
defaults=dict(DEFAULTS),
spec=([
# stage 1
op(slim.conv2d, stride=2, num_outputs=16, kernel_size=(3, 3),
activation_fn=hard_swish),
mbv3_op(ef=1, n=16, k=3),
mbv3_op(ef=4, n=24, k=3, s=2),
mbv3_op(ef=3, n=24, k=3, s=1),
mbv3_op_se(ef=3, n=40, k=5, s=2),
mbv3_op_se(ef=3, n=40, k=5, s=1),
mbv3_op_se(ef=3, n=40, k=5, s=1),
mbv3_op(ef=6, n=80, k=3, s=2, act=hard_swish),
mbv3_op(ef=2.5, n=80, k=3, s=1, act=hard_swish),
mbv3_op(ef=184/80., n=80, k=3, s=1, act=hard_swish),
mbv3_op(ef=184/80., n=80, k=3, s=1, act=hard_swish),
mbv3_op_se(ef=6, n=112, k=3, s=1, act=hard_swish),
mbv3_op_se(ef=6, n=112, k=3, s=1, act=hard_swish),
mbv3_op_se(ef=6, n=160, k=5, s=2, act=hard_swish),
mbv3_op_se(ef=6, n=160, k=5, s=1, act=hard_swish),
mbv3_op_se(ef=6, n=160, k=5, s=1, act=hard_swish),
op(slim.conv2d, stride=1, kernel_size=[1, 1], num_outputs=960,
activation_fn=hard_swish),
op(reduce_to_1x1, default_size=7, stride=1, padding='VALID'),
op(slim.conv2d, stride=1, kernel_size=[1, 1], num_outputs=1280,
normalizer_fn=None, activation_fn=hard_swish)
]))
# 72.2% accuracy.
V3_LARGE_MINIMALISTIC = dict(
defaults=dict(DEFAULTS),
spec=([
# stage 1
op(slim.conv2d, stride=2, num_outputs=16, kernel_size=(3, 3)),
mbv3_op(ef=1, n=16, k=3),
mbv3_op(ef=4, n=24, k=3, s=2),
mbv3_op(ef=3, n=24, k=3, s=1),
mbv3_op(ef=3, n=40, k=3, s=2),
mbv3_op(ef=3, n=40, k=3, s=1),
mbv3_op(ef=3, n=40, k=3, s=1),
mbv3_op(ef=6, n=80, k=3, s=2),
mbv3_op(ef=2.5, n=80, k=3, s=1),
mbv3_op(ef=184 / 80., n=80, k=3, s=1),
mbv3_op(ef=184 / 80., n=80, k=3, s=1),
mbv3_op(ef=6, n=112, k=3, s=1),
mbv3_op(ef=6, n=112, k=3, s=1),
mbv3_op(ef=6, n=160, k=3, s=2),
mbv3_op(ef=6, n=160, k=3, s=1),
mbv3_op(ef=6, n=160, k=3, s=1),
op(slim.conv2d, stride=1, kernel_size=[1, 1], num_outputs=960),
op(reduce_to_1x1, default_size=7, stride=1, padding='VALID'),
op(slim.conv2d,
stride=1,
kernel_size=[1, 1],
num_outputs=1280,
normalizer_fn=None)
]))
# Compatible run: http://mldash/2023283040014348118#scalars
V3_SMALL = dict(
defaults=dict(DEFAULTS),
spec=([
# stage 1
op(slim.conv2d, stride=2, num_outputs=16, kernel_size=(3, 3),
activation_fn=hard_swish),
mbv3_op_se(ef=1, n=16, k=3, s=2),
mbv3_op(ef=72./16, n=24, k=3, s=2),
mbv3_op(ef=(88./24), n=24, k=3, s=1),
mbv3_op_se(ef=4, n=40, k=5, s=2, act=hard_swish),
mbv3_op_se(ef=6, n=40, k=5, s=1, act=hard_swish),
mbv3_op_se(ef=6, n=40, k=5, s=1, act=hard_swish),
mbv3_op_se(ef=3, n=48, k=5, s=1, act=hard_swish),
mbv3_op_se(ef=3, n=48, k=5, s=1, act=hard_swish),
mbv3_op_se(ef=6, n=96, k=5, s=2, act=hard_swish),
mbv3_op_se(ef=6, n=96, k=5, s=1, act=hard_swish),
mbv3_op_se(ef=6, n=96, k=5, s=1, act=hard_swish),
op(slim.conv2d, stride=1, kernel_size=[1, 1], num_outputs=576,
activation_fn=hard_swish),
op(reduce_to_1x1, default_size=7, stride=1, padding='VALID'),
op(slim.conv2d, stride=1, kernel_size=[1, 1], num_outputs=1024,
normalizer_fn=None, activation_fn=hard_swish)
]))
# 62% accuracy.
V3_SMALL_MINIMALISTIC = dict(
defaults=dict(DEFAULTS),
spec=([
# stage 1
op(slim.conv2d, stride=2, num_outputs=16, kernel_size=(3, 3)),
mbv3_op(ef=1, n=16, k=3, s=2),
mbv3_op(ef=72. / 16, n=24, k=3, s=2),
mbv3_op(ef=(88. / 24), n=24, k=3, s=1),
mbv3_op(ef=4, n=40, k=3, s=2),
mbv3_op(ef=6, n=40, k=3, s=1),
mbv3_op(ef=6, n=40, k=3, s=1),
mbv3_op(ef=3, n=48, k=3, s=1),
mbv3_op(ef=3, n=48, k=3, s=1),
mbv3_op(ef=6, n=96, k=3, s=2),
mbv3_op(ef=6, n=96, k=3, s=1),
mbv3_op(ef=6, n=96, k=3, s=1),
op(slim.conv2d, stride=1, kernel_size=[1, 1], num_outputs=576),
op(reduce_to_1x1, default_size=7, stride=1, padding='VALID'),
op(slim.conv2d,
stride=1,
kernel_size=[1, 1],
num_outputs=1024,
normalizer_fn=None)
]))
# EdgeTPU friendly variant of MobilenetV3 that uses fused convolutions
# instead of depthwise in the early layers.
V3_EDGETPU = dict(
defaults=dict(DEFAULTS),
spec=[
op(slim.conv2d, stride=2, num_outputs=32, kernel_size=(3, 3)),
mbv3_fused(k=3, s=1, ef=1, n=16),
mbv3_fused(k=3, s=2, ef=8, n=32),
mbv3_fused(k=3, s=1, ef=4, n=32),
mbv3_fused(k=3, s=1, ef=4, n=32),
mbv3_fused(k=3, s=1, ef=4, n=32),
mbv3_fused(k=3, s=2, ef=8, n=48),
mbv3_fused(k=3, s=1, ef=4, n=48),
mbv3_fused(k=3, s=1, ef=4, n=48),
mbv3_fused(k=3, s=1, ef=4, n=48),
mbv3_op(k=3, s=2, ef=8, n=96),
mbv3_op(k=3, s=1, ef=4, n=96),
mbv3_op(k=3, s=1, ef=4, n=96),
mbv3_op(k=3, s=1, ef=4, n=96),
mbv3_op(k=3, s=1, ef=8, n=96, residual=False),
mbv3_op(k=3, s=1, ef=4, n=96),
mbv3_op(k=3, s=1, ef=4, n=96),
mbv3_op(k=3, s=1, ef=4, n=96),
mbv3_op(k=5, s=2, ef=8, n=160),
mbv3_op(k=5, s=1, ef=4, n=160),
mbv3_op(k=5, s=1, ef=4, n=160),
mbv3_op(k=5, s=1, ef=4, n=160),
mbv3_op(k=3, s=1, ef=8, n=192),
op(slim.conv2d, stride=1, num_outputs=1280, kernel_size=(1, 1)),
])
@slim.add_arg_scope
def mobilenet(input_tensor,
num_classes=1001,
depth_multiplier=1.0,
scope='MobilenetV3',
conv_defs=None,
finegrain_classification_mode=False,
use_groupnorm=False,
**kwargs):
"""Creates mobilenet V3 network.
  Inference mode is created by default. To create a training network, use
  training_scope below.
with slim.arg_scope(mobilenet_v3.training_scope()):
logits, endpoints = mobilenet_v3.mobilenet(input_tensor)
Args:
input_tensor: The input tensor
num_classes: number of classes
depth_multiplier: The multiplier applied to scale number of
channels in each layer.
scope: Scope of the operator
conv_defs: Which version to create. Could be large/small or
any conv_def (see mobilenet_v3.py for examples).
finegrain_classification_mode: When set to True, the model
will keep the last layer large even for small multipliers. Following
https://arxiv.org/abs/1801.04381
it improves performance for ImageNet-type of problems.
*Note* ignored if final_endpoint makes the builder exit earlier.
use_groupnorm: When set to True, use group_norm as normalizer_fn.
**kwargs: passed directly to mobilenet.mobilenet:
prediction_fn- what prediction function to use.
reuse-: whether to reuse variables (if reuse set to true, scope
must be given).
Returns:
logits/endpoints pair
Raises:
ValueError: On invalid arguments
"""
if conv_defs is None:
conv_defs = V3_LARGE
  if 'multiplier' in kwargs:
    raise ValueError('mobilenet_v3 doesn\'t support the generic multiplier '
                     'parameter; use "depth_multiplier" instead.')
if use_groupnorm:
conv_defs = copy.deepcopy(conv_defs)
conv_defs['defaults'] = dict(DEFAULTS_GROUP_NORM)
conv_defs['defaults'].update({
(slim.group_norm,): {
'groups': kwargs.pop('groups', 8)
}
})
if finegrain_classification_mode:
conv_defs = copy.deepcopy(conv_defs)
conv_defs['spec'][-1] = conv_defs['spec'][-1]._replace(
multiplier_func=lambda params, multiplier: params)
depth_args = {}
with slim.arg_scope((lib.depth_multiplier,), **depth_args):
return lib.mobilenet(
input_tensor,
num_classes=num_classes,
conv_defs=conv_defs,
scope=scope,
multiplier=depth_multiplier,
**kwargs)
mobilenet.default_image_size = 224
training_scope = lib.training_scope
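# Illustrative usage (a sketch added for clarity, not part of the original
# file):
#
#   images = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   logits, end_points = mobilenet(images)  # conv_defs defaults to V3_LARGE
#
# For a trainable network, wrap the call in training_scope instead:
#
#   with slim.arg_scope(training_scope()):
#     logits, end_points = mobilenet(images)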
@slim.add_arg_scope
def mobilenet_base(input_tensor, depth_multiplier=1.0, **kwargs):
"""Creates base of the mobilenet (no pooling and no logits) ."""
return mobilenet(
input_tensor, depth_multiplier=depth_multiplier, base_only=True, **kwargs)
def wrapped_partial(func, new_defaults=None,
**kwargs):
"""Partial function with new default parameters and updated docstring."""
if not new_defaults:
new_defaults = {}
def func_wrapper(*f_args, **f_kwargs):
new_kwargs = dict(new_defaults)
new_kwargs.update(f_kwargs)
return func(*f_args, **new_kwargs)
functools.update_wrapper(func_wrapper, func)
partial_func = functools.partial(func_wrapper, **kwargs)
functools.update_wrapper(partial_func, func)
return partial_func
large = wrapped_partial(mobilenet, conv_defs=V3_LARGE)
small = wrapped_partial(mobilenet, conv_defs=V3_SMALL)
edge_tpu = wrapped_partial(mobilenet,
new_defaults={'scope': 'MobilenetEdgeTPU'},
conv_defs=V3_EDGETPU)
edge_tpu_075 = wrapped_partial(
mobilenet,
new_defaults={'scope': 'MobilenetEdgeTPU'},
conv_defs=V3_EDGETPU,
depth_multiplier=0.75,
finegrain_classification_mode=True)
# Minimalistic models that do not have Squeeze Excite blocks, hard swish,
# or 5x5 depthwise convolutions. This makes them very friendly for a wide
# range of hardware.
large_minimalistic = wrapped_partial(mobilenet, conv_defs=V3_LARGE_MINIMALISTIC)
small_minimalistic = wrapped_partial(mobilenet, conv_defs=V3_SMALL_MINIMALISTIC)
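# Illustrative usage of the wrappers above (a sketch added for clarity, not
# part of the original file): `large` and `small` are `mobilenet` with
# conv_defs pre-bound, so
#
#   images = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   logits, end_points = small(images, num_classes=1001)
#
# is equivalent to mobilenet(images, num_classes=1001, conv_defs=V3_SMALL).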
def _reduce_consecutive_layers(conv_defs, start_id, end_id, multiplier=0.5):
"""Reduce the outputs of consecutive layers with multiplier.
Args:
conv_defs: Mobilenet conv_defs.
start_id: 0-based index of the starting conv_def to be reduced.
end_id: 0-based index of the last conv_def to be reduced.
multiplier: The multiplier by which to reduce the conv_defs.
Returns:
Mobilenet conv_defs where the output sizes from layers [start_id, end_id],
inclusive, are reduced by multiplier.
Raises:
ValueError if any layer to be reduced does not have the 'num_outputs'
attribute.
"""
defs = copy.deepcopy(conv_defs)
for d in defs['spec'][start_id:end_id+1]:
d.params.update({
        'num_outputs': int(np.round(d.params['num_outputs'] * multiplier))
})
return defs
V3_LARGE_DETECTION = _reduce_consecutive_layers(V3_LARGE, 13, 16)
V3_SMALL_DETECTION = _reduce_consecutive_layers(V3_SMALL, 9, 12)
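# Worked example (added for clarity): V3_LARGE_DETECTION halves the
# `num_outputs` of V3_LARGE spec entries 13..16 (multiplier=0.5 by default),
# so the three 160-channel blocks become 80 channels and the 960-channel
# 1x1 conv becomes 480, making the late layers cheaper for detection use.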
__all__ = ['training_scope', 'mobilenet', 'V3_LARGE', 'V3_SMALL', 'large',
           'small', 'V3_LARGE_DETECTION', 'V3_SMALL_DETECTION']


# ==== slim/nets/mobilenet/mobilenet_v2.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import functools
import tensorflow.compat.v1 as tf
import tf_slim as slim
from nets.mobilenet import conv_blocks as ops
from nets.mobilenet import mobilenet as lib
op = lib.op
expand_input = ops.expand_input_by_factor
# pyformat: disable
# Architecture: https://arxiv.org/abs/1801.04381
V2_DEF = dict(
defaults={
# Note: these parameters of batch norm affect the architecture
# that's why they are here and not in training_scope.
(slim.batch_norm,): {'center': True, 'scale': True},
(slim.conv2d, slim.fully_connected, slim.separable_conv2d): {
'normalizer_fn': slim.batch_norm, 'activation_fn': tf.nn.relu6
},
(ops.expanded_conv,): {
'expansion_size': expand_input(6),
'split_expansion': 1,
'normalizer_fn': slim.batch_norm,
'residual': True
},
(slim.conv2d, slim.separable_conv2d): {'padding': 'SAME'}
},
spec=[
op(slim.conv2d, stride=2, num_outputs=32, kernel_size=[3, 3]),
op(ops.expanded_conv,
expansion_size=expand_input(1, divisible_by=1),
num_outputs=16),
op(ops.expanded_conv, stride=2, num_outputs=24),
op(ops.expanded_conv, stride=1, num_outputs=24),
op(ops.expanded_conv, stride=2, num_outputs=32),
op(ops.expanded_conv, stride=1, num_outputs=32),
op(ops.expanded_conv, stride=1, num_outputs=32),
op(ops.expanded_conv, stride=2, num_outputs=64),
op(ops.expanded_conv, stride=1, num_outputs=64),
op(ops.expanded_conv, stride=1, num_outputs=64),
op(ops.expanded_conv, stride=1, num_outputs=64),
op(ops.expanded_conv, stride=1, num_outputs=96),
op(ops.expanded_conv, stride=1, num_outputs=96),
op(ops.expanded_conv, stride=1, num_outputs=96),
op(ops.expanded_conv, stride=2, num_outputs=160),
op(ops.expanded_conv, stride=1, num_outputs=160),
op(ops.expanded_conv, stride=1, num_outputs=160),
op(ops.expanded_conv, stride=1, num_outputs=320),
op(slim.conv2d, stride=1, kernel_size=[1, 1], num_outputs=1280)
],
)
# pyformat: enable
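# Reading the spec above (note added for clarity): each op(...) entry names a
# layer constructor plus its arguments; expanded_conv entries without an
# explicit expansion_size inherit expand_input(6) from `defaults`, so e.g.
# op(ops.expanded_conv, stride=2, num_outputs=24) is an inverted-residual
# block with a 6x channel expansion and stride 2.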
# Mobilenet v2 Definition with group normalization.
V2_DEF_GROUP_NORM = copy.deepcopy(V2_DEF)
V2_DEF_GROUP_NORM['defaults'] = {
(slim.conv2d, slim.fully_connected, slim.separable_conv2d): {
'normalizer_fn': slim.group_norm, # pylint: disable=C0330
'activation_fn': tf.nn.relu6, # pylint: disable=C0330
}, # pylint: disable=C0330
(ops.expanded_conv,): {
'expansion_size': ops.expand_input_by_factor(6),
'split_expansion': 1,
'normalizer_fn': slim.group_norm,
'residual': True
},
(slim.conv2d, slim.separable_conv2d): {
'padding': 'SAME'
}
}
@slim.add_arg_scope
def mobilenet(input_tensor,
num_classes=1001,
depth_multiplier=1.0,
scope='MobilenetV2',
conv_defs=None,
finegrain_classification_mode=False,
min_depth=None,
divisible_by=None,
activation_fn=None,
**kwargs):
"""Creates mobilenet V2 network.
  Inference mode is created by default. To create a training network, use
  training_scope below.
with slim.arg_scope(mobilenet_v2.training_scope()):
logits, endpoints = mobilenet_v2.mobilenet(input_tensor)
Args:
input_tensor: The input tensor
num_classes: number of classes
depth_multiplier: The multiplier applied to scale number of
channels in each layer.
scope: Scope of the operator
conv_defs: Allows to override default conv def.
finegrain_classification_mode: When set to True, the model
will keep the last layer large even for small multipliers. Following
https://arxiv.org/abs/1801.04381
suggests that it improves performance for ImageNet-type of problems.
*Note* ignored if final_endpoint makes the builder exit earlier.
min_depth: If provided, will ensure that all layers will have that
many channels after application of depth multiplier.
divisible_by: If provided will ensure that all layers # channels
will be divisible by this number.
activation_fn: Activation function to use, defaults to tf.nn.relu6 if not
specified.
    **kwargs: passed directly to mobilenet.mobilenet:
      prediction_fn: what prediction function to use.
      reuse: whether to reuse variables (if reuse is set to True, scope must
        be given).
Returns:
logits/endpoints pair
Raises:
ValueError: On invalid arguments
"""
if conv_defs is None:
conv_defs = V2_DEF
  if 'multiplier' in kwargs:
    raise ValueError('mobilenetv2 doesn\'t support the generic multiplier '
                     'parameter; use "depth_multiplier" instead.')
if finegrain_classification_mode:
conv_defs = copy.deepcopy(conv_defs)
if depth_multiplier < 1:
conv_defs['spec'][-1].params['num_outputs'] /= depth_multiplier
if activation_fn:
conv_defs = copy.deepcopy(conv_defs)
defaults = conv_defs['defaults']
conv_defaults = (
defaults[(slim.conv2d, slim.fully_connected, slim.separable_conv2d)])
conv_defaults['activation_fn'] = activation_fn
depth_args = {}
# NB: do not set depth_args unless they are provided to avoid overriding
# whatever default depth_multiplier might have thanks to arg_scope.
if min_depth is not None:
depth_args['min_depth'] = min_depth
if divisible_by is not None:
depth_args['divisible_by'] = divisible_by
with slim.arg_scope((lib.depth_multiplier,), **depth_args):
return lib.mobilenet(
input_tensor,
num_classes=num_classes,
conv_defs=conv_defs,
scope=scope,
multiplier=depth_multiplier,
**kwargs)
mobilenet.default_image_size = 224
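# Illustrative usage (a sketch added for clarity, not part of the original
# file):
#
#   images = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   logits, end_points = mobilenet(images, num_classes=1001)
#   # end_points['Predictions'] holds the softmax output; passing
#   # num_classes=None returns the pooled [batch, 1, 1, 1280] features
#   # instead of logits.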
def wrapped_partial(func, *args, **kwargs):
partial_func = functools.partial(func, *args, **kwargs)
functools.update_wrapper(partial_func, func)
return partial_func
# Wrappers for mobilenet v2 with depth multipliers. Note that
# 'finegrain_classification_mode' is set to True, which means the embedding
# layer will not be shrunk when given a depth multiplier < 1.0.
mobilenet_v2_140 = wrapped_partial(mobilenet, depth_multiplier=1.4)
mobilenet_v2_050 = wrapped_partial(mobilenet, depth_multiplier=0.50,
finegrain_classification_mode=True)
mobilenet_v2_035 = wrapped_partial(mobilenet, depth_multiplier=0.35,
finegrain_classification_mode=True)
@slim.add_arg_scope
def mobilenet_base(input_tensor, depth_multiplier=1.0, **kwargs):
"""Creates base of the mobilenet (no pooling and no logits) ."""
return mobilenet(input_tensor,
depth_multiplier=depth_multiplier,
base_only=True, **kwargs)
@slim.add_arg_scope
def mobilenet_base_group_norm(input_tensor, depth_multiplier=1.0, **kwargs):
"""Creates base of the mobilenet (no pooling and no logits) ."""
kwargs['conv_defs'] = V2_DEF_GROUP_NORM
kwargs['conv_defs']['defaults'].update({
(slim.group_norm,): {
'groups': kwargs.pop('groups', 8)
}
})
return mobilenet(
input_tensor, depth_multiplier=depth_multiplier, base_only=True, **kwargs)
def training_scope(**kwargs):
"""Defines MobilenetV2 training scope.
Usage:
with slim.arg_scope(mobilenet_v2.training_scope()):
logits, endpoints = mobilenet_v2.mobilenet(input_tensor)
Args:
**kwargs: Passed to mobilenet.training_scope. The following parameters
are supported:
weight_decay- The weight decay to use for regularizing the model.
stddev- Standard deviation for initialization, if negative uses xavier.
dropout_keep_prob- dropout keep probability
bn_decay- decay for the batch norm moving averages.
Returns:
An `arg_scope` to use for the mobilenet v2 model.
"""
return lib.training_scope(**kwargs)
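# Illustrative training setup (a sketch added for clarity, not part of the
# original file):
#
#   images = tf.placeholder(tf.float32, [None, 224, 224, 3])
#   with slim.arg_scope(training_scope(weight_decay=4e-5)):
#     logits, end_points = mobilenet(images, num_classes=1001)
#   # Dropout and batch norm now run in training mode, and conv weights
#   # carry an L2 regularizer.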
__all__ = ['training_scope', 'mobilenet_base', 'mobilenet', 'V2_DEF']


# ==== slim/nets/nasnet/nasnet.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.contrib import training as contrib_training
from nets.nasnet import nasnet_utils
arg_scope = slim.arg_scope
# Notes for training NASNet Cifar Model
# -------------------------------------
# batch_size: 32
# learning rate: 0.025
# cosine (single period) learning rate decay
# auxiliary head loss weighting: 0.4
# clip global norm of all gradients by 5
def cifar_config():
return contrib_training.HParams(
stem_multiplier=3.0,
drop_path_keep_prob=0.6,
num_cells=18,
use_aux_head=1,
num_conv_filters=32,
dense_dropout_keep_prob=1.0,
filter_scaling_rate=2.0,
num_reduction_layers=2,
data_format='NHWC',
skip_reduction_layer_input=0,
# 600 epochs with a batch size of 32
      # This is used for the drop path probabilities since the drop
      # probability needs to increase over the course of training.
total_training_steps=937500,
use_bounded_activation=False,
)
# Notes for training large NASNet model on ImageNet
# -------------------------------------
# batch size (per replica): 16
# learning rate: 0.015 * 100
# learning rate decay factor: 0.97
# num epochs per decay: 2.4
# sync sgd with 100 replicas
# auxiliary head loss weighting: 0.4
# label smoothing: 0.1
# clip global norm of all gradients by 10
def large_imagenet_config():
return contrib_training.HParams(
stem_multiplier=3.0,
dense_dropout_keep_prob=0.5,
num_cells=18,
filter_scaling_rate=2.0,
num_conv_filters=168,
drop_path_keep_prob=0.7,
use_aux_head=1,
num_reduction_layers=2,
data_format='NHWC',
skip_reduction_layer_input=1,
total_training_steps=250000,
use_bounded_activation=False,
)
# Notes for training the mobile NASNet ImageNet model
# -------------------------------------
# batch size (per replica): 32
# learning rate: 0.04 * 50
# learning rate scaling factor: 0.97
# num epochs per decay: 2.4
# sync sgd with 50 replicas
# auxiliary head weighting: 0.4
# label smoothing: 0.1
# clip global norm of all gradients by 10
def mobile_imagenet_config():
return contrib_training.HParams(
stem_multiplier=1.0,
dense_dropout_keep_prob=0.5,
num_cells=12,
filter_scaling_rate=2.0,
drop_path_keep_prob=1.0,
num_conv_filters=44,
use_aux_head=1,
num_reduction_layers=2,
data_format='NHWC',
skip_reduction_layer_input=0,
total_training_steps=250000,
use_bounded_activation=False,
)
def _update_hparams(hparams, is_training):
"""Update hparams for given is_training option."""
if not is_training:
hparams.set_hparam('drop_path_keep_prob', 1.0)
def nasnet_cifar_arg_scope(weight_decay=5e-4,
batch_norm_decay=0.9,
batch_norm_epsilon=1e-5):
"""Defines the default arg scope for the NASNet-A Cifar model.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
Returns:
An `arg_scope` to use for the NASNet Cifar Model.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
'scale': True,
'fused': True,
}
weights_regularizer = slim.l2_regularizer(weight_decay)
weights_initializer = slim.variance_scaling_initializer(mode='FAN_OUT')
with arg_scope([slim.fully_connected, slim.conv2d, slim.separable_conv2d],
weights_regularizer=weights_regularizer,
weights_initializer=weights_initializer):
with arg_scope([slim.fully_connected],
activation_fn=None, scope='FC'):
with arg_scope([slim.conv2d, slim.separable_conv2d],
activation_fn=None, biases_initializer=None):
with arg_scope([slim.batch_norm], **batch_norm_params) as sc:
return sc
def nasnet_mobile_arg_scope(weight_decay=4e-5,
batch_norm_decay=0.9997,
batch_norm_epsilon=1e-3):
"""Defines the default arg scope for the NASNet-A Mobile ImageNet model.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
Returns:
An `arg_scope` to use for the NASNet Mobile Model.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
'scale': True,
'fused': True,
}
weights_regularizer = slim.l2_regularizer(weight_decay)
weights_initializer = slim.variance_scaling_initializer(mode='FAN_OUT')
with arg_scope([slim.fully_connected, slim.conv2d, slim.separable_conv2d],
weights_regularizer=weights_regularizer,
weights_initializer=weights_initializer):
with arg_scope([slim.fully_connected],
activation_fn=None, scope='FC'):
with arg_scope([slim.conv2d, slim.separable_conv2d],
activation_fn=None, biases_initializer=None):
with arg_scope([slim.batch_norm], **batch_norm_params) as sc:
return sc
def nasnet_large_arg_scope(weight_decay=5e-5,
batch_norm_decay=0.9997,
batch_norm_epsilon=1e-3):
"""Defines the default arg scope for the NASNet-A Large ImageNet model.
Args:
weight_decay: The weight decay to use for regularizing the model.
batch_norm_decay: Decay for batch norm moving average.
batch_norm_epsilon: Small float added to variance to avoid dividing by zero
in batch norm.
Returns:
An `arg_scope` to use for the NASNet Large Model.
"""
batch_norm_params = {
# Decay for the moving averages.
'decay': batch_norm_decay,
# epsilon to prevent 0s in variance.
'epsilon': batch_norm_epsilon,
'scale': True,
'fused': True,
}
weights_regularizer = slim.l2_regularizer(weight_decay)
weights_initializer = slim.variance_scaling_initializer(mode='FAN_OUT')
with arg_scope([slim.fully_connected, slim.conv2d, slim.separable_conv2d],
weights_regularizer=weights_regularizer,
weights_initializer=weights_initializer):
with arg_scope([slim.fully_connected],
activation_fn=None, scope='FC'):
with arg_scope([slim.conv2d, slim.separable_conv2d],
activation_fn=None, biases_initializer=None):
with arg_scope([slim.batch_norm], **batch_norm_params) as sc:
return sc
def _build_aux_head(net, end_points, num_classes, hparams, scope):
"""Auxiliary head used for all models across all datasets."""
activation_fn = tf.nn.relu6 if hparams.use_bounded_activation else tf.nn.relu
with tf.variable_scope(scope):
aux_logits = tf.identity(net)
with tf.variable_scope('aux_logits'):
aux_logits = slim.avg_pool2d(
aux_logits, [5, 5], stride=3, padding='VALID')
aux_logits = slim.conv2d(aux_logits, 128, [1, 1], scope='proj')
aux_logits = slim.batch_norm(aux_logits, scope='aux_bn0')
aux_logits = activation_fn(aux_logits)
# Shape of feature map before the final layer.
shape = aux_logits.shape
if hparams.data_format == 'NHWC':
shape = shape[1:3]
else:
shape = shape[2:4]
aux_logits = slim.conv2d(aux_logits, 768, shape, padding='VALID')
aux_logits = slim.batch_norm(aux_logits, scope='aux_bn1')
aux_logits = activation_fn(aux_logits)
aux_logits = slim.flatten(aux_logits)
aux_logits = slim.fully_connected(aux_logits, num_classes)
end_points['AuxLogits'] = aux_logits
def _imagenet_stem(inputs, hparams, stem_cell, current_step=None):
"""Stem used for models trained on ImageNet."""
num_stem_cells = 2
# 149 x 149 x 32
num_stem_filters = int(32 * hparams.stem_multiplier)
net = slim.conv2d(
inputs, num_stem_filters, [3, 3], stride=2, scope='conv0',
padding='VALID')
net = slim.batch_norm(net, scope='conv0_bn')
# Run the reduction cells
cell_outputs = [None, net]
filter_scaling = 1.0 / (hparams.filter_scaling_rate**num_stem_cells)
for cell_num in range(num_stem_cells):
net = stem_cell(
net,
scope='cell_stem_{}'.format(cell_num),
filter_scaling=filter_scaling,
stride=2,
prev_layer=cell_outputs[-2],
cell_num=cell_num,
current_step=current_step)
cell_outputs.append(net)
filter_scaling *= hparams.filter_scaling_rate
return net, cell_outputs
def _cifar_stem(inputs, hparams):
"""Stem used for models trained on Cifar."""
num_stem_filters = int(hparams.num_conv_filters * hparams.stem_multiplier)
net = slim.conv2d(
inputs,
num_stem_filters,
3,
scope='l1_stem_3x3')
net = slim.batch_norm(net, scope='l1_stem_bn')
return net, [None, net]
def build_nasnet_cifar(images, num_classes,
is_training=True,
config=None,
current_step=None):
"""Build NASNet model for the Cifar Dataset."""
hparams = cifar_config() if config is None else copy.deepcopy(config)
_update_hparams(hparams, is_training)
if tf.test.is_gpu_available() and hparams.data_format == 'NHWC':
tf.logging.info(
'A GPU is available on the machine, consider using NCHW '
'data format for increased speed on GPU.')
if hparams.data_format == 'NCHW':
images = tf.transpose(a=images, perm=[0, 3, 1, 2])
# Calculate the total number of cells in the network
# Add 2 for the reduction cells
total_num_cells = hparams.num_cells + 2
normal_cell = nasnet_utils.NasNetANormalCell(
hparams.num_conv_filters, hparams.drop_path_keep_prob,
total_num_cells, hparams.total_training_steps,
hparams.use_bounded_activation)
reduction_cell = nasnet_utils.NasNetAReductionCell(
hparams.num_conv_filters, hparams.drop_path_keep_prob,
total_num_cells, hparams.total_training_steps,
hparams.use_bounded_activation)
with arg_scope([slim.dropout, nasnet_utils.drop_path, slim.batch_norm],
is_training=is_training):
with arg_scope([slim.avg_pool2d,
slim.max_pool2d,
slim.conv2d,
slim.batch_norm,
slim.separable_conv2d,
nasnet_utils.factorized_reduction,
nasnet_utils.global_avg_pool,
nasnet_utils.get_channel_index,
nasnet_utils.get_channel_dim],
data_format=hparams.data_format):
return _build_nasnet_base(images,
normal_cell=normal_cell,
reduction_cell=reduction_cell,
num_classes=num_classes,
hparams=hparams,
is_training=is_training,
stem_type='cifar',
current_step=current_step)
build_nasnet_cifar.default_image_size = 32
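# Illustrative usage (a sketch added for clarity, not part of the original
# file):
#
#   images = tf.placeholder(tf.float32, [None, 32, 32, 3])
#   with slim.arg_scope(nasnet_cifar_arg_scope()):
#     logits, end_points = build_nasnet_cifar(images, num_classes=10,
#                                             is_training=False)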
def build_nasnet_mobile(images, num_classes,
is_training=True,
final_endpoint=None,
config=None,
current_step=None):
"""Build NASNet Mobile model for the ImageNet Dataset."""
hparams = (mobile_imagenet_config() if config is None
else copy.deepcopy(config))
_update_hparams(hparams, is_training)
if tf.test.is_gpu_available() and hparams.data_format == 'NHWC':
tf.logging.info(
'A GPU is available on the machine, consider using NCHW '
'data format for increased speed on GPU.')
if hparams.data_format == 'NCHW':
images = tf.transpose(a=images, perm=[0, 3, 1, 2])
# Calculate the total number of cells in the network
# Add 2 for the reduction cells
total_num_cells = hparams.num_cells + 2
# If ImageNet, then add an additional two for the stem cells
total_num_cells += 2
normal_cell = nasnet_utils.NasNetANormalCell(
hparams.num_conv_filters, hparams.drop_path_keep_prob,
total_num_cells, hparams.total_training_steps,
hparams.use_bounded_activation)
reduction_cell = nasnet_utils.NasNetAReductionCell(
hparams.num_conv_filters, hparams.drop_path_keep_prob,
total_num_cells, hparams.total_training_steps,
hparams.use_bounded_activation)
with arg_scope([slim.dropout, nasnet_utils.drop_path, slim.batch_norm],
is_training=is_training):
with arg_scope([slim.avg_pool2d,
slim.max_pool2d,
slim.conv2d,
slim.batch_norm,
slim.separable_conv2d,
nasnet_utils.factorized_reduction,
nasnet_utils.global_avg_pool,
nasnet_utils.get_channel_index,
nasnet_utils.get_channel_dim],
data_format=hparams.data_format):
return _build_nasnet_base(images,
normal_cell=normal_cell,
reduction_cell=reduction_cell,
num_classes=num_classes,
hparams=hparams,
is_training=is_training,
stem_type='imagenet',
final_endpoint=final_endpoint,
current_step=current_step)
build_nasnet_mobile.default_image_size = 224
def build_nasnet_large(images, num_classes,
is_training=True,
final_endpoint=None,
config=None,
current_step=None):
"""Build NASNet Large model for the ImageNet Dataset."""
hparams = (large_imagenet_config() if config is None
else copy.deepcopy(config))
_update_hparams(hparams, is_training)
if tf.test.is_gpu_available() and hparams.data_format == 'NHWC':
tf.logging.info(
'A GPU is available on the machine, consider using NCHW '
'data format for increased speed on GPU.')
if hparams.data_format == 'NCHW':
images = tf.transpose(a=images, perm=[0, 3, 1, 2])
# Calculate the total number of cells in the network
# Add 2 for the reduction cells
total_num_cells = hparams.num_cells + 2
# If ImageNet, then add an additional two for the stem cells
total_num_cells += 2
normal_cell = nasnet_utils.NasNetANormalCell(
hparams.num_conv_filters, hparams.drop_path_keep_prob,
total_num_cells, hparams.total_training_steps,
hparams.use_bounded_activation)
reduction_cell = nasnet_utils.NasNetAReductionCell(
hparams.num_conv_filters, hparams.drop_path_keep_prob,
total_num_cells, hparams.total_training_steps,
hparams.use_bounded_activation)
with arg_scope([slim.dropout, nasnet_utils.drop_path, slim.batch_norm],
is_training=is_training):
with arg_scope([slim.avg_pool2d,
slim.max_pool2d,
slim.conv2d,
slim.batch_norm,
slim.separable_conv2d,
nasnet_utils.factorized_reduction,
nasnet_utils.global_avg_pool,
nasnet_utils.get_channel_index,
nasnet_utils.get_channel_dim],
data_format=hparams.data_format):
return _build_nasnet_base(images,
normal_cell=normal_cell,
reduction_cell=reduction_cell,
num_classes=num_classes,
hparams=hparams,
is_training=is_training,
stem_type='imagenet',
final_endpoint=final_endpoint,
current_step=current_step)
build_nasnet_large.default_image_size = 331
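# Illustrative usage (a sketch added for clarity, not part of the original
# file): the ImageNet builders follow the same pattern; final_endpoint lets
# you truncate the network and read intermediate activations.
#
#   images = tf.placeholder(tf.float32, [None, 331, 331, 3])
#   with slim.arg_scope(nasnet_large_arg_scope()):
#     net, end_points = build_nasnet_large(
#         images, num_classes=1001, is_training=False,
#         final_endpoint='Cell_11')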
def _build_nasnet_base(images,
normal_cell,
reduction_cell,
num_classes,
hparams,
is_training,
stem_type,
final_endpoint=None,
current_step=None):
"""Constructs a NASNet image model."""
end_points = {}
def add_and_check_endpoint(endpoint_name, net):
end_points[endpoint_name] = net
return final_endpoint and (endpoint_name == final_endpoint)
# Find where to place the reduction cells or stride normal cells
reduction_indices = nasnet_utils.calc_reduction_layers(
hparams.num_cells, hparams.num_reduction_layers)
stem_cell = reduction_cell
if stem_type == 'imagenet':
stem = lambda: _imagenet_stem(images, hparams, stem_cell)
elif stem_type == 'cifar':
stem = lambda: _cifar_stem(images, hparams)
else:
raise ValueError('Unknown stem_type: ', stem_type)
net, cell_outputs = stem()
if add_and_check_endpoint('Stem', net): return net, end_points
# Setup for building in the auxiliary head.
aux_head_cell_idxes = []
if len(reduction_indices) >= 2:
aux_head_cell_idxes.append(reduction_indices[1] - 1)
# Run the cells
filter_scaling = 1.0
# true_cell_num accounts for the stem cells
true_cell_num = 2 if stem_type == 'imagenet' else 0
activation_fn = tf.nn.relu6 if hparams.use_bounded_activation else tf.nn.relu
for cell_num in range(hparams.num_cells):
stride = 1
if hparams.skip_reduction_layer_input:
prev_layer = cell_outputs[-2]
if cell_num in reduction_indices:
filter_scaling *= hparams.filter_scaling_rate
net = reduction_cell(
net,
scope='reduction_cell_{}'.format(reduction_indices.index(cell_num)),
filter_scaling=filter_scaling,
stride=2,
prev_layer=cell_outputs[-2],
cell_num=true_cell_num,
current_step=current_step)
if add_and_check_endpoint(
'Reduction_Cell_{}'.format(reduction_indices.index(cell_num)), net):
return net, end_points
true_cell_num += 1
cell_outputs.append(net)
if not hparams.skip_reduction_layer_input:
prev_layer = cell_outputs[-2]
net = normal_cell(
net,
scope='cell_{}'.format(cell_num),
filter_scaling=filter_scaling,
stride=stride,
prev_layer=prev_layer,
cell_num=true_cell_num,
current_step=current_step)
if add_and_check_endpoint('Cell_{}'.format(cell_num), net):
return net, end_points
true_cell_num += 1
if (hparams.use_aux_head and cell_num in aux_head_cell_idxes and
num_classes and is_training):
aux_net = activation_fn(net)
_build_aux_head(aux_net, end_points, num_classes, hparams,
scope='aux_{}'.format(cell_num))
cell_outputs.append(net)
# Final softmax layer
with tf.variable_scope('final_layer'):
net = activation_fn(net)
net = nasnet_utils.global_avg_pool(net)
if add_and_check_endpoint('global_pool', net) or not num_classes:
return net, end_points
net = slim.dropout(net, hparams.dense_dropout_keep_prob, scope='dropout')
logits = slim.fully_connected(net, num_classes)
if add_and_check_endpoint('Logits', logits):
return net, end_points
predictions = tf.nn.softmax(logits, name='predictions')
if add_and_check_endpoint('Predictions', predictions):
return net, end_points
  return logits, end_points


# ==== slim/nets/nasnet/nasnet_utils.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
import tf_slim as slim
arg_scope = slim.arg_scope
DATA_FORMAT_NCHW = 'NCHW'
DATA_FORMAT_NHWC = 'NHWC'
INVALID = 'null'
# The cap for tf.clip_by_value; the activation distribution suggests that
# the majority of activation values lie in the range [-6, 6].
def calc_reduction_layers(num_cells, num_reduction_layers):
"""Figure out what layers should have reductions."""
reduction_layers = []
for pool_num in range(1, num_reduction_layers + 1):
layer_num = (float(pool_num) / (num_reduction_layers + 1)) * num_cells
layer_num = int(layer_num)
reduction_layers.append(layer_num)
return reduction_layers
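# Worked example (added for clarity): calc_reduction_layers(18, 2) returns
# [6, 12] -- with 18 cells and 2 reduction layers, reductions are placed one
# third and two thirds of the way through the network.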
@slim.add_arg_scope
def get_channel_index(data_format=INVALID):
assert data_format != INVALID
axis = 3 if data_format == 'NHWC' else 1
return axis
@slim.add_arg_scope
def get_channel_dim(shape, data_format=INVALID):
assert data_format != INVALID
assert len(shape) == 4
if data_format == 'NHWC':
return int(shape[3])
elif data_format == 'NCHW':
return int(shape[1])
else:
raise ValueError('Not a valid data_format', data_format)
@slim.add_arg_scope
def global_avg_pool(x, data_format=INVALID):
"""Average pool away the height and width spatial dimensions of x."""
assert data_format != INVALID
assert data_format in ['NHWC', 'NCHW']
assert x.shape.ndims == 4
if data_format == 'NHWC':
return tf.reduce_mean(input_tensor=x, axis=[1, 2])
else:
return tf.reduce_mean(input_tensor=x, axis=[2, 3])
@slim.add_arg_scope
def factorized_reduction(net, output_filters, stride, data_format=INVALID):
"""Reduces the shape of net without information loss due to striding."""
assert data_format != INVALID
if stride == 1:
net = slim.conv2d(net, output_filters, 1, scope='path_conv')
net = slim.batch_norm(net, scope='path_bn')
return net
if data_format == 'NHWC':
stride_spec = [1, stride, stride, 1]
else:
stride_spec = [1, 1, stride, stride]
# Skip path 1
path1 = tf.nn.avg_pool2d(
net,
ksize=[1, 1, 1, 1],
strides=stride_spec,
padding='VALID',
data_format=data_format)
path1 = slim.conv2d(path1, int(output_filters / 2), 1, scope='path1_conv')
# Skip path 2
# First pad with 0's on the right and bottom, then shift the filter to
# include those 0's that were added.
if data_format == 'NHWC':
pad_arr = [[0, 0], [0, 1], [0, 1], [0, 0]]
path2 = tf.pad(tensor=net, paddings=pad_arr)[:, 1:, 1:, :]
concat_axis = 3
else:
pad_arr = [[0, 0], [0, 0], [0, 1], [0, 1]]
path2 = tf.pad(tensor=net, paddings=pad_arr)[:, :, 1:, 1:]
concat_axis = 1
path2 = tf.nn.avg_pool2d(
path2,
ksize=[1, 1, 1, 1],
strides=stride_spec,
padding='VALID',
data_format=data_format)
# If odd number of filters, add an additional one to the second path.
final_filter_size = int(output_filters / 2) + int(output_filters % 2)
path2 = slim.conv2d(path2, final_filter_size, 1, scope='path2_conv')
# Concat and apply BN
final_path = tf.concat(values=[path1, path2], axis=concat_axis)
final_path = slim.batch_norm(final_path, scope='final_path_bn')
return final_path
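# Design note (added for clarity): the second path is zero-padded on the
# bottom/right and shifted by one pixel, so its stride-2 pooling samples the
# positions the first path skips; concatenating the two halves preserves
# information that a single strided 1x1 conv would discard.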
@slim.add_arg_scope
def drop_path(net, keep_prob, is_training=True):
"""Drops out a whole example hiddenstate with the specified probability."""
if is_training:
batch_size = tf.shape(input=net)[0]
noise_shape = [batch_size, 1, 1, 1]
random_tensor = keep_prob
random_tensor += tf.random.uniform(noise_shape, dtype=tf.float32)
binary_tensor = tf.cast(tf.floor(random_tensor), net.dtype)
keep_prob_inv = tf.cast(1.0 / keep_prob, net.dtype)
net = net * keep_prob_inv * binary_tensor
return net
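# Math sketch (added for clarity): with keep probability p,
# floor(p + U[0, 1)) is 1 with probability p and 0 otherwise, so each
# example's activations are either zeroed or rescaled by 1/p -- the
# rescaling keeps the expected value of `net` unchanged, as in standard
# inverted dropout.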
def _operation_to_filter_shape(operation):
splitted_operation = operation.split('x')
filter_shape = int(splitted_operation[0][-1])
assert filter_shape == int(
splitted_operation[1][0]), 'Rectangular filters not supported.'
return filter_shape
def _operation_to_num_layers(operation):
splitted_operation = operation.split('_')
if 'x' in splitted_operation[-1]:
return 1
return int(splitted_operation[-1])
def _operation_to_info(operation):
"""Takes in operation name and returns meta information.
An example would be 'separable_3x3_4' -> (3, 4).
Args:
operation: String that corresponds to convolution operation.
Returns:
Tuple of (filter shape, num layers).
"""
num_layers = _operation_to_num_layers(operation)
filter_shape = _operation_to_filter_shape(operation)
return num_layers, filter_shape
def _stacked_separable_conv(net, stride, operation, filter_size,
use_bounded_activation):
"""Takes in an operations and parses it to the correct sep operation."""
num_layers, kernel_size = _operation_to_info(operation)
activation_fn = tf.nn.relu6 if use_bounded_activation else tf.nn.relu
for layer_num in range(num_layers - 1):
net = activation_fn(net)
net = slim.separable_conv2d(
net,
filter_size,
kernel_size,
depth_multiplier=1,
scope='separable_{0}x{0}_{1}'.format(kernel_size, layer_num + 1),
stride=stride)
net = slim.batch_norm(
net, scope='bn_sep_{0}x{0}_{1}'.format(kernel_size, layer_num + 1))
stride = 1
net = activation_fn(net)
net = slim.separable_conv2d(
net,
filter_size,
kernel_size,
depth_multiplier=1,
scope='separable_{0}x{0}_{1}'.format(kernel_size, num_layers),
stride=stride)
net = slim.batch_norm(
net, scope='bn_sep_{0}x{0}_{1}'.format(kernel_size, num_layers))
return net
def _operation_to_pooling_type(operation):
"""Takes in the operation string and returns the pooling type."""
splitted_operation = operation.split('_')
return splitted_operation[0]
def _operation_to_pooling_shape(operation):
"""Takes in the operation string and returns the pooling kernel shape."""
splitted_operation = operation.split('_')
shape = splitted_operation[-1]
assert 'x' in shape
filter_height, filter_width = shape.split('x')
assert filter_height == filter_width
return int(filter_height)
def _operation_to_pooling_info(operation):
"""Parses the pooling operation string to return its type and shape."""
pooling_type = _operation_to_pooling_type(operation)
pooling_shape = _operation_to_pooling_shape(operation)
return pooling_type, pooling_shape
def _pooling(net, stride, operation, use_bounded_activation):
"""Parses operation and performs the correct pooling operation on net."""
padding = 'SAME'
pooling_type, pooling_shape = _operation_to_pooling_info(operation)
if use_bounded_activation:
net = tf.nn.relu6(net)
if pooling_type == 'avg':
net = slim.avg_pool2d(net, pooling_shape, stride=stride, padding=padding)
elif pooling_type == 'max':
net = slim.max_pool2d(net, pooling_shape, stride=stride, padding=padding)
else:
raise NotImplementedError('Unimplemented pooling type: ', pooling_type)
return net
class NasNetABaseCell(object):
"""NASNet Cell class that is used as a 'layer' in image architectures.
Args:
num_conv_filters: The number of filters for each convolution operation.
operations: List of operations that are performed in the NASNet Cell in
order.
used_hiddenstates: Binary array that signals if the hiddenstate was used
within the cell. This is used to determine what outputs of the cell
should be concatenated together.
    hiddenstate_indices: Determines what hiddenstates should be combined
      together with the specified operations to create the NASNet cell.
    drop_path_keep_prob: Keep probability used by drop_path regularization.
    total_num_cells: Total number of cells in the network; used to scale the
      per-layer drop_path keep probability.
    total_training_steps: Total number of training steps; used to anneal the
      drop_path keep probability over the course of training.
    use_bounded_activation: Whether or not to use bounded activations. Bounded
      activations better lend themselves to quantized inference.
  """
def __init__(self, num_conv_filters, operations, used_hiddenstates,
hiddenstate_indices, drop_path_keep_prob, total_num_cells,
total_training_steps, use_bounded_activation=False):
self._num_conv_filters = num_conv_filters
self._operations = operations
self._used_hiddenstates = used_hiddenstates
self._hiddenstate_indices = hiddenstate_indices
self._drop_path_keep_prob = drop_path_keep_prob
self._total_num_cells = total_num_cells
self._total_training_steps = total_training_steps
self._use_bounded_activation = use_bounded_activation
def _reduce_prev_layer(self, prev_layer, curr_layer):
"""Matches dimension of prev_layer to the curr_layer."""
    # Set the prev layer to the current layer if it is None
if prev_layer is None:
return curr_layer
curr_num_filters = self._filter_size
prev_num_filters = get_channel_dim(prev_layer.shape)
curr_filter_shape = int(curr_layer.shape[2])
prev_filter_shape = int(prev_layer.shape[2])
activation_fn = tf.nn.relu6 if self._use_bounded_activation else tf.nn.relu
if curr_filter_shape != prev_filter_shape:
prev_layer = activation_fn(prev_layer)
prev_layer = factorized_reduction(
prev_layer, curr_num_filters, stride=2)
elif curr_num_filters != prev_num_filters:
prev_layer = activation_fn(prev_layer)
prev_layer = slim.conv2d(
prev_layer, curr_num_filters, 1, scope='prev_1x1')
prev_layer = slim.batch_norm(prev_layer, scope='prev_bn')
return prev_layer
def _cell_base(self, net, prev_layer):
"""Runs the beginning of the conv cell before the predicted ops are run."""
num_filters = self._filter_size
# Check to be sure prev layer stuff is setup correctly
prev_layer = self._reduce_prev_layer(prev_layer, net)
net = tf.nn.relu6(net) if self._use_bounded_activation else tf.nn.relu(net)
net = slim.conv2d(net, num_filters, 1, scope='1x1')
net = slim.batch_norm(net, scope='beginning_bn')
    # Wrap in a list so additional hidden states can be appended below.
net = [net]
net.append(prev_layer)
return net
def __call__(self, net, scope=None, filter_scaling=1, stride=1,
prev_layer=None, cell_num=-1, current_step=None):
"""Runs the conv cell."""
self._cell_num = cell_num
self._filter_scaling = filter_scaling
self._filter_size = int(self._num_conv_filters * filter_scaling)
i = 0
with tf.variable_scope(scope):
net = self._cell_base(net, prev_layer)
for iteration in range(5):
with tf.variable_scope('comb_iter_{}'.format(iteration)):
left_hiddenstate_idx, right_hiddenstate_idx = (
self._hiddenstate_indices[i],
self._hiddenstate_indices[i + 1])
original_input_left = left_hiddenstate_idx < 2
original_input_right = right_hiddenstate_idx < 2
h1 = net[left_hiddenstate_idx]
h2 = net[right_hiddenstate_idx]
operation_left = self._operations[i]
operation_right = self._operations[i+1]
i += 2
# Apply conv operations
with tf.variable_scope('left'):
h1 = self._apply_conv_operation(h1, operation_left,
stride, original_input_left,
current_step)
with tf.variable_scope('right'):
h2 = self._apply_conv_operation(h2, operation_right,
stride, original_input_right,
current_step)
# Combine hidden states using 'add'.
with tf.variable_scope('combine'):
h = h1 + h2
if self._use_bounded_activation:
h = tf.nn.relu6(h)
# Add hiddenstate to the list of hiddenstates we can choose from
net.append(h)
with tf.variable_scope('cell_output'):
net = self._combine_unused_states(net)
return net
def _apply_conv_operation(self, net, operation,
stride, is_from_original_input, current_step):
"""Applies the predicted conv operation to net."""
    # Don't stride if this is not one of the original hiddenstates
if stride > 1 and not is_from_original_input:
stride = 1
input_filters = get_channel_dim(net.shape)
filter_size = self._filter_size
if 'separable' in operation:
net = _stacked_separable_conv(net, stride, operation, filter_size,
self._use_bounded_activation)
if self._use_bounded_activation:
net = tf.clip_by_value(net, -CLIP_BY_VALUE_CAP, CLIP_BY_VALUE_CAP)
elif operation in ['none']:
if self._use_bounded_activation:
net = tf.nn.relu6(net)
# Check if a stride is needed, then use a strided 1x1 here
if stride > 1 or (input_filters != filter_size):
if not self._use_bounded_activation:
net = tf.nn.relu(net)
net = slim.conv2d(net, filter_size, 1, stride=stride, scope='1x1')
net = slim.batch_norm(net, scope='bn_1')
if self._use_bounded_activation:
net = tf.clip_by_value(net, -CLIP_BY_VALUE_CAP, CLIP_BY_VALUE_CAP)
elif 'pool' in operation:
net = _pooling(net, stride, operation, self._use_bounded_activation)
if input_filters != filter_size:
net = slim.conv2d(net, filter_size, 1, stride=1, scope='1x1')
net = slim.batch_norm(net, scope='bn_1')
if self._use_bounded_activation:
net = tf.clip_by_value(net, -CLIP_BY_VALUE_CAP, CLIP_BY_VALUE_CAP)
else:
raise ValueError('Unimplemented operation', operation)
if operation != 'none':
net = self._apply_drop_path(net, current_step=current_step)
return net
def _combine_unused_states(self, net):
"""Concatenate the unused hidden states of the cell."""
used_hiddenstates = self._used_hiddenstates
final_height = int(net[-1].shape[2])
final_num_filters = get_channel_dim(net[-1].shape)
assert len(used_hiddenstates) == len(net)
for idx, used_h in enumerate(used_hiddenstates):
curr_height = int(net[idx].shape[2])
curr_num_filters = get_channel_dim(net[idx].shape)
# Determine if a reduction should be applied to make the number of
# filters match.
should_reduce = final_num_filters != curr_num_filters
should_reduce = (final_height != curr_height) or should_reduce
should_reduce = should_reduce and not used_h
if should_reduce:
stride = 2 if final_height != curr_height else 1
with tf.variable_scope('reduction_{}'.format(idx)):
net[idx] = factorized_reduction(
net[idx], final_num_filters, stride)
states_to_combine = (
[h for h, is_used in zip(net, used_hiddenstates) if not is_used])
# Return the concat of all the states
concat_axis = get_channel_index()
net = tf.concat(values=states_to_combine, axis=concat_axis)
return net
@slim.add_arg_scope # No public API. For internal use only.
def _apply_drop_path(self, net, current_step=None,
use_summaries=False, drop_connect_version='v3'):
"""Apply drop_path regularization.
Args:
net: the Tensor that gets drop_path regularization applied.
current_step: a float32 Tensor with the current global_step value,
to be divided by hparams.total_training_steps. Usually None, which
        defaults to tf.train.get_or_create_global_step(), properly cast.
use_summaries: a Python boolean. If set to False, no summaries are output.
drop_connect_version: one of 'v1', 'v2', 'v3', controlling whether
the dropout rate is scaled by current_step (v1), layer (v2), or
both (v3, the default).
Returns:
The dropped-out value of `net`.
"""
drop_path_keep_prob = self._drop_path_keep_prob
if drop_path_keep_prob < 1.0:
assert drop_connect_version in ['v1', 'v2', 'v3']
if drop_connect_version in ['v2', 'v3']:
# Scale keep prob by layer number
assert self._cell_num != -1
# The added 2 is for the reduction cells
num_cells = self._total_num_cells
layer_ratio = (self._cell_num + 1)/float(num_cells)
if use_summaries:
with tf.device('/cpu:0'):
tf.summary.scalar('layer_ratio', layer_ratio)
drop_path_keep_prob = 1 - layer_ratio * (1 - drop_path_keep_prob)
if drop_connect_version in ['v1', 'v3']:
# Decrease the keep probability over time
if current_step is None:
current_step = tf.train.get_or_create_global_step()
current_step = tf.cast(current_step, tf.float32)
drop_path_burn_in_steps = self._total_training_steps
current_ratio = current_step / drop_path_burn_in_steps
current_ratio = tf.minimum(1.0, current_ratio)
if use_summaries:
with tf.device('/cpu:0'):
tf.summary.scalar('current_ratio', current_ratio)
drop_path_keep_prob = (1 - current_ratio * (1 - drop_path_keep_prob))
if use_summaries:
with tf.device('/cpu:0'):
tf.summary.scalar('drop_path_keep_prob', drop_path_keep_prob)
net = drop_path(net, drop_path_keep_prob)
return net
class NasNetANormalCell(NasNetABaseCell):
"""NASNetA Normal Cell."""
def __init__(self, num_conv_filters, drop_path_keep_prob, total_num_cells,
total_training_steps, use_bounded_activation=False):
operations = ['separable_5x5_2',
'separable_3x3_2',
'separable_5x5_2',
'separable_3x3_2',
'avg_pool_3x3',
'none',
'avg_pool_3x3',
'avg_pool_3x3',
'separable_3x3_2',
'none']
used_hiddenstates = [1, 0, 0, 0, 0, 0, 0]
hiddenstate_indices = [0, 1, 1, 1, 0, 1, 1, 1, 0, 0]
super(NasNetANormalCell, self).__init__(num_conv_filters, operations,
used_hiddenstates,
hiddenstate_indices,
drop_path_keep_prob,
total_num_cells,
total_training_steps,
use_bounded_activation)
class NasNetAReductionCell(NasNetABaseCell):
"""NASNetA Reduction Cell."""
def __init__(self, num_conv_filters, drop_path_keep_prob, total_num_cells,
total_training_steps, use_bounded_activation=False):
operations = ['separable_5x5_2',
'separable_7x7_2',
'max_pool_3x3',
'separable_7x7_2',
'avg_pool_3x3',
'separable_5x5_2',
'none',
'avg_pool_3x3',
'separable_3x3_2',
'max_pool_3x3']
used_hiddenstates = [1, 1, 1, 0, 0, 0, 0]
hiddenstate_indices = [0, 1, 0, 1, 0, 1, 3, 2, 2, 0]
super(NasNetAReductionCell, self).__init__(num_conv_filters, operations,
used_hiddenstates,
hiddenstate_indices,
drop_path_keep_prob,
total_num_cells,
total_training_steps,
                                               use_bounded_activation)


# ==== slim/nets/nasnet/pnasnet.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.contrib import training as contrib_training
from nets.nasnet import nasnet
from nets.nasnet import nasnet_utils
arg_scope = slim.arg_scope
def large_imagenet_config():
"""Large ImageNet configuration based on PNASNet-5."""
return contrib_training.HParams(
stem_multiplier=3.0,
dense_dropout_keep_prob=0.5,
num_cells=12,
filter_scaling_rate=2.0,
num_conv_filters=216,
drop_path_keep_prob=0.6,
use_aux_head=1,
num_reduction_layers=2,
data_format='NHWC',
skip_reduction_layer_input=1,
total_training_steps=250000,
use_bounded_activation=False,
)
def mobile_imagenet_config():
"""Mobile ImageNet configuration based on PNASNet-5."""
return contrib_training.HParams(
stem_multiplier=1.0,
dense_dropout_keep_prob=0.5,
num_cells=9,
filter_scaling_rate=2.0,
num_conv_filters=54,
drop_path_keep_prob=1.0,
use_aux_head=1,
num_reduction_layers=2,
data_format='NHWC',
skip_reduction_layer_input=1,
total_training_steps=250000,
use_bounded_activation=False,
)
def pnasnet_large_arg_scope(weight_decay=4e-5, batch_norm_decay=0.9997,
batch_norm_epsilon=0.001):
"""Default arg scope for the PNASNet Large ImageNet model."""
return nasnet.nasnet_large_arg_scope(
weight_decay, batch_norm_decay, batch_norm_epsilon)
def pnasnet_mobile_arg_scope(weight_decay=4e-5,
batch_norm_decay=0.9997,
batch_norm_epsilon=0.001):
"""Default arg scope for the PNASNet Mobile ImageNet model."""
return nasnet.nasnet_mobile_arg_scope(weight_decay, batch_norm_decay,
batch_norm_epsilon)
def _build_pnasnet_base(images,
normal_cell,
num_classes,
hparams,
is_training,
final_endpoint=None):
"""Constructs a PNASNet image model."""
end_points = {}
def add_and_check_endpoint(endpoint_name, net):
end_points[endpoint_name] = net
return final_endpoint and (endpoint_name == final_endpoint)
# Find where to place the reduction cells or stride normal cells
reduction_indices = nasnet_utils.calc_reduction_layers(
hparams.num_cells, hparams.num_reduction_layers)
# pylint: disable=protected-access
stem = lambda: nasnet._imagenet_stem(images, hparams, normal_cell)
# pylint: enable=protected-access
net, cell_outputs = stem()
if add_and_check_endpoint('Stem', net):
return net, end_points
# Setup for building in the auxiliary head.
aux_head_cell_idxes = []
if len(reduction_indices) >= 2:
aux_head_cell_idxes.append(reduction_indices[1] - 1)
# Run the cells
filter_scaling = 1.0
# true_cell_num accounts for the stem cells
true_cell_num = 2
activation_fn = tf.nn.relu6 if hparams.use_bounded_activation else tf.nn.relu
for cell_num in range(hparams.num_cells):
is_reduction = cell_num in reduction_indices
stride = 2 if is_reduction else 1
if is_reduction: filter_scaling *= hparams.filter_scaling_rate
if hparams.skip_reduction_layer_input or not is_reduction:
prev_layer = cell_outputs[-2]
net = normal_cell(
net,
scope='cell_{}'.format(cell_num),
filter_scaling=filter_scaling,
stride=stride,
prev_layer=prev_layer,
cell_num=true_cell_num)
if add_and_check_endpoint('Cell_{}'.format(cell_num), net):
return net, end_points
true_cell_num += 1
cell_outputs.append(net)
if (hparams.use_aux_head and cell_num in aux_head_cell_idxes and
num_classes and is_training):
aux_net = activation_fn(net)
# pylint: disable=protected-access
nasnet._build_aux_head(aux_net, end_points, num_classes, hparams,
scope='aux_{}'.format(cell_num))
# pylint: enable=protected-access
# Final softmax layer
with tf.variable_scope('final_layer'):
net = activation_fn(net)
net = nasnet_utils.global_avg_pool(net)
if add_and_check_endpoint('global_pool', net) or not num_classes:
return net, end_points
net = slim.dropout(net, hparams.dense_dropout_keep_prob, scope='dropout')
logits = slim.fully_connected(net, num_classes)
if add_and_check_endpoint('Logits', logits):
return net, end_points
predictions = tf.nn.softmax(logits, name='predictions')
if add_and_check_endpoint('Predictions', predictions):
return net, end_points
return logits, end_points
def build_pnasnet_large(images,
num_classes,
is_training=True,
final_endpoint=None,
config=None):
"""Build PNASNet Large model for the ImageNet Dataset."""
hparams = copy.deepcopy(config) if config else large_imagenet_config()
# pylint: disable=protected-access
nasnet._update_hparams(hparams, is_training)
# pylint: enable=protected-access
if tf.test.is_gpu_available() and hparams.data_format == 'NHWC':
tf.logging.info(
'A GPU is available on the machine, consider using NCHW '
'data format for increased speed on GPU.')
if hparams.data_format == 'NCHW':
images = tf.transpose(a=images, perm=[0, 3, 1, 2])
# Calculate the total number of cells in the network.
# There is no distinction between reduction and normal cells in PNAS so the
# total number of cells is equal to the number normal cells plus the number
# of stem cells (two by default).
total_num_cells = hparams.num_cells + 2
normal_cell = PNasNetNormalCell(hparams.num_conv_filters,
hparams.drop_path_keep_prob, total_num_cells,
hparams.total_training_steps,
hparams.use_bounded_activation)
with arg_scope(
[slim.dropout, nasnet_utils.drop_path, slim.batch_norm],
is_training=is_training):
with arg_scope([slim.avg_pool2d, slim.max_pool2d, slim.conv2d,
slim.batch_norm, slim.separable_conv2d,
nasnet_utils.factorized_reduction,
nasnet_utils.global_avg_pool,
nasnet_utils.get_channel_index,
nasnet_utils.get_channel_dim],
data_format=hparams.data_format):
return _build_pnasnet_base(
images,
normal_cell=normal_cell,
num_classes=num_classes,
hparams=hparams,
is_training=is_training,
final_endpoint=final_endpoint)
build_pnasnet_large.default_image_size = 331
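# Illustrative usage sketch (a minimal TF-1 inference setup; the placeholder
# shape follows default_image_size above, and 1001 classes assumes the
# ImageNet labels plus background):
#
#   images = tf.placeholder(tf.float32, [None, 331, 331, 3])
#   with arg_scope(pnasnet_large_arg_scope()):
#     logits, end_points = build_pnasnet_large(
#         images, num_classes=1001, is_training=False)
#   probabilities = end_points['Predictions']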
def build_pnasnet_mobile(images,
num_classes,
is_training=True,
final_endpoint=None,
config=None):
"""Build PNASNet Mobile model for the ImageNet Dataset."""
hparams = copy.deepcopy(config) if config else mobile_imagenet_config()
# pylint: disable=protected-access
nasnet._update_hparams(hparams, is_training)
# pylint: enable=protected-access
if tf.test.is_gpu_available() and hparams.data_format == 'NHWC':
tf.logging.info(
'A GPU is available on the machine, consider using NCHW '
'data format for increased speed on GPU.')
if hparams.data_format == 'NCHW':
images = tf.transpose(a=images, perm=[0, 3, 1, 2])
# Calculate the total number of cells in the network.
# There is no distinction between reduction and normal cells in PNAS so the
# total number of cells is equal to the number normal cells plus the number
# of stem cells (two by default).
total_num_cells = hparams.num_cells + 2
normal_cell = PNasNetNormalCell(hparams.num_conv_filters,
hparams.drop_path_keep_prob, total_num_cells,
hparams.total_training_steps,
hparams.use_bounded_activation)
with arg_scope(
[slim.dropout, nasnet_utils.drop_path, slim.batch_norm],
is_training=is_training):
with arg_scope(
[
slim.avg_pool2d, slim.max_pool2d, slim.conv2d, slim.batch_norm,
slim.separable_conv2d, nasnet_utils.factorized_reduction,
nasnet_utils.global_avg_pool, nasnet_utils.get_channel_index,
nasnet_utils.get_channel_dim
],
data_format=hparams.data_format):
return _build_pnasnet_base(
images,
normal_cell=normal_cell,
num_classes=num_classes,
hparams=hparams,
is_training=is_training,
final_endpoint=final_endpoint)
build_pnasnet_mobile.default_image_size = 224
class PNasNetNormalCell(nasnet_utils.NasNetABaseCell):
"""PNASNet Normal Cell."""
def __init__(self, num_conv_filters, drop_path_keep_prob, total_num_cells,
total_training_steps, use_bounded_activation=False):
# Configuration for the PNASNet-5 model.
operations = [
'separable_5x5_2', 'max_pool_3x3', 'separable_7x7_2', 'max_pool_3x3',
'separable_5x5_2', 'separable_3x3_2', 'separable_3x3_2', 'max_pool_3x3',
'separable_3x3_2', 'none'
]
used_hiddenstates = [1, 1, 0, 0, 0, 0, 0]
hiddenstate_indices = [1, 1, 0, 0, 0, 0, 4, 0, 1, 0]
super(PNasNetNormalCell, self).__init__(
num_conv_filters, operations, used_hiddenstates, hiddenstate_indices,
drop_path_keep_prob, total_num_cells, total_training_steps,
use_bounded_activation) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/nets/nasnet/pnasnet.py | pnasnet.py |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
_PADDING = 4
def preprocess_for_train(image,
output_height,
output_width,
padding=_PADDING,
add_image_summaries=True,
use_grayscale=False):
"""Preprocesses the given image for training.
  The image is padded, randomly cropped to the output size, randomly flipped
  left/right, color-distorted, and finally per-image standardized.
Args:
image: A `Tensor` representing an image of arbitrary size.
output_height: The height of the image after preprocessing.
output_width: The width of the image after preprocessing.
    padding: The amount of padding before and after each dimension of the image.
add_image_summaries: Enable image summaries.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
A preprocessed image.
"""
if add_image_summaries:
tf.summary.image('image', tf.expand_dims(image, 0))
# Transform the image to floats.
image = tf.to_float(image)
if use_grayscale:
image = tf.image.rgb_to_grayscale(image)
if padding > 0:
image = tf.pad(image, [[padding, padding], [padding, padding], [0, 0]])
  # Randomly crop a [height, width] section of the image. The number of
  # channels depends on whether the image was converted to grayscale above.
  num_channels = 1 if use_grayscale else 3
  distorted_image = tf.random_crop(
      image, [output_height, output_width, num_channels])
# Randomly flip the image horizontally.
distorted_image = tf.image.random_flip_left_right(distorted_image)
if add_image_summaries:
tf.summary.image('distorted_image', tf.expand_dims(distorted_image, 0))
  # Because these operations are not commutative, consider randomizing
  # the order of their application.
distorted_image = tf.image.random_brightness(distorted_image,
max_delta=63)
distorted_image = tf.image.random_contrast(distorted_image,
lower=0.2, upper=1.8)
# Subtract off the mean and divide by the variance of the pixels.
return tf.image.per_image_standardization(distorted_image)
def preprocess_for_eval(image,
output_height,
output_width,
add_image_summaries=True,
use_grayscale=False):
"""Preprocesses the given image for evaluation.
Args:
image: A `Tensor` representing an image of arbitrary size.
output_height: The height of the image after preprocessing.
output_width: The width of the image after preprocessing.
add_image_summaries: Enable image summaries.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
A preprocessed image.
"""
if add_image_summaries:
tf.summary.image('image', tf.expand_dims(image, 0))
# Transform the image to floats.
image = tf.to_float(image)
if use_grayscale:
image = tf.image.rgb_to_grayscale(image)
# Resize and crop if needed.
  resized_image = tf.image.resize_image_with_crop_or_pad(image,
                                                         output_height,
                                                         output_width)
if add_image_summaries:
tf.summary.image('resized_image', tf.expand_dims(resized_image, 0))
# Subtract off the mean and divide by the variance of the pixels.
return tf.image.per_image_standardization(resized_image)
def preprocess_image(image,
output_height,
output_width,
is_training=False,
add_image_summaries=True,
use_grayscale=False):
"""Preprocesses the given image.
Args:
image: A `Tensor` representing an image of arbitrary size.
output_height: The height of the image after preprocessing.
output_width: The width of the image after preprocessing.
is_training: `True` if we're preprocessing the image for training and
`False` otherwise.
add_image_summaries: Enable image summaries.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
A preprocessed image.
"""
if is_training:
return preprocess_for_train(
image,
output_height,
output_width,
add_image_summaries=add_image_summaries,
use_grayscale=use_grayscale)
else:
return preprocess_for_eval(
image,
output_height,
output_width,
add_image_summaries=add_image_summaries,
use_grayscale=use_grayscale) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/preprocessing/cifarnet_preprocessing.py | cifarnet_preprocessing.py |
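# Illustrative usage sketch, assuming `image` is a decoded uint8 CIFAR-10
# tensor of shape [32, 32, 3]. The output is per-image standardized, i.e.
# roughly zero-mean and unit-variance:
#
#   train_image = preprocess_image(image, 32, 32, is_training=True)
#   eval_image = preprocess_image(image, 32, 32, is_training=False)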
"""Contains a factory for building various models."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from preprocessing import cifarnet_preprocessing
from preprocessing import inception_preprocessing
from preprocessing import lenet_preprocessing
from preprocessing import vgg_preprocessing
def get_preprocessing(name, is_training=False, use_grayscale=False):
"""Returns preprocessing_fn(image, height, width, **kwargs).
Args:
name: The name of the preprocessing function.
is_training: `True` if the model is being used for training and `False`
otherwise.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
    preprocessing_fn: A function that preprocesses a single image (pre-batch).
It has the following signature:
image = preprocessing_fn(image, output_height, output_width, ...).
Raises:
    ValueError: if preprocessing `name` is not recognized.
"""
preprocessing_fn_map = {
'cifarnet': cifarnet_preprocessing,
'inception': inception_preprocessing,
'inception_v1': inception_preprocessing,
'inception_v2': inception_preprocessing,
'inception_v3': inception_preprocessing,
'inception_v4': inception_preprocessing,
'inception_resnet_v2': inception_preprocessing,
'lenet': lenet_preprocessing,
'mobilenet_v1': inception_preprocessing,
'mobilenet_v2': inception_preprocessing,
'mobilenet_v2_035': inception_preprocessing,
'mobilenet_v3_small': inception_preprocessing,
'mobilenet_v3_large': inception_preprocessing,
'mobilenet_v3_small_minimalistic': inception_preprocessing,
'mobilenet_v3_large_minimalistic': inception_preprocessing,
'mobilenet_edgetpu': inception_preprocessing,
'mobilenet_edgetpu_075': inception_preprocessing,
'mobilenet_v2_140': inception_preprocessing,
'nasnet_mobile': inception_preprocessing,
'nasnet_large': inception_preprocessing,
'pnasnet_mobile': inception_preprocessing,
'pnasnet_large': inception_preprocessing,
'resnet_v1_50': vgg_preprocessing,
'resnet_v1_101': vgg_preprocessing,
'resnet_v1_152': vgg_preprocessing,
'resnet_v1_200': vgg_preprocessing,
'resnet_v2_50': vgg_preprocessing,
'resnet_v2_101': vgg_preprocessing,
'resnet_v2_152': vgg_preprocessing,
'resnet_v2_200': vgg_preprocessing,
'vgg': vgg_preprocessing,
'vgg_a': vgg_preprocessing,
'vgg_16': vgg_preprocessing,
'vgg_19': vgg_preprocessing,
}
if name not in preprocessing_fn_map:
raise ValueError('Preprocessing name [%s] was not recognized' % name)
def preprocessing_fn(image, output_height, output_width, **kwargs):
return preprocessing_fn_map[name].preprocess_image(
image,
output_height,
output_width,
is_training=is_training,
use_grayscale=use_grayscale,
**kwargs)
return preprocessing_fn | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/preprocessing/preprocessing_factory.py | preprocessing_factory.py |
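# Illustrative usage sketch, assuming `raw_image` is a decoded image tensor;
# 299 matches the Inception-v3 default input size:
#
#   image_preprocessing_fn = get_preprocessing('inception_v3',
#                                              is_training=True)
#   image = image_preprocessing_fn(raw_image, 299, 299)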
"""Provides utilities to preprocess images for the Inception networks."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
from tensorflow.python.ops import control_flow_ops
def apply_with_random_selector(x, func, num_cases):
"""Computes func(x, sel), with sel sampled from [0...num_cases-1].
Args:
x: input Tensor.
func: Python function to apply.
num_cases: Python int32, number of cases to sample sel from.
Returns:
The result of func(x, sel), where func receives the value of the
selector as a python integer, but sel is sampled dynamically.
"""
sel = tf.random_uniform([], maxval=num_cases, dtype=tf.int32)
# Pass the real x only to one of the func calls.
return control_flow_ops.merge([
func(control_flow_ops.switch(x, tf.equal(sel, case))[1], case)
for case in range(num_cases)])[0]
def distort_color(image, color_ordering=0, fast_mode=True, scope=None):
"""Distort the color of a Tensor image.
Each color distortion is non-commutative and thus ordering of the color ops
matters. Ideally we would randomly permute the ordering of the color ops.
  Rather than adding that level of complication, we select a distinct ordering
of color ops for each preprocessing thread.
Args:
image: 3-D Tensor containing single image in [0, 1].
color_ordering: Python int, a type of distortion (valid values: 0-3).
fast_mode: Avoids slower ops (random_hue and random_contrast)
scope: Optional scope for name_scope.
Returns:
3-D Tensor color-distorted image on range [0, 1]
Raises:
ValueError: if color_ordering not in [0, 3]
"""
with tf.name_scope(scope, 'distort_color', [image]):
if fast_mode:
if color_ordering == 0:
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
else:
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
else:
if color_ordering == 0:
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
elif color_ordering == 1:
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
elif color_ordering == 2:
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
elif color_ordering == 3:
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
else:
raise ValueError('color_ordering must be in [0, 3]')
# The random_* ops do not necessarily clamp.
return tf.clip_by_value(image, 0.0, 1.0)
def distorted_bounding_box_crop(image,
bbox,
min_object_covered=0.1,
aspect_ratio_range=(0.75, 1.33),
area_range=(0.05, 1.0),
max_attempts=100,
scope=None):
"""Generates cropped_image using a one of the bboxes randomly distorted.
See `tf.image.sample_distorted_bounding_box` for more documentation.
Args:
image: 3-D Tensor of image (it will be converted to floats in [0, 1]).
bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
where each coordinate is [0, 1) and the coordinates are arranged
as [ymin, xmin, ymax, xmax]. If num_boxes is 0 then it would use the whole
image.
min_object_covered: An optional `float`. Defaults to `0.1`. The cropped
area of the image must contain at least this fraction of any bounding box
supplied.
aspect_ratio_range: An optional list of `floats`. The cropped area of the
image must have an aspect ratio = width / height within this range.
area_range: An optional list of `floats`. The cropped area of the image
      must contain a fraction of the supplied image within this range.
max_attempts: An optional `int`. Number of attempts at generating a cropped
region of the image of the specified constraints. After `max_attempts`
failures, return the entire image.
scope: Optional scope for name_scope.
Returns:
A tuple, a 3-D Tensor cropped_image and the distorted bbox
"""
with tf.name_scope(scope, 'distorted_bounding_box_crop', [image, bbox]):
# Each bounding box has shape [1, num_boxes, box coords] and
# the coordinates are ordered [ymin, xmin, ymax, xmax].
# A large fraction of image datasets contain a human-annotated bounding
# box delineating the region of the image containing the object of interest.
# We choose to create a new bounding box for the object which is a randomly
# distorted version of the human-annotated bounding box that obeys an
# allowed range of aspect ratios, sizes and overlap with the human-annotated
# bounding box. If no box is supplied, then we assume the bounding box is
# the entire image.
sample_distorted_bounding_box = tf.image.sample_distorted_bounding_box(
tf.shape(image),
bounding_boxes=bbox,
min_object_covered=min_object_covered,
aspect_ratio_range=aspect_ratio_range,
area_range=area_range,
max_attempts=max_attempts,
use_image_if_no_bounding_boxes=True)
bbox_begin, bbox_size, distort_bbox = sample_distorted_bounding_box
# Crop the image to the specified bounding box.
cropped_image = tf.slice(image, bbox_begin, bbox_size)
return cropped_image, distort_bbox
def preprocess_for_train(image,
height,
width,
bbox,
fast_mode=True,
scope=None,
add_image_summaries=True,
random_crop=True,
use_grayscale=False):
"""Distort one image for training a network.
Distorting images provides a useful technique for augmenting the data
set during training in order to make the network invariant to aspects
  of the image that do not affect the label.
  Additionally, it creates image summaries to display the different
  transformations applied to the image.
Args:
image: 3-D Tensor of image. If dtype is tf.float32 then the range should be
      [0, 1], otherwise it is converted to tf.float32 assuming that the range
      is [0, MAX], where MAX is the largest positive representable number for
int(8/16/32) data type (see `tf.image.convert_image_dtype` for details).
height: integer
width: integer
bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
where each coordinate is [0, 1) and the coordinates are arranged
as [ymin, xmin, ymax, xmax].
    fast_mode: Optional boolean, if True avoids slower transformations (e.g.
      bi-cubic resizing, random_hue or random_contrast).
scope: Optional scope for name_scope.
add_image_summaries: Enable image summaries.
random_crop: Enable random cropping of images during preprocessing for
training.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
3-D float Tensor of distorted image used for training with range [-1, 1].
"""
with tf.name_scope(scope, 'distort_image', [image, height, width, bbox]):
if bbox is None:
bbox = tf.constant([0.0, 0.0, 1.0, 1.0],
dtype=tf.float32,
shape=[1, 1, 4])
if image.dtype != tf.float32:
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
# Each bounding box has shape [1, num_boxes, box coords] and
# the coordinates are ordered [ymin, xmin, ymax, xmax].
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
bbox)
if add_image_summaries:
tf.summary.image('image_with_bounding_boxes', image_with_box)
if not random_crop:
distorted_image = image
else:
distorted_image, distorted_bbox = distorted_bounding_box_crop(image, bbox)
# Restore the shape since the dynamic slice based upon the bbox_size loses
# the third dimension.
distorted_image.set_shape([None, None, 3])
image_with_distorted_box = tf.image.draw_bounding_boxes(
tf.expand_dims(image, 0), distorted_bbox)
if add_image_summaries:
tf.summary.image('images_with_distorted_bounding_box',
image_with_distorted_box)
# This resizing operation may distort the images because the aspect
    # ratio is not respected. A resize method is selected at random for each
    # image rather than in a fixed order.
# Note that ResizeMethod contains 4 enumerated resizing methods.
# We select only 1 case for fast_mode bilinear.
num_resize_cases = 1 if fast_mode else 4
distorted_image = apply_with_random_selector(
distorted_image,
lambda x, method: tf.image.resize_images(x, [height, width], method),
num_cases=num_resize_cases)
if add_image_summaries:
tf.summary.image(('cropped_' if random_crop else '') + 'resized_image',
tf.expand_dims(distorted_image, 0))
# Randomly flip the image horizontally.
distorted_image = tf.image.random_flip_left_right(distorted_image)
# Randomly distort the colors. There are 1 or 4 ways to do it.
num_distort_cases = 1 if fast_mode else 4
distorted_image = apply_with_random_selector(
distorted_image,
lambda x, ordering: distort_color(x, ordering, fast_mode),
num_cases=num_distort_cases)
if use_grayscale:
distorted_image = tf.image.rgb_to_grayscale(distorted_image)
if add_image_summaries:
tf.summary.image('final_distorted_image',
tf.expand_dims(distorted_image, 0))
distorted_image = tf.subtract(distorted_image, 0.5)
distorted_image = tf.multiply(distorted_image, 2.0)
return distorted_image
def preprocess_for_eval(image,
height,
width,
central_fraction=0.875,
scope=None,
central_crop=True,
use_grayscale=False):
"""Prepare one image for evaluation.
  If height and width are specified, the image is resized to that size using
  bilinear interpolation.
If central_fraction is specified it would crop the central fraction of the
input image.
Args:
image: 3-D Tensor of image. If dtype is tf.float32 then the range should be
      [0, 1], otherwise it is converted to tf.float32 assuming that the range
      is [0, MAX], where MAX is the largest positive representable number for
int(8/16/32) data type (see `tf.image.convert_image_dtype` for details).
height: integer
width: integer
central_fraction: Optional Float, fraction of the image to crop.
scope: Optional scope for name_scope.
central_crop: Enable central cropping of images during preprocessing for
evaluation.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
3-D float Tensor of prepared image.
"""
with tf.name_scope(scope, 'eval_image', [image, height, width]):
if image.dtype != tf.float32:
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
if use_grayscale:
image = tf.image.rgb_to_grayscale(image)
# Crop the central region of the image with an area containing 87.5% of
# the original image.
if central_crop and central_fraction:
image = tf.image.central_crop(image, central_fraction=central_fraction)
if height and width:
# Resize the image to the specified height and width.
image = tf.expand_dims(image, 0)
image = tf.image.resize_bilinear(image, [height, width],
align_corners=False)
image = tf.squeeze(image, [0])
image = tf.subtract(image, 0.5)
image = tf.multiply(image, 2.0)
return image
def preprocess_image(image,
height,
width,
is_training=False,
bbox=None,
fast_mode=True,
add_image_summaries=True,
crop_image=True,
use_grayscale=False):
"""Pre-process one image for training or evaluation.
Args:
image: 3-D Tensor [height, width, channels] with the image. If dtype is
      tf.float32 then the range should be [0, 1], otherwise it is converted
      to tf.float32 assuming that the range is [0, MAX], where MAX is the
      largest positive representable number for int(8/16/32) data type (see
`tf.image.convert_image_dtype` for details).
height: integer, image expected height.
width: integer, image expected width.
    is_training: Boolean. If true, the image is transformed for training;
      otherwise it is transformed for evaluation.
bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
where each coordinate is [0, 1) and the coordinates are arranged as
[ymin, xmin, ymax, xmax].
fast_mode: Optional boolean, if True avoids slower transformations.
add_image_summaries: Enable image summaries.
crop_image: Whether to enable cropping of images during preprocessing for
both training and evaluation.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
3-D float Tensor containing an appropriately scaled image
Raises:
ValueError: if user does not provide bounding box
"""
if is_training:
return preprocess_for_train(
image,
height,
width,
bbox,
fast_mode,
add_image_summaries=add_image_summaries,
random_crop=crop_image,
use_grayscale=use_grayscale)
else:
return preprocess_for_eval(
image,
height,
width,
central_crop=crop_image,
use_grayscale=use_grayscale) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/preprocessing/inception_preprocessing.py | inception_preprocessing.py |
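# Illustrative usage sketch, assuming `image` is a decoded RGB tensor. Note
# the Inception convention: the returned values are scaled to [-1, 1]:
#
#   eval_image = preprocess_image(image, 299, 299, is_training=False)
#   train_image = preprocess_image(image, 299, 299, is_training=True,
#                                  fast_mode=False)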
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
_R_MEAN = 123.68
_G_MEAN = 116.78
_B_MEAN = 103.94
_RESIZE_SIDE_MIN = 256
_RESIZE_SIDE_MAX = 512
def _crop(image, offset_height, offset_width, crop_height, crop_width):
"""Crops the given image using the provided offsets and sizes.
Note that the method doesn't assume we know the input image size but it does
assume we know the input image rank.
Args:
image: an image of shape [height, width, channels].
offset_height: a scalar tensor indicating the height offset.
offset_width: a scalar tensor indicating the width offset.
crop_height: the height of the cropped image.
crop_width: the width of the cropped image.
Returns:
    the cropped image.
Raises:
InvalidArgumentError: if the rank is not 3 or if the image dimensions are
less than the crop size.
"""
original_shape = tf.shape(image)
rank_assertion = tf.Assert(
tf.equal(tf.rank(image), 3),
['Rank of image must be equal to 3.'])
with tf.control_dependencies([rank_assertion]):
cropped_shape = tf.stack([crop_height, crop_width, original_shape[2]])
size_assertion = tf.Assert(
tf.logical_and(
tf.greater_equal(original_shape[0], crop_height),
tf.greater_equal(original_shape[1], crop_width)),
['Crop size greater than the image size.'])
offsets = tf.to_int32(tf.stack([offset_height, offset_width, 0]))
  # Use tf.slice instead of crop_to_bounding_box as it accepts tensors to
# define the crop size.
with tf.control_dependencies([size_assertion]):
image = tf.slice(image, offsets, cropped_shape)
return tf.reshape(image, cropped_shape)
def _random_crop(image_list, crop_height, crop_width):
"""Crops the given list of images.
The function applies the same crop to each image in the list. This can be
effectively applied when there are multiple image inputs of the same
dimension such as:
image, depths, normals = _random_crop([image, depths, normals], 120, 150)
Args:
image_list: a list of image tensors of the same dimension but possibly
varying channel.
crop_height: the new height.
crop_width: the new width.
Returns:
the image_list with cropped images.
Raises:
ValueError: if there are multiple image inputs provided with different size
or the images are smaller than the crop dimensions.
"""
if not image_list:
raise ValueError('Empty image_list.')
# Compute the rank assertions.
rank_assertions = []
for i in range(len(image_list)):
image_rank = tf.rank(image_list[i])
rank_assert = tf.Assert(
tf.equal(image_rank, 3),
['Wrong rank for tensor %s [expected] [actual]',
image_list[i].name, 3, image_rank])
rank_assertions.append(rank_assert)
with tf.control_dependencies([rank_assertions[0]]):
image_shape = tf.shape(image_list[0])
image_height = image_shape[0]
image_width = image_shape[1]
crop_size_assert = tf.Assert(
tf.logical_and(
tf.greater_equal(image_height, crop_height),
tf.greater_equal(image_width, crop_width)),
['Crop size greater than the image size.'])
asserts = [rank_assertions[0], crop_size_assert]
for i in range(1, len(image_list)):
image = image_list[i]
asserts.append(rank_assertions[i])
with tf.control_dependencies([rank_assertions[i]]):
shape = tf.shape(image)
height = shape[0]
width = shape[1]
height_assert = tf.Assert(
tf.equal(height, image_height),
['Wrong height for tensor %s [expected][actual]',
image.name, height, image_height])
width_assert = tf.Assert(
tf.equal(width, image_width),
['Wrong width for tensor %s [expected][actual]',
image.name, width, image_width])
asserts.extend([height_assert, width_assert])
# Create a random bounding box.
#
# Use tf.random_uniform and not numpy.random.rand as doing the former would
# generate random numbers at graph eval time, unlike the latter which
# generates random numbers at graph definition time.
with tf.control_dependencies(asserts):
max_offset_height = tf.reshape(image_height - crop_height + 1, [])
with tf.control_dependencies(asserts):
max_offset_width = tf.reshape(image_width - crop_width + 1, [])
offset_height = tf.random_uniform(
[], maxval=max_offset_height, dtype=tf.int32)
offset_width = tf.random_uniform(
[], maxval=max_offset_width, dtype=tf.int32)
return [_crop(image, offset_height, offset_width,
crop_height, crop_width) for image in image_list]
def _central_crop(image_list, crop_height, crop_width):
"""Performs central crops of the given image list.
Args:
image_list: a list of image tensors of the same dimension but possibly
varying channel.
crop_height: the height of the image following the crop.
crop_width: the width of the image following the crop.
Returns:
the list of cropped images.
"""
outputs = []
for image in image_list:
image_height = tf.shape(image)[0]
image_width = tf.shape(image)[1]
    offset_height = (image_height - crop_height) // 2
    offset_width = (image_width - crop_width) // 2
outputs.append(_crop(image, offset_height, offset_width,
crop_height, crop_width))
return outputs
def _mean_image_subtraction(image, means):
"""Subtracts the given means from each image channel.
For example:
means = [123.68, 116.779, 103.939]
image = _mean_image_subtraction(image, means)
Note that the rank of `image` must be known.
Args:
image: a tensor of size [height, width, C].
means: a C-vector of values to subtract from each channel.
Returns:
the centered image.
Raises:
ValueError: If the rank of `image` is unknown, if `image` has a rank other
than three or if the number of channels in `image` doesn't match the
number of values in `means`.
"""
if image.get_shape().ndims != 3:
raise ValueError('Input must be of size [height, width, C>0]')
num_channels = image.get_shape().as_list()[-1]
if len(means) != num_channels:
raise ValueError('len(means) must match the number of channels')
channels = tf.split(axis=2, num_or_size_splits=num_channels, value=image)
for i in range(num_channels):
channels[i] -= means[i]
return tf.concat(axis=2, values=channels)
def _smallest_size_at_least(height, width, smallest_side):
"""Computes new shape with the smallest side equal to `smallest_side`.
Computes new shape with the smallest side equal to `smallest_side` while
preserving the original aspect ratio.
Args:
height: an int32 scalar tensor indicating the current height.
width: an int32 scalar tensor indicating the current width.
smallest_side: A python integer or scalar `Tensor` indicating the size of
the smallest side after resize.
Returns:
new_height: an int32 scalar tensor indicating the new height.
    new_width: an int32 scalar tensor indicating the new width.
"""
smallest_side = tf.convert_to_tensor(smallest_side, dtype=tf.int32)
height = tf.to_float(height)
width = tf.to_float(width)
smallest_side = tf.to_float(smallest_side)
scale = tf.cond(tf.greater(height, width),
lambda: smallest_side / width,
lambda: smallest_side / height)
new_height = tf.to_int32(tf.rint(height * scale))
new_width = tf.to_int32(tf.rint(width * scale))
return new_height, new_width
def _aspect_preserving_resize(image, smallest_side):
"""Resize images preserving the original aspect ratio.
Args:
image: A 3-D image `Tensor`.
smallest_side: A python integer or scalar `Tensor` indicating the size of
the smallest side after resize.
Returns:
resized_image: A 3-D tensor containing the resized image.
"""
smallest_side = tf.convert_to_tensor(smallest_side, dtype=tf.int32)
shape = tf.shape(image)
height = shape[0]
width = shape[1]
new_height, new_width = _smallest_size_at_least(height, width, smallest_side)
image = tf.expand_dims(image, 0)
resized_image = tf.image.resize_bilinear(image, [new_height, new_width],
align_corners=False)
resized_image = tf.squeeze(resized_image)
resized_image.set_shape([None, None, 3])
return resized_image
def preprocess_for_train(image,
output_height,
output_width,
resize_side_min=_RESIZE_SIDE_MIN,
resize_side_max=_RESIZE_SIDE_MAX,
use_grayscale=False):
"""Preprocesses the given image for training.
Note that the actual resizing scale is sampled from
  [`resize_side_min`, `resize_side_max`].
Args:
image: A `Tensor` representing an image of arbitrary size.
output_height: The height of the image after preprocessing.
output_width: The width of the image after preprocessing.
resize_side_min: The lower bound for the smallest side of the image for
aspect-preserving resizing.
resize_side_max: The upper bound for the smallest side of the image for
aspect-preserving resizing.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
A preprocessed image.
"""
resize_side = tf.random_uniform(
[], minval=resize_side_min, maxval=resize_side_max+1, dtype=tf.int32)
image = _aspect_preserving_resize(image, resize_side)
image = _random_crop([image], output_height, output_width)[0]
image.set_shape([output_height, output_width, 3])
image = tf.to_float(image)
if use_grayscale:
image = tf.image.rgb_to_grayscale(image)
image = tf.image.random_flip_left_right(image)
return _mean_image_subtraction(image, [_R_MEAN, _G_MEAN, _B_MEAN])
def preprocess_for_eval(image,
output_height,
output_width,
resize_side,
use_grayscale=False):
"""Preprocesses the given image for evaluation.
Args:
image: A `Tensor` representing an image of arbitrary size.
output_height: The height of the image after preprocessing.
output_width: The width of the image after preprocessing.
resize_side: The smallest side of the image for aspect-preserving resizing.
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
A preprocessed image.
"""
image = _aspect_preserving_resize(image, resize_side)
image = _central_crop([image], output_height, output_width)[0]
image.set_shape([output_height, output_width, 3])
image = tf.to_float(image)
if use_grayscale:
image = tf.image.rgb_to_grayscale(image)
return _mean_image_subtraction(image, [_R_MEAN, _G_MEAN, _B_MEAN])
def preprocess_image(image,
output_height,
output_width,
is_training=False,
resize_side_min=_RESIZE_SIDE_MIN,
resize_side_max=_RESIZE_SIDE_MAX,
use_grayscale=False):
"""Preprocesses the given image.
Args:
image: A `Tensor` representing an image of arbitrary size.
output_height: The height of the image after preprocessing.
output_width: The width of the image after preprocessing.
is_training: `True` if we're preprocessing the image for training and
`False` otherwise.
resize_side_min: The lower bound for the smallest side of the image for
aspect-preserving resizing. If `is_training` is `False`, then this value
is used for rescaling.
resize_side_max: The upper bound for the smallest side of the image for
aspect-preserving resizing. If `is_training` is `False`, this value is
ignored. Otherwise, the resize side is sampled from
      [resize_side_min, resize_side_max].
use_grayscale: Whether to convert the image from RGB to grayscale.
Returns:
A preprocessed image.
"""
if is_training:
return preprocess_for_train(image, output_height, output_width,
resize_side_min, resize_side_max,
use_grayscale)
else:
return preprocess_for_eval(image, output_height, output_width,
resize_side_min, use_grayscale) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/preprocessing/vgg_preprocessing.py | vgg_preprocessing.py |
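# Illustrative usage sketch, assuming `image` is a decoded RGB tensor. Unlike
# the Inception preprocessing, the output is mean-subtracted but not rescaled,
# so values remain on the 0-255 scale, centered per channel:
#
#   train_image = preprocess_image(image, 224, 224, is_training=True)
#   eval_image = preprocess_image(image, 224, 224, is_training=False)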
r"""Downloads and converts MNIST data to TFRecords of TF-Example protos.
This module downloads the MNIST data, uncompresses it, reads the files
that make up the MNIST data and creates two TFRecord datasets: one for train
and one for test. Each TFRecord dataset is comprised of a set of TF-Example
protocol buffers, each of which contain a single image and label.
The script should take about a minute to run.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gzip
import os
import sys
import numpy as np
from six.moves import urllib
import tensorflow.compat.v1 as tf
from datasets import dataset_utils
# The URLs where the MNIST data can be downloaded.
_DATA_URL = 'http://yann.lecun.com/exdb/mnist/'
_TRAIN_DATA_FILENAME = 'train-images-idx3-ubyte.gz'
_TRAIN_LABELS_FILENAME = 'train-labels-idx1-ubyte.gz'
_TEST_DATA_FILENAME = 't10k-images-idx3-ubyte.gz'
_TEST_LABELS_FILENAME = 't10k-labels-idx1-ubyte.gz'
_IMAGE_SIZE = 28
_NUM_CHANNELS = 1
# The names of the classes.
_CLASS_NAMES = [
'zero',
'one',
'two',
'three',
'four',
'five',
    'six',
'seven',
'eight',
'nine',
]
def _extract_images(filename, num_images):
"""Extract the images into a numpy array.
Args:
filename: The path to an MNIST images file.
num_images: The number of images in the file.
Returns:
A numpy array of shape [number_of_images, height, width, channels].
"""
print('Extracting images from: ', filename)
with gzip.open(filename) as bytestream:
bytestream.read(16)
buf = bytestream.read(
_IMAGE_SIZE * _IMAGE_SIZE * num_images * _NUM_CHANNELS)
data = np.frombuffer(buf, dtype=np.uint8)
data = data.reshape(num_images, _IMAGE_SIZE, _IMAGE_SIZE, _NUM_CHANNELS)
return data
def _extract_labels(filename, num_labels):
"""Extract the labels into a vector of int64 label IDs.
Args:
filename: The path to an MNIST labels file.
num_labels: The number of labels in the file.
Returns:
A numpy array of shape [number_of_labels]
"""
print('Extracting labels from: ', filename)
with gzip.open(filename) as bytestream:
bytestream.read(8)
buf = bytestream.read(1 * num_labels)
labels = np.frombuffer(buf, dtype=np.uint8).astype(np.int64)
return labels
def _add_to_tfrecord(data_filename, labels_filename, num_images,
tfrecord_writer):
"""Loads data from the binary MNIST files and writes files to a TFRecord.
Args:
data_filename: The filename of the MNIST images.
labels_filename: The filename of the MNIST labels.
num_images: The number of images in the dataset.
tfrecord_writer: The TFRecord writer to use for writing.
"""
images = _extract_images(data_filename, num_images)
labels = _extract_labels(labels_filename, num_images)
shape = (_IMAGE_SIZE, _IMAGE_SIZE, _NUM_CHANNELS)
with tf.Graph().as_default():
image = tf.placeholder(dtype=tf.uint8, shape=shape)
encoded_png = tf.image.encode_png(image)
with tf.Session('') as sess:
for j in range(num_images):
sys.stdout.write('\r>> Converting image %d/%d' % (j + 1, num_images))
sys.stdout.flush()
png_string = sess.run(encoded_png, feed_dict={image: images[j]})
example = dataset_utils.image_to_tfexample(
png_string, 'png'.encode(), _IMAGE_SIZE, _IMAGE_SIZE, labels[j])
tfrecord_writer.write(example.SerializeToString())
def _get_output_filename(dataset_dir, split_name):
"""Creates the output filename.
Args:
dataset_dir: The directory where the temporary files are stored.
split_name: The name of the train/test split.
Returns:
An absolute file path.
"""
return '%s/mnist_%s.tfrecord' % (dataset_dir, split_name)
def _download_dataset(dataset_dir):
"""Downloads MNIST locally.
Args:
dataset_dir: The directory where the temporary files are stored.
"""
for filename in [_TRAIN_DATA_FILENAME,
_TRAIN_LABELS_FILENAME,
_TEST_DATA_FILENAME,
_TEST_LABELS_FILENAME]:
filepath = os.path.join(dataset_dir, filename)
if not os.path.exists(filepath):
print('Downloading file %s...' % filename)
def _progress(count, block_size, total_size):
sys.stdout.write('\r>> Downloading %.1f%%' % (
float(count * block_size) / float(total_size) * 100.0))
sys.stdout.flush()
filepath, _ = urllib.request.urlretrieve(_DATA_URL + filename,
filepath,
_progress)
print()
with tf.gfile.GFile(filepath) as f:
size = f.size()
print('Successfully downloaded', filename, size, 'bytes.')
def _clean_up_temporary_files(dataset_dir):
"""Removes temporary files used to create the dataset.
Args:
dataset_dir: The directory where the temporary files are stored.
"""
for filename in [_TRAIN_DATA_FILENAME,
_TRAIN_LABELS_FILENAME,
_TEST_DATA_FILENAME,
_TEST_LABELS_FILENAME]:
filepath = os.path.join(dataset_dir, filename)
tf.gfile.Remove(filepath)
def run(dataset_dir):
"""Runs the download and conversion operation.
Args:
dataset_dir: The dataset directory where the dataset is stored.
"""
if not tf.gfile.Exists(dataset_dir):
tf.gfile.MakeDirs(dataset_dir)
training_filename = _get_output_filename(dataset_dir, 'train')
testing_filename = _get_output_filename(dataset_dir, 'test')
if tf.gfile.Exists(training_filename) and tf.gfile.Exists(testing_filename):
print('Dataset files already exist. Exiting without re-creating them.')
return
_download_dataset(dataset_dir)
# First, process the training data:
with tf.python_io.TFRecordWriter(training_filename) as tfrecord_writer:
data_filename = os.path.join(dataset_dir, _TRAIN_DATA_FILENAME)
labels_filename = os.path.join(dataset_dir, _TRAIN_LABELS_FILENAME)
_add_to_tfrecord(data_filename, labels_filename, 60000, tfrecord_writer)
# Next, process the testing data:
with tf.python_io.TFRecordWriter(testing_filename) as tfrecord_writer:
data_filename = os.path.join(dataset_dir, _TEST_DATA_FILENAME)
labels_filename = os.path.join(dataset_dir, _TEST_LABELS_FILENAME)
_add_to_tfrecord(data_filename, labels_filename, 10000, tfrecord_writer)
# Finally, write the labels file:
labels_to_class_names = dict(zip(range(len(_CLASS_NAMES)), _CLASS_NAMES))
dataset_utils.write_label_file(labels_to_class_names, dataset_dir)
_clean_up_temporary_files(dataset_dir)
print('\nFinished converting the MNIST dataset!') | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/datasets/download_and_convert_mnist.py | download_and_convert_mnist.py |
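# Illustrative invocation sketch. This module is normally driven by the
# download_and_convert_data.py entry point; the output path below is
# hypothetical:
#
#   from datasets import download_and_convert_mnist
#   download_and_convert_mnist.run('/tmp/mnist')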
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow.compat.v1 as tf
import tf_slim as slim
from datasets import dataset_utils
_FILE_PATTERN = 'cifar10_%s.tfrecord'
SPLITS_TO_SIZES = {'train': 50000, 'test': 10000}
_NUM_CLASSES = 10
_ITEMS_TO_DESCRIPTIONS = {
'image': 'A [32 x 32 x 3] color image.',
'label': 'A single integer between 0 and 9',
}
def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
"""Gets a dataset tuple with instructions for reading cifar10.
Args:
split_name: A train/test split name.
dataset_dir: The base directory of the dataset sources.
file_pattern: The file pattern to use when matching the dataset sources.
It is assumed that the pattern contains a '%s' string so that the split
name can be inserted.
reader: The TensorFlow reader type.
Returns:
A `Dataset` namedtuple.
Raises:
ValueError: if `split_name` is not a valid train/test split.
"""
if split_name not in SPLITS_TO_SIZES:
raise ValueError('split name %s was not recognized.' % split_name)
if not file_pattern:
file_pattern = _FILE_PATTERN
file_pattern = os.path.join(dataset_dir, file_pattern % split_name)
# Allowing None in the signature so that dataset_factory can use the default.
if not reader:
reader = tf.TFRecordReader
keys_to_features = {
'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
'image/format': tf.FixedLenFeature((), tf.string, default_value='png'),
'image/class/label': tf.FixedLenFeature(
[], tf.int64, default_value=tf.zeros([], dtype=tf.int64)),
}
items_to_handlers = {
'image': slim.tfexample_decoder.Image(shape=[32, 32, 3]),
'label': slim.tfexample_decoder.Tensor('image/class/label'),
}
decoder = slim.tfexample_decoder.TFExampleDecoder(
keys_to_features, items_to_handlers)
labels_to_names = None
if dataset_utils.has_labels(dataset_dir):
labels_to_names = dataset_utils.read_label_file(dataset_dir)
return slim.dataset.Dataset(
data_sources=file_pattern,
reader=reader,
decoder=decoder,
num_samples=SPLITS_TO_SIZES[split_name],
items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
num_classes=_NUM_CLASSES,
labels_to_names=labels_to_names) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/datasets/cifar10.py | cifar10.py |
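# Illustrative usage sketch, assuming the TFRecords were already created by
# the CIFAR-10 conversion script; the directory below is hypothetical:
#
#   dataset = get_split('train', '/tmp/cifar10')
#   provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
#   image, label = provider.get(['image', 'label'])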
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import urllib
import tensorflow.compat.v1 as tf
import tf_slim as slim
from datasets import dataset_utils
# TODO(nsilberman): Add tfrecord file type once the script is updated.
_FILE_PATTERN = '%s-*'
_SPLITS_TO_SIZES = {
'train': 1281167,
'validation': 50000,
}
_ITEMS_TO_DESCRIPTIONS = {
'image': 'A color image of varying height and width.',
'label': 'The label id of the image, integer between 0 and 999',
'label_text': 'The text of the label.',
'object/bbox': 'A list of bounding boxes.',
'object/label': 'A list of labels, one per each object.',
}
_NUM_CLASSES = 1001
# If set to false, will not try to set label_to_names in dataset
# by reading them from labels.txt or github.
LOAD_READABLE_NAMES = True
def create_readable_names_for_imagenet_labels():
"""Create a dict mapping label id to human readable string.
Returns:
    labels_to_names: dictionary where keys are integers from 0 to 1000
and values are human-readable names.
We retrieve a synset file, which contains a list of valid synset labels used
  by the ILSVRC competition. There is one synset per line, e.g.
# n01440764
# n01443537
We also retrieve a synset_to_human_file, which contains a mapping from synsets
to human-readable names for every synset in Imagenet. These are stored in a
tsv format, as follows:
# n02119247 black fox
# n02119359 silver fox
We assign each synset (in alphabetical order) an integer, starting from 1
(since 0 is reserved for the background class).
Code is based on
https://github.com/tensorflow/models/blob/master/research/inception/inception/data/build_imagenet_data.py#L463
"""
# pylint: disable=g-line-too-long
base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/research/slim/datasets/'
synset_url = '{}/imagenet_lsvrc_2015_synsets.txt'.format(base_url)
synset_to_human_url = '{}/imagenet_metadata.txt'.format(base_url)
filename, _ = urllib.request.urlretrieve(synset_url)
synset_list = [s.strip() for s in open(filename).readlines()]
num_synsets_in_ilsvrc = len(synset_list)
assert num_synsets_in_ilsvrc == 1000
filename, _ = urllib.request.urlretrieve(synset_to_human_url)
synset_to_human_list = open(filename).readlines()
num_synsets_in_all_imagenet = len(synset_to_human_list)
assert num_synsets_in_all_imagenet == 21842
synset_to_human = {}
for s in synset_to_human_list:
parts = s.strip().split('\t')
assert len(parts) == 2
synset = parts[0]
human = parts[1]
synset_to_human[synset] = human
label_index = 1
labels_to_names = {0: 'background'}
for synset in synset_list:
name = synset_to_human[synset]
labels_to_names[label_index] = name
label_index += 1
return labels_to_names
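# Illustrative sketch: the returned mapping includes the background class at
# index 0, so an argmax over 1001 logits can be looked up directly:
#
#   names = create_readable_names_for_imagenet_labels()
#   assert names[0] == 'background'
#   print(names[1])  # Human-readable name of the first synset.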
def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
"""Gets a dataset tuple with instructions for reading ImageNet.
Args:
split_name: A train/test split name.
dataset_dir: The base directory of the dataset sources.
file_pattern: The file pattern to use when matching the dataset sources.
It is assumed that the pattern contains a '%s' string so that the split
name can be inserted.
reader: The TensorFlow reader type.
Returns:
A `Dataset` namedtuple.
Raises:
ValueError: if `split_name` is not a valid train/test split.
"""
if split_name not in _SPLITS_TO_SIZES:
raise ValueError('split name %s was not recognized.' % split_name)
if not file_pattern:
file_pattern = _FILE_PATTERN
file_pattern = os.path.join(dataset_dir, file_pattern % split_name)
# Allowing None in the signature so that dataset_factory can use the default.
if reader is None:
reader = tf.TFRecordReader
keys_to_features = {
'image/encoded': tf.FixedLenFeature(
(), tf.string, default_value=''),
'image/format': tf.FixedLenFeature(
(), tf.string, default_value='jpeg'),
'image/class/label': tf.FixedLenFeature(
[], dtype=tf.int64, default_value=-1),
'image/class/text': tf.FixedLenFeature(
[], dtype=tf.string, default_value=''),
'image/object/bbox/xmin': tf.VarLenFeature(
dtype=tf.float32),
'image/object/bbox/ymin': tf.VarLenFeature(
dtype=tf.float32),
'image/object/bbox/xmax': tf.VarLenFeature(
dtype=tf.float32),
'image/object/bbox/ymax': tf.VarLenFeature(
dtype=tf.float32),
'image/object/class/label': tf.VarLenFeature(
dtype=tf.int64),
}
items_to_handlers = {
'image': slim.tfexample_decoder.Image('image/encoded', 'image/format'),
'label': slim.tfexample_decoder.Tensor('image/class/label'),
'label_text': slim.tfexample_decoder.Tensor('image/class/text'),
'object/bbox': slim.tfexample_decoder.BoundingBox(
['ymin', 'xmin', 'ymax', 'xmax'], 'image/object/bbox/'),
'object/label': slim.tfexample_decoder.Tensor('image/object/class/label'),
}
decoder = slim.tfexample_decoder.TFExampleDecoder(
keys_to_features, items_to_handlers)
labels_to_names = None
if LOAD_READABLE_NAMES:
if dataset_utils.has_labels(dataset_dir):
labels_to_names = dataset_utils.read_label_file(dataset_dir)
else:
labels_to_names = create_readable_names_for_imagenet_labels()
dataset_utils.write_label_file(labels_to_names, dataset_dir)
return slim.dataset.Dataset(
data_sources=file_pattern,
reader=reader,
decoder=decoder,
num_samples=_SPLITS_TO_SIZES[split_name],
items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
num_classes=_NUM_CLASSES,
labels_to_names=labels_to_names) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/datasets/imagenet.py | imagenet.py |
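# Illustrative usage sketch, assuming ImageNet TFRecords already exist in the
# hypothetical directory below. Bounding boxes are exposed alongside the image
# and label:
#
#   dataset = get_split('validation', '/tmp/imagenet')
#   provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
#   image, label, bboxes = provider.get(['image', 'label', 'object/bbox'])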
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow.compat.v1 as tf
import tf_slim as slim
from datasets import dataset_utils
_FILE_PATTERN = 'mnist_%s.tfrecord'
_SPLITS_TO_SIZES = {'train': 60000, 'test': 10000}
_NUM_CLASSES = 10
_ITEMS_TO_DESCRIPTIONS = {
'image': 'A [28 x 28 x 1] grayscale image.',
'label': 'A single integer between 0 and 9',
}
def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
"""Gets a dataset tuple with instructions for reading MNIST.
Args:
split_name: A train/test split name.
dataset_dir: The base directory of the dataset sources.
file_pattern: The file pattern to use when matching the dataset sources.
It is assumed that the pattern contains a '%s' string so that the split
name can be inserted.
reader: The TensorFlow reader type.
Returns:
A `Dataset` namedtuple.
Raises:
ValueError: if `split_name` is not a valid train/test split.
"""
if split_name not in _SPLITS_TO_SIZES:
raise ValueError('split name %s was not recognized.' % split_name)
if not file_pattern:
file_pattern = _FILE_PATTERN
file_pattern = os.path.join(dataset_dir, file_pattern % split_name)
# Allowing None in the signature so that dataset_factory can use the default.
if reader is None:
reader = tf.TFRecordReader
keys_to_features = {
'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
'image/format': tf.FixedLenFeature((), tf.string, default_value='raw'),
'image/class/label': tf.FixedLenFeature(
[1], tf.int64, default_value=tf.zeros([1], dtype=tf.int64)),
}
items_to_handlers = {
'image': slim.tfexample_decoder.Image(shape=[28, 28, 1], channels=1),
'label': slim.tfexample_decoder.Tensor('image/class/label', shape=[]),
}
decoder = slim.tfexample_decoder.TFExampleDecoder(
keys_to_features, items_to_handlers)
labels_to_names = None
if dataset_utils.has_labels(dataset_dir):
labels_to_names = dataset_utils.read_label_file(dataset_dir)
return slim.dataset.Dataset(
data_sources=file_pattern,
reader=reader,
decoder=decoder,
num_samples=_SPLITS_TO_SIZES[split_name],
num_classes=_NUM_CLASSES,
items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
labels_to_names=labels_to_names) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/datasets/mnist.py | mnist.py |
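# Illustrative usage sketch, parallel to the cifar10 module; decoded images
# here are grayscale tensors of shape [28, 28, 1]:
#
#   dataset = get_split('test', '/tmp/mnist')
#   provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
#   image, label = provider.get(['image', 'label'])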
r"""Downloads and converts Flowers data to TFRecords of TF-Example protos.
This module downloads the Flowers data, uncompresses it, reads the files
that make up the Flowers data and creates two TFRecord datasets: one for train
and one for test. Each TFRecord dataset is comprised of a set of TF-Example
protocol buffers, each of which contain a single image and label.
The script should take about a minute to run.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import os
import random
import sys
from six.moves import range
from six.moves import zip
import tensorflow.compat.v1 as tf
from datasets import dataset_utils
# The URL where the Flowers data can be downloaded.
_DATA_URL = 'http://download.tensorflow.org/example_images/flower_photos.tgz'
# The number of images in the validation set.
_NUM_VALIDATION = 350
# Seed for repeatability.
_RANDOM_SEED = 0
# The number of shards per dataset split.
_NUM_SHARDS = 5
class ImageReader(object):
"""Helper class that provides TensorFlow image coding utilities."""
def __init__(self):
# Initializes function that decodes RGB JPEG data.
self._decode_jpeg_data = tf.placeholder(dtype=tf.string)
self._decode_jpeg = tf.image.decode_jpeg(self._decode_jpeg_data, channels=3)
def read_image_dims(self, sess, image_data):
image = self.decode_jpeg(sess, image_data)
return image.shape[0], image.shape[1]
def decode_jpeg(self, sess, image_data):
image = sess.run(self._decode_jpeg,
feed_dict={self._decode_jpeg_data: image_data})
assert len(image.shape) == 3
assert image.shape[2] == 3
return image
def _get_filenames_and_classes(dataset_dir):
"""Returns a list of filenames and inferred class names.
Args:
dataset_dir: A directory containing a set of subdirectories representing
class names. Each subdirectory should contain PNG or JPG encoded images.
Returns:
A list of image file paths, relative to `dataset_dir` and the list of
subdirectories, representing class names.
"""
flower_root = os.path.join(dataset_dir, 'flower_photos')
directories = []
class_names = []
for filename in os.listdir(flower_root):
path = os.path.join(flower_root, filename)
if os.path.isdir(path):
directories.append(path)
class_names.append(filename)
photo_filenames = []
for directory in directories:
for filename in os.listdir(directory):
path = os.path.join(directory, filename)
photo_filenames.append(path)
return photo_filenames, sorted(class_names)
def _get_dataset_filename(dataset_dir, split_name, shard_id):
output_filename = 'flowers_%s_%05d-of-%05d.tfrecord' % (
split_name, shard_id, _NUM_SHARDS)
return os.path.join(dataset_dir, output_filename)
def _convert_dataset(split_name, filenames, class_names_to_ids, dataset_dir):
"""Converts the given filenames to a TFRecord dataset.
Args:
split_name: The name of the dataset, either 'train' or 'validation'.
filenames: A list of absolute paths to png or jpg images.
class_names_to_ids: A dictionary from class names (strings) to ids
(integers).
dataset_dir: The directory where the converted datasets are stored.
"""
assert split_name in ['train', 'validation']
num_per_shard = int(math.ceil(len(filenames) / float(_NUM_SHARDS)))
with tf.Graph().as_default():
image_reader = ImageReader()
with tf.Session('') as sess:
for shard_id in range(_NUM_SHARDS):
output_filename = _get_dataset_filename(
dataset_dir, split_name, shard_id)
with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
start_ndx = shard_id * num_per_shard
end_ndx = min((shard_id+1) * num_per_shard, len(filenames))
for i in range(start_ndx, end_ndx):
sys.stdout.write('\r>> Converting image %d/%d shard %d' % (
i+1, len(filenames), shard_id))
sys.stdout.flush()
# Read the filename:
image_data = tf.gfile.GFile(filenames[i], 'rb').read()
height, width = image_reader.read_image_dims(sess, image_data)
class_name = os.path.basename(os.path.dirname(filenames[i]))
class_id = class_names_to_ids[class_name]
example = dataset_utils.image_to_tfexample(
image_data, b'jpg', height, width, class_id)
tfrecord_writer.write(example.SerializeToString())
sys.stdout.write('\n')
sys.stdout.flush()
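# Illustrative sketch (not part of the original module): reading back one
# shard written by _convert_dataset. Assumes the first 'train' shard already
# exists under `dataset_dir`.
def _example_count_records(dataset_dir):
  """Counts the tf.Example records in the first training shard."""
  shard_path = _get_dataset_filename(dataset_dir, 'train', 0)
  count = 0
  for _ in tf.python_io.tf_record_iterator(shard_path):
    count += 1
  return count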
def _clean_up_temporary_files(dataset_dir):
"""Removes temporary files used to create the dataset.
Args:
dataset_dir: The directory where the temporary files are stored.
"""
filename = _DATA_URL.split('/')[-1]
filepath = os.path.join(dataset_dir, filename)
tf.gfile.Remove(filepath)
tmp_dir = os.path.join(dataset_dir, 'flower_photos')
tf.gfile.DeleteRecursively(tmp_dir)
def _dataset_exists(dataset_dir):
for split_name in ['train', 'validation']:
for shard_id in range(_NUM_SHARDS):
output_filename = _get_dataset_filename(
dataset_dir, split_name, shard_id)
if not tf.gfile.Exists(output_filename):
return False
return True
def run(dataset_dir):
"""Runs the download and conversion operation.
Args:
dataset_dir: The dataset directory where the dataset is stored.
"""
if not tf.gfile.Exists(dataset_dir):
tf.gfile.MakeDirs(dataset_dir)
if _dataset_exists(dataset_dir):
print('Dataset files already exist. Exiting without re-creating them.')
return
dataset_utils.download_and_uncompress_tarball(_DATA_URL, dataset_dir)
photo_filenames, class_names = _get_filenames_and_classes(dataset_dir)
class_names_to_ids = dict(
list(zip(class_names, list(range(len(class_names))))))
# Divide into train and test:
random.seed(_RANDOM_SEED)
random.shuffle(photo_filenames)
training_filenames = photo_filenames[_NUM_VALIDATION:]
validation_filenames = photo_filenames[:_NUM_VALIDATION]
# First, convert the training and validation sets.
_convert_dataset('train', training_filenames, class_names_to_ids,
dataset_dir)
_convert_dataset('validation', validation_filenames, class_names_to_ids,
dataset_dir)
# Finally, write the labels file:
labels_to_class_names = dict(
list(zip(list(range(len(class_names))), class_names)))
dataset_utils.write_label_file(labels_to_class_names, dataset_dir)
_clean_up_temporary_files(dataset_dir)
  print('\nFinished converting the Flowers dataset!')
# ==== End of slim/datasets/download_and_convert_flowers.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import glob
import os.path
import sys
import xml.etree.ElementTree as ET
from six.moves import xrange # pylint: disable=redefined-builtin
class BoundingBox(object):
pass
def GetItem(name, root, index=0):
count = 0
for item in root.iter(name):
if count == index:
return item.text
count += 1
# Failed to find "index" occurrence of item.
return -1
def GetInt(name, root, index=0):
return int(GetItem(name, root, index))
def FindNumberBoundingBoxes(root):
index = 0
while True:
if GetInt('xmin', root, index) == -1:
break
index += 1
return index
def ProcessXMLAnnotation(xml_file):
"""Process a single XML file containing a bounding box."""
# pylint: disable=broad-except
try:
tree = ET.parse(xml_file)
except Exception:
print('Failed to parse: ' + xml_file, file=sys.stderr)
return None
# pylint: enable=broad-except
root = tree.getroot()
num_boxes = FindNumberBoundingBoxes(root)
boxes = []
for index in xrange(num_boxes):
box = BoundingBox()
# Grab the 'index' annotation.
box.xmin = GetInt('xmin', root, index)
box.ymin = GetInt('ymin', root, index)
box.xmax = GetInt('xmax', root, index)
box.ymax = GetInt('ymax', root, index)
box.width = GetInt('width', root)
box.height = GetInt('height', root)
box.filename = GetItem('filename', root) + '.JPEG'
box.label = GetItem('name', root)
xmin = float(box.xmin) / float(box.width)
xmax = float(box.xmax) / float(box.width)
ymin = float(box.ymin) / float(box.height)
ymax = float(box.ymax) / float(box.height)
# Some images contain bounding box annotations that
# extend outside of the supplied image. See, e.g.
# n03127925/n03127925_147.xml
# Additionally, for some bounding boxes, the min > max
# or the box is entirely outside of the image.
min_x = min(xmin, xmax)
max_x = max(xmin, xmax)
box.xmin_scaled = min(max(min_x, 0.0), 1.0)
box.xmax_scaled = min(max(max_x, 0.0), 1.0)
min_y = min(ymin, ymax)
max_y = max(ymin, ymax)
box.ymin_scaled = min(max(min_y, 0.0), 1.0)
box.ymax_scaled = min(max(max_y, 0.0), 1.0)
boxes.append(box)
return boxes
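# Illustrative sketch (not part of the original script): processing a single
# annotation file and printing its clamped, normalized boxes. The XML path is
# a hypothetical placeholder following the layout described above.
def _example_print_boxes(xml_file='/tmp/n02084071/n02084071_1.xml'):
  boxes = ProcessXMLAnnotation(xml_file)
  if boxes is None:
    return
  for box in boxes:
    print('%s: [%.4f, %.4f, %.4f, %.4f]' %
          (box.filename, box.xmin_scaled, box.ymin_scaled,
           box.xmax_scaled, box.ymax_scaled))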
if __name__ == '__main__':
if len(sys.argv) < 2 or len(sys.argv) > 3:
print('Invalid usage\n'
'usage: process_bounding_boxes.py <dir> [synsets-file]',
file=sys.stderr)
sys.exit(-1)
xml_files = glob.glob(sys.argv[1] + '/*/*.xml')
print('Identified %d XML files in %s' % (len(xml_files), sys.argv[1]),
file=sys.stderr)
if len(sys.argv) == 3:
labels = set([l.strip() for l in open(sys.argv[2]).readlines()])
print('Identified %d synset IDs in %s' % (len(labels), sys.argv[2]),
file=sys.stderr)
else:
labels = None
skipped_boxes = 0
skipped_files = 0
saved_boxes = 0
saved_files = 0
for file_index, one_file in enumerate(xml_files):
# Example: <...>/n06470073/n00141669_6790.xml
label = os.path.basename(os.path.dirname(one_file))
# Determine if the annotation is from an ImageNet Challenge label.
if labels is not None and label not in labels:
skipped_files += 1
continue
bboxes = ProcessXMLAnnotation(one_file)
assert bboxes is not None, 'No bounding boxes found in ' + one_file
found_box = False
for bbox in bboxes:
if labels is not None:
if bbox.label != label:
# Note: There is a slight bug in the bounding box annotation data.
# Many of the dog labels have the human label 'Scottish_deerhound'
# instead of the synset ID 'n02092002' in the bbox.label field. As a
# simple hack to overcome this issue, we only exclude bbox labels
        # *which are synset IDs* that do not match the original synset label for
# the XML file.
if bbox.label in labels:
skipped_boxes += 1
continue
# Guard against improperly specified boxes.
if (bbox.xmin_scaled >= bbox.xmax_scaled or
bbox.ymin_scaled >= bbox.ymax_scaled):
skipped_boxes += 1
continue
# Note bbox.filename occasionally contains '%s' in the name. This is
# data set noise that is fixed by just using the basename of the XML file.
image_filename = os.path.splitext(os.path.basename(one_file))[0]
print('%s.JPEG,%.4f,%.4f,%.4f,%.4f' %
(image_filename,
bbox.xmin_scaled, bbox.ymin_scaled,
bbox.xmax_scaled, bbox.ymax_scaled))
saved_boxes += 1
found_box = True
if found_box:
saved_files += 1
else:
skipped_files += 1
if not file_index % 5000:
print('--> processed %d of %d XML files.' %
(file_index + 1, len(xml_files)),
file=sys.stderr)
print('--> skipped %d boxes and %d XML files.' %
(skipped_boxes, skipped_files), file=sys.stderr)
print('Finished processing %d XML files.' % len(xml_files), file=sys.stderr)
print('Skipped %d XML files not in ImageNet Challenge.' % skipped_files,
file=sys.stderr)
print('Skipped %d bounding boxes not in ImageNet Challenge.' % skipped_boxes,
file=sys.stderr)
print('Wrote %d bounding boxes from %d annotated images.' %
(saved_boxes, saved_files),
file=sys.stderr)
  print('Finished.', file=sys.stderr)
# ==== End of slim/datasets/process_bounding_boxes.py ====
r"""Downloads and converts VisualWakewords data to TFRecords of TF-Example protos.
This module downloads the COCO dataset, uncompresses it, derives the
VisualWakeWords dataset to create two TFRecord datasets: one for
train and one for test. Each TFRecord dataset is comprised of a set of
TF-Example protocol buffers, each of which contain a single image and label.
The script should take several minutes to run.
Please note that this tool creates sharded output files.
VisualWakeWords dataset is used to design tiny models classifying two classes,
such as person/not-person. The two steps to generate the VisualWakeWords
dataset from the COCO dataset are given below:
1. Use COCO annotations to create VisualWakeWords annotations:
Note: A bounding box is 'valid' if it has the foreground_class_of_interest
(e.g. person) and its area is greater than 0.5% of the image area.
The resulting annotations file has the following fields, where 'images' are
the same as COCO dataset. 'categories' only contains information about the
foreground_class_of_interest (e.g. person) and 'annotations' maps an image to
objects (a list of valid bounding boxes) and label (value is 1 if it has
at least one valid bounding box, otherwise 0)
images[{
"id", "width", "height", "file_name", "flickr_url", "coco_url",
"license", "date_captured",
}]
categories{
"id": {"id", "name", "supercategory"}
}
annotations{
"image_id": {"objects":[{"area", "bbox" : [x,y,width,height]}], "label"}
}
2. Use VisualWakeWords annotations to create TFRecords:
The resulting TFRecord file contains the following features:
{ image/height, image/width, image/source_id, image/encoded,
image/class/label_text, image/class/label,
image/object/class/text,
image/object/bbox/ymin, image/object/bbox/xmin, image/object/bbox/ymax,
image/object/bbox/xmax, image/object/area
image/filename, image/format, image/key/sha256}
For classification models, you need the image/encoded and image/class/label.
Example usage:
Run download_and_convert_data.py in the parent directory as follows:
python download_and_convert_visualwakewords.py --logtostderr \
--dataset_name=visualwakewords \
--dataset_dir="${DATASET_DIR}" \
--small_object_area_threshold=0.005 \
--foreground_class_of_interest='person'
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow.compat.v1 as tf
from datasets import download_and_convert_visualwakewords_lib
tf.logging.set_verbosity(tf.logging.INFO)
tf.app.flags.DEFINE_string(
'coco_dirname', 'coco_dataset',
    'A subdirectory in the visualwakewords dataset directory '
    'containing the coco dataset.')
FLAGS = tf.app.flags.FLAGS
def run(dataset_dir, small_object_area_threshold, foreground_class_of_interest):
"""Runs the download and conversion operation.
Args:
dataset_dir: The dataset directory where the dataset is stored.
small_object_area_threshold: Threshold of fraction of image area below which
small objects are filtered
foreground_class_of_interest: Build a binary classifier based on the
presence or absence of this object in the image.
"""
# 1. Download the coco dataset into a subdirectory under the visualwakewords
# dataset directory
coco_dir = os.path.join(dataset_dir, FLAGS.coco_dirname)
if not tf.gfile.IsDirectory(coco_dir):
tf.gfile.MakeDirs(coco_dir)
download_and_convert_visualwakewords_lib.download_coco_dataset(coco_dir)
# Path to COCO annotations
train_annotations_file = os.path.join(coco_dir, 'annotations',
'instances_train2014.json')
val_annotations_file = os.path.join(coco_dir, 'annotations',
'instances_val2014.json')
train_image_dir = os.path.join(coco_dir, 'train2014')
val_image_dir = os.path.join(coco_dir, 'val2014')
# Path to VisualWakeWords annotations
visualwakewords_annotations_train = os.path.join(
dataset_dir, 'instances_visualwakewords_train2014.json')
visualwakewords_annotations_val = os.path.join(
dataset_dir, 'instances_visualwakewords_val2014.json')
visualwakewords_labels_filename = os.path.join(dataset_dir, 'labels.txt')
train_output_path = os.path.join(dataset_dir, 'train.record')
val_output_path = os.path.join(dataset_dir, 'val.record')
# 2. Create a labels file
tf.logging.info('Creating a labels file...')
download_and_convert_visualwakewords_lib.create_labels_file(
foreground_class_of_interest, visualwakewords_labels_filename)
# 3. Use COCO annotations to create VisualWakeWords annotations
tf.logging.info('Creating train VisualWakeWords annotations...')
download_and_convert_visualwakewords_lib.create_visual_wakeword_annotations(
train_annotations_file, visualwakewords_annotations_train,
small_object_area_threshold, foreground_class_of_interest)
tf.logging.info('Creating validation VisualWakeWords annotations...')
download_and_convert_visualwakewords_lib.create_visual_wakeword_annotations(
val_annotations_file, visualwakewords_annotations_val,
small_object_area_threshold, foreground_class_of_interest)
# 4. Use VisualWakeWords annotations to create the TFRecords
tf.logging.info('Creating train TFRecords for VisualWakeWords dataset...')
download_and_convert_visualwakewords_lib.create_tf_record_for_visualwakewords_dataset(
visualwakewords_annotations_train,
train_image_dir,
train_output_path,
num_shards=100)
tf.logging.info(
'Creating validation TFRecords for VisualWakeWords dataset...')
download_and_convert_visualwakewords_lib.create_tf_record_for_visualwakewords_dataset(
visualwakewords_annotations_val,
val_image_dir,
val_output_path,
      num_shards=10)
# ==== End of slim/datasets/download_and_convert_visualwakewords.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow.compat.v1 as tf
import tf_slim as slim
from datasets import dataset_utils
_FILE_PATTERN = '%s.record-*'
_SPLITS_TO_SIZES = {
'train': 82783,
'val': 40504,
}
_ITEMS_TO_DESCRIPTIONS = {
'image': 'A color image of varying height and width.',
'label': 'The label id of the image, an integer in {0, 1}',
'object/bbox': 'A list of bounding boxes.',
}
_NUM_CLASSES = 2
# labels file
LABELS_FILENAME = 'labels.txt'
def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
"""Gets a dataset tuple with instructions for reading ImageNet.
Args:
split_name: A train/test split name.
dataset_dir: The base directory of the dataset sources.
file_pattern: The file pattern to use when matching the dataset sources. It
is assumed that the pattern contains a '%s' string so that the split name
can be inserted.
reader: The TensorFlow reader type.
Returns:
A `Dataset` namedtuple.
Raises:
ValueError: if `split_name` is not a valid train/test split.
"""
if split_name not in _SPLITS_TO_SIZES:
raise ValueError('split name %s was not recognized.' % split_name)
if not file_pattern:
file_pattern = _FILE_PATTERN
file_pattern = os.path.join(dataset_dir, file_pattern % split_name)
# Allowing None in the signature so that dataset_factory can use the default.
if reader is None:
reader = tf.TFRecordReader
keys_to_features = {
'image/encoded':
tf.FixedLenFeature((), tf.string, default_value=''),
'image/format':
tf.FixedLenFeature((), tf.string, default_value='jpeg'),
'image/class/label':
tf.FixedLenFeature([], dtype=tf.int64, default_value=-1),
'image/object/bbox/xmin':
tf.VarLenFeature(dtype=tf.float32),
'image/object/bbox/ymin':
tf.VarLenFeature(dtype=tf.float32),
'image/object/bbox/xmax':
tf.VarLenFeature(dtype=tf.float32),
'image/object/bbox/ymax':
tf.VarLenFeature(dtype=tf.float32),
}
items_to_handlers = {
'image':
slim.tfexample_decoder.Image('image/encoded', 'image/format'),
'label':
slim.tfexample_decoder.Tensor('image/class/label'),
'object/bbox':
slim.tfexample_decoder.BoundingBox(['ymin', 'xmin', 'ymax', 'xmax'],
'image/object/bbox/'),
}
decoder = slim.tfexample_decoder.TFExampleDecoder(keys_to_features,
items_to_handlers)
labels_to_names = None
labels_file = os.path.join(dataset_dir, LABELS_FILENAME)
if tf.gfile.Exists(labels_file):
labels_to_names = dataset_utils.read_label_file(dataset_dir)
return slim.dataset.Dataset(
data_sources=file_pattern,
reader=reader,
decoder=decoder,
num_samples=_SPLITS_TO_SIZES[split_name],
items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
num_classes=_NUM_CLASSES,
      labels_to_names=labels_to_names)
# ==== End of slim/datasets/visualwakewords.py ====
"""Contains utilities for downloading and converting datasets."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import tarfile
import zipfile
from six.moves import urllib
import tensorflow.compat.v1 as tf
LABELS_FILENAME = 'labels.txt'
def int64_feature(values):
"""Returns a TF-Feature of int64s.
Args:
values: A scalar or list of values.
Returns:
A TF-Feature.
"""
if not isinstance(values, (tuple, list)):
values = [values]
return tf.train.Feature(int64_list=tf.train.Int64List(value=values))
def bytes_list_feature(values):
"""Returns a TF-Feature of list of bytes.
Args:
values: A string or list of strings.
Returns:
A TF-Feature.
"""
return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))
def float_list_feature(values):
"""Returns a TF-Feature of list of floats.
Args:
values: A float or list of floats.
Returns:
A TF-Feature.
"""
return tf.train.Feature(float_list=tf.train.FloatList(value=values))
def bytes_feature(values):
"""Returns a TF-Feature of bytes.
Args:
values: A string.
Returns:
A TF-Feature.
"""
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[values]))
def float_feature(values):
"""Returns a TF-Feature of floats.
Args:
    values: A scalar or list of values.
Returns:
A TF-Feature.
"""
if not isinstance(values, (tuple, list)):
values = [values]
return tf.train.Feature(float_list=tf.train.FloatList(value=values))
def image_to_tfexample(image_data, image_format, height, width, class_id):
return tf.train.Example(features=tf.train.Features(feature={
'image/encoded': bytes_feature(image_data),
'image/format': bytes_feature(image_format),
'image/class/label': int64_feature(class_id),
'image/height': int64_feature(height),
'image/width': int64_feature(width),
}))
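# Illustrative sketch (not part of the original module): building a minimal
# tf.Example with image_to_tfexample. The bytes below are a hypothetical
# placeholder for real encoded image data; the helper only wraps them.
def _example_build_tfexample():
  fake_png_bytes = b'\x89PNG\r\n\x1a\n'  # Placeholder, not a decodable image.
  example = image_to_tfexample(fake_png_bytes, b'png', height=1, width=1,
                               class_id=0)
  return example.SerializeToString()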
def download_url(url, dataset_dir):
"""Downloads the tarball or zip file from url into filepath.
Args:
url: The URL of a tarball or zip file.
dataset_dir: The directory where the temporary files are stored.
Returns:
filepath: path where the file is downloaded.
"""
filename = url.split('/')[-1]
filepath = os.path.join(dataset_dir, filename)
def _progress(count, block_size, total_size):
sys.stdout.write('\r>> Downloading %s %.1f%%' % (
filename, float(count * block_size) / float(total_size) * 100.0))
sys.stdout.flush()
filepath, _ = urllib.request.urlretrieve(url, filepath, _progress)
print()
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
return filepath
def download_and_uncompress_tarball(tarball_url, dataset_dir):
"""Downloads the `tarball_url` and uncompresses it locally.
Args:
tarball_url: The URL of a tarball file.
dataset_dir: The directory where the temporary files are stored.
"""
filepath = download_url(tarball_url, dataset_dir)
tarfile.open(filepath, 'r:gz').extractall(dataset_dir)
def download_and_uncompress_zipfile(zip_url, dataset_dir):
"""Downloads the `zip_url` and uncompresses it locally.
Args:
zip_url: The URL of a zip file.
dataset_dir: The directory where the temporary files are stored.
"""
filename = zip_url.split('/')[-1]
filepath = os.path.join(dataset_dir, filename)
if tf.gfile.Exists(filepath):
    print('File {filename} has already been downloaded at {filepath}. '
          'Unzipping it...'.format(filename=filename, filepath=filepath))
else:
filepath = download_url(zip_url, dataset_dir)
with zipfile.ZipFile(filepath, 'r') as zip_file:
for member in zip_file.namelist():
memberpath = os.path.join(dataset_dir, member)
      # Extract only if the file doesn't already exist.
      if not os.path.exists(memberpath):
zip_file.extract(member, dataset_dir)
def write_label_file(labels_to_class_names,
dataset_dir,
filename=LABELS_FILENAME):
"""Writes a file with the list of class names.
Args:
labels_to_class_names: A map of (integer) labels to class names.
dataset_dir: The directory in which the labels file should be written.
filename: The filename where the class names are written.
"""
labels_filename = os.path.join(dataset_dir, filename)
with tf.gfile.Open(labels_filename, 'w') as f:
for label in labels_to_class_names:
class_name = labels_to_class_names[label]
f.write('%d:%s\n' % (label, class_name))
def has_labels(dataset_dir, filename=LABELS_FILENAME):
"""Specifies whether or not the dataset directory contains a label map file.
Args:
dataset_dir: The directory in which the labels file is found.
filename: The filename where the class names are written.
Returns:
`True` if the labels file exists and `False` otherwise.
"""
return tf.gfile.Exists(os.path.join(dataset_dir, filename))
def read_label_file(dataset_dir, filename=LABELS_FILENAME):
"""Reads the labels file and returns a mapping from ID to class name.
Args:
dataset_dir: The directory in which the labels file is found.
filename: The filename where the class names are written.
Returns:
A map from a label (integer) to class name.
"""
labels_filename = os.path.join(dataset_dir, filename)
with tf.gfile.Open(labels_filename, 'rb') as f:
lines = f.read().decode()
lines = lines.split('\n')
lines = filter(None, lines)
labels_to_class_names = {}
for line in lines:
index = line.index(':')
labels_to_class_names[int(line[:index])] = line[index+1:]
return labels_to_class_names
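# Illustrative sketch (not part of the original module): round-tripping a
# label map through write_label_file and read_label_file. Assumes
# `dataset_dir` is a writable directory.
def _example_labels_round_trip(dataset_dir='/tmp'):
  write_label_file({0: 'cat', 1: 'dog'}, dataset_dir)
  assert read_label_file(dataset_dir) == {0: 'cat', 1: 'dog'}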
def open_sharded_output_tfrecords(exit_stack, base_path, num_shards):
"""Opens all TFRecord shards for writing and adds them to an exit stack.
Args:
    exit_stack: A contextlib2.ExitStack used to automatically close the TFRecords
opened in this function.
base_path: The base path for all shards
num_shards: The number of shards
Returns:
The list of opened TFRecords. Position k in the list corresponds to shard k.
"""
tf_record_output_filenames = [
'{}-{:05d}-of-{:05d}'.format(base_path, idx, num_shards)
for idx in range(num_shards)
]
tfrecords = [
exit_stack.enter_context(tf.python_io.TFRecordWriter(file_name))
for file_name in tf_record_output_filenames
]
  return tfrecords
# ==== End of slim/datasets/dataset_utils.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow.compat.v1 as tf
import tf_slim as slim
from datasets import dataset_utils
_FILE_PATTERN = 'flowers_%s_*.tfrecord'
SPLITS_TO_SIZES = {'train': 3320, 'validation': 350}
_NUM_CLASSES = 5
_ITEMS_TO_DESCRIPTIONS = {
'image': 'A color image of varying size.',
'label': 'A single integer between 0 and 4',
}
def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
"""Gets a dataset tuple with instructions for reading flowers.
Args:
split_name: A train/validation split name.
dataset_dir: The base directory of the dataset sources.
file_pattern: The file pattern to use when matching the dataset sources.
It is assumed that the pattern contains a '%s' string so that the split
name can be inserted.
reader: The TensorFlow reader type.
Returns:
A `Dataset` namedtuple.
Raises:
ValueError: if `split_name` is not a valid train/validation split.
"""
if split_name not in SPLITS_TO_SIZES:
raise ValueError('split name %s was not recognized.' % split_name)
if not file_pattern:
file_pattern = _FILE_PATTERN
file_pattern = os.path.join(dataset_dir, file_pattern % split_name)
# Allowing None in the signature so that dataset_factory can use the default.
if reader is None:
reader = tf.TFRecordReader
keys_to_features = {
'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
'image/format': tf.FixedLenFeature((), tf.string, default_value='png'),
'image/class/label': tf.FixedLenFeature(
[], tf.int64, default_value=tf.zeros([], dtype=tf.int64)),
}
items_to_handlers = {
'image': slim.tfexample_decoder.Image(),
'label': slim.tfexample_decoder.Tensor('image/class/label'),
}
decoder = slim.tfexample_decoder.TFExampleDecoder(
keys_to_features, items_to_handlers)
labels_to_names = None
if dataset_utils.has_labels(dataset_dir):
labels_to_names = dataset_utils.read_label_file(dataset_dir)
return slim.dataset.Dataset(
data_sources=file_pattern,
reader=reader,
decoder=decoder,
num_samples=SPLITS_TO_SIZES[split_name],
items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
num_classes=_NUM_CLASSES,
      labels_to_names=labels_to_names)
# ==== End of slim/datasets/flowers.py ====
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import os
import random
import sys
import threading
import numpy as np
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow.compat.v1 as tf
tf.app.flags.DEFINE_string('train_directory', '/tmp/',
'Training data directory')
tf.app.flags.DEFINE_string('validation_directory', '/tmp/',
'Validation data directory')
tf.app.flags.DEFINE_string('output_directory', '/tmp/',
'Output data directory')
tf.app.flags.DEFINE_integer('train_shards', 1024,
'Number of shards in training TFRecord files.')
tf.app.flags.DEFINE_integer('validation_shards', 128,
'Number of shards in validation TFRecord files.')
tf.app.flags.DEFINE_integer('num_threads', 8,
'Number of threads to preprocess the images.')
# The labels file contains the list of valid labels.
# Assumes that the file contains entries as such:
# n01440764
# n01443537
# n01484850
# where each line corresponds to a label expressed as a synset. We map
# each synset contained in the file to an integer (based on the alphabetical
# ordering). See below for details.
tf.app.flags.DEFINE_string('labels_file',
'imagenet_lsvrc_2015_synsets.txt',
'Labels file')
# This file contains the mapping from synset to human-readable label.
# Assumes each line of the file looks like:
#
# n02119247 black fox
# n02119359 silver fox
# n02119477 red fox, Vulpes fulva
#
# where each line corresponds to a unique mapping. Note that each line is
# formatted as <synset>\t<human readable label>.
tf.app.flags.DEFINE_string('imagenet_metadata_file',
'imagenet_metadata.txt',
'ImageNet metadata file')
# This file is the output of process_bounding_box.py
# Assumes each line of the file looks like:
#
# n00007846_64193.JPEG,0.0060,0.2620,0.7545,0.9940
#
# where each line corresponds to one bounding box annotation associated
# with an image. Each line can be parsed as:
#
# <JPEG file name>, <xmin>, <ymin>, <xmax>, <ymax>
#
# Note that there might exist multiple bounding box annotations associated
# with an image file.
tf.app.flags.DEFINE_string('bounding_box_file',
'./imagenet_2012_bounding_boxes.csv',
'Bounding box file')
FLAGS = tf.app.flags.FLAGS
def _int64_feature(value):
"""Wrapper for inserting int64 features into Example proto."""
if not isinstance(value, list):
value = [value]
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
def _float_feature(value):
"""Wrapper for inserting float features into Example proto."""
if not isinstance(value, list):
value = [value]
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
def _bytes_feature(value):
  """Wrapper for inserting bytes features into Example proto."""
  if not isinstance(value, bytes):
    # Encode text to bytes so the proto accepts it under Python 3.
    value = value.encode('utf-8')
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _convert_to_example(filename, image_buffer, label, synset, human, bbox,
height, width):
"""Build an Example proto for an example.
Args:
filename: string, path to an image file, e.g., '/path/to/example.JPG'
image_buffer: string, JPEG encoding of RGB image
label: integer, identifier for the ground truth for the network
synset: string, unique WordNet ID specifying the label, e.g., 'n02323233'
human: string, human-readable label, e.g., 'red fox, Vulpes vulpes'
bbox: list of bounding boxes; each box is a list of integers
specifying [xmin, ymin, xmax, ymax]. All boxes are assumed to belong to
the same label as the image label.
height: integer, image height in pixels
width: integer, image width in pixels
Returns:
Example proto
"""
xmin = []
ymin = []
xmax = []
ymax = []
for b in bbox:
assert len(b) == 4
# pylint: disable=expression-not-assigned
[l.append(point) for l, point in zip([xmin, ymin, xmax, ymax], b)]
# pylint: enable=expression-not-assigned
colorspace = 'RGB'
channels = 3
image_format = 'JPEG'
example = tf.train.Example(features=tf.train.Features(feature={
'image/height': _int64_feature(height),
'image/width': _int64_feature(width),
'image/colorspace': _bytes_feature(colorspace),
'image/channels': _int64_feature(channels),
'image/class/label': _int64_feature(label),
'image/class/synset': _bytes_feature(synset),
'image/class/text': _bytes_feature(human),
'image/object/bbox/xmin': _float_feature(xmin),
'image/object/bbox/xmax': _float_feature(xmax),
'image/object/bbox/ymin': _float_feature(ymin),
'image/object/bbox/ymax': _float_feature(ymax),
'image/object/bbox/label': _int64_feature([label] * len(xmin)),
'image/format': _bytes_feature(image_format),
'image/filename': _bytes_feature(os.path.basename(filename)),
'image/encoded': _bytes_feature(image_buffer)}))
return example
class ImageCoder(object):
"""Helper class that provides TensorFlow image coding utilities."""
def __init__(self):
# Create a single Session to run all image coding calls.
self._sess = tf.Session()
# Initializes function that converts PNG to JPEG data.
self._png_data = tf.placeholder(dtype=tf.string)
image = tf.image.decode_png(self._png_data, channels=3)
self._png_to_jpeg = tf.image.encode_jpeg(image, format='rgb', quality=100)
# Initializes function that converts CMYK JPEG data to RGB JPEG data.
self._cmyk_data = tf.placeholder(dtype=tf.string)
image = tf.image.decode_jpeg(self._cmyk_data, channels=0)
self._cmyk_to_rgb = tf.image.encode_jpeg(image, format='rgb', quality=100)
# Initializes function that decodes RGB JPEG data.
self._decode_jpeg_data = tf.placeholder(dtype=tf.string)
self._decode_jpeg = tf.image.decode_jpeg(self._decode_jpeg_data, channels=3)
def png_to_jpeg(self, image_data):
return self._sess.run(self._png_to_jpeg,
feed_dict={self._png_data: image_data})
def cmyk_to_rgb(self, image_data):
return self._sess.run(self._cmyk_to_rgb,
feed_dict={self._cmyk_data: image_data})
def decode_jpeg(self, image_data):
image = self._sess.run(self._decode_jpeg,
feed_dict={self._decode_jpeg_data: image_data})
assert len(image.shape) == 3
assert image.shape[2] == 3
return image
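# Illustrative sketch (not part of the original script): converting a PNG
# file to JPEG bytes with ImageCoder. The path is a hypothetical placeholder.
def _example_png_to_jpeg(png_path='/tmp/example.png'):
  coder = ImageCoder()
  png_data = tf.gfile.GFile(png_path, 'rb').read()
  return coder.png_to_jpeg(png_data)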
def _is_png(filename):
"""Determine if a file contains a PNG format image.
Args:
filename: string, path of the image file.
Returns:
boolean indicating if the image is a PNG.
"""
# File list from:
# https://groups.google.com/forum/embed/?place=forum/torch7#!topic/torch7/fOSTXHIESSU
return 'n02105855_2933.JPEG' in filename
def _is_cmyk(filename):
"""Determine if file contains a CMYK JPEG format image.
Args:
filename: string, path of the image file.
Returns:
boolean indicating if the image is a JPEG encoded with CMYK color space.
"""
# File list from:
# https://github.com/cytsai/ilsvrc-cmyk-image-list
blacklist = ['n01739381_1309.JPEG', 'n02077923_14822.JPEG',
'n02447366_23489.JPEG', 'n02492035_15739.JPEG',
'n02747177_10752.JPEG', 'n03018349_4028.JPEG',
'n03062245_4620.JPEG', 'n03347037_9675.JPEG',
'n03467068_12171.JPEG', 'n03529860_11437.JPEG',
'n03544143_17228.JPEG', 'n03633091_5218.JPEG',
'n03710637_5125.JPEG', 'n03961711_5286.JPEG',
'n04033995_2932.JPEG', 'n04258138_17003.JPEG',
'n04264628_27969.JPEG', 'n04336792_7448.JPEG',
'n04371774_5854.JPEG', 'n04596742_4225.JPEG',
'n07583066_647.JPEG', 'n13037406_4650.JPEG']
return filename.split('/')[-1] in blacklist
def _process_image(filename, coder):
"""Process a single image file.
Args:
filename: string, path to an image file e.g., '/path/to/example.JPG'.
coder: instance of ImageCoder to provide TensorFlow image coding utils.
Returns:
image_buffer: string, JPEG encoding of RGB image.
height: integer, image height in pixels.
width: integer, image width in pixels.
"""
# Read the image file.
  image_data = tf.gfile.GFile(filename, 'rb').read()
# Clean the dirty data.
if _is_png(filename):
# 1 image is a PNG.
print('Converting PNG to JPEG for %s' % filename)
image_data = coder.png_to_jpeg(image_data)
elif _is_cmyk(filename):
# 22 JPEG images are in CMYK colorspace.
print('Converting CMYK to RGB for %s' % filename)
image_data = coder.cmyk_to_rgb(image_data)
# Decode the RGB JPEG.
image = coder.decode_jpeg(image_data)
# Check that image converted to RGB
assert len(image.shape) == 3
height = image.shape[0]
width = image.shape[1]
assert image.shape[2] == 3
return image_data, height, width
def _process_image_files_batch(coder, thread_index, ranges, name, filenames,
synsets, labels, humans, bboxes, num_shards):
"""Processes and saves list of images as TFRecord in 1 thread.
Args:
coder: instance of ImageCoder to provide TensorFlow image coding utils.
    thread_index: integer, unique thread identifier within [0, len(ranges)).
ranges: list of pairs of integers specifying ranges of each batches to
analyze in parallel.
name: string, unique identifier specifying the data set
filenames: list of strings; each string is a path to an image file
synsets: list of strings; each string is a unique WordNet ID
    labels: list of integers; each integer identifies the ground truth
    humans: list of strings; each string is a human-readable label
    bboxes: list of bounding boxes for each image. Note that each entry in
      this list may contain zero or more bounding boxes, one per annotation
      for the image.
num_shards: integer number of shards for this data set.
"""
# Each thread produces N shards where N = int(num_shards / num_threads).
# For instance, if num_shards = 128, and the num_threads = 2, then the first
# thread would produce shards [0, 64).
num_threads = len(ranges)
assert not num_shards % num_threads
num_shards_per_batch = int(num_shards / num_threads)
shard_ranges = np.linspace(ranges[thread_index][0],
ranges[thread_index][1],
num_shards_per_batch + 1).astype(int)
num_files_in_thread = ranges[thread_index][1] - ranges[thread_index][0]
counter = 0
for s in xrange(num_shards_per_batch):
# Generate a sharded version of the file name, e.g. 'train-00002-of-00010'
shard = thread_index * num_shards_per_batch + s
output_filename = '%s-%.5d-of-%.5d' % (name, shard, num_shards)
output_file = os.path.join(FLAGS.output_directory, output_filename)
writer = tf.python_io.TFRecordWriter(output_file)
shard_counter = 0
files_in_shard = np.arange(shard_ranges[s], shard_ranges[s + 1], dtype=int)
for i in files_in_shard:
filename = filenames[i]
label = labels[i]
synset = synsets[i]
human = humans[i]
bbox = bboxes[i]
image_buffer, height, width = _process_image(filename, coder)
example = _convert_to_example(filename, image_buffer, label,
synset, human, bbox,
height, width)
writer.write(example.SerializeToString())
shard_counter += 1
counter += 1
if not counter % 1000:
print('%s [thread %d]: Processed %d of %d images in thread batch.' %
(datetime.now(), thread_index, counter, num_files_in_thread))
sys.stdout.flush()
writer.close()
print('%s [thread %d]: Wrote %d images to %s' %
(datetime.now(), thread_index, shard_counter, output_file))
sys.stdout.flush()
shard_counter = 0
print('%s [thread %d]: Wrote %d images to %d shards.' %
(datetime.now(), thread_index, counter, num_files_in_thread))
sys.stdout.flush()
def _process_image_files(name, filenames, synsets, labels, humans,
bboxes, num_shards):
"""Process and save list of images as TFRecord of Example protos.
Args:
name: string, unique identifier specifying the data set
filenames: list of strings; each string is a path to an image file
synsets: list of strings; each string is a unique WordNet ID
    labels: list of integers; each integer identifies the ground truth
    humans: list of strings; each string is a human-readable label
    bboxes: list of bounding boxes for each image. Note that each entry in
      this list may contain zero or more bounding boxes, one per annotation
      for the image.
num_shards: integer number of shards for this data set.
"""
assert len(filenames) == len(synsets)
assert len(filenames) == len(labels)
assert len(filenames) == len(humans)
assert len(filenames) == len(bboxes)
# Break all images into batches with a [ranges[i][0], ranges[i][1]].
  spacing = np.linspace(0, len(filenames), FLAGS.num_threads + 1).astype(int)
ranges = []
threads = []
for i in xrange(len(spacing) - 1):
ranges.append([spacing[i], spacing[i+1]])
# Launch a thread for each batch.
print('Launching %d threads for spacings: %s' % (FLAGS.num_threads, ranges))
sys.stdout.flush()
# Create a mechanism for monitoring when all threads are finished.
coord = tf.train.Coordinator()
# Create a generic TensorFlow-based utility for converting all image codings.
coder = ImageCoder()
threads = []
for thread_index in xrange(len(ranges)):
args = (coder, thread_index, ranges, name, filenames,
synsets, labels, humans, bboxes, num_shards)
t = threading.Thread(target=_process_image_files_batch, args=args)
t.start()
threads.append(t)
# Wait for all the threads to terminate.
coord.join(threads)
print('%s: Finished writing all %d images in data set.' %
(datetime.now(), len(filenames)))
sys.stdout.flush()
def _find_image_files(data_dir, labels_file):
"""Build a list of all images files and labels in the data set.
Args:
data_dir: string, path to the root directory of images.
Assumes that the ImageNet data set resides in JPEG files located in
the following directory structure.
data_dir/n01440764/ILSVRC2012_val_00000293.JPEG
data_dir/n01440764/ILSVRC2012_val_00000543.JPEG
where 'n01440764' is the unique synset label associated with these images.
labels_file: string, path to the labels file.
The list of valid labels are held in this file. Assumes that the file
contains entries as such:
n01440764
n01443537
n01484850
where each line corresponds to a label expressed as a synset. We map
each synset contained in the file to an integer (based on the alphabetical
ordering) starting with the integer 1 corresponding to the synset
contained in the first line.
The reason we start the integer labels at 1 is to reserve label 0 as an
unused background class.
Returns:
filenames: list of strings; each string is a path to an image file.
synsets: list of strings; each string is a unique WordNet ID.
    labels: list of integers; each integer identifies the ground truth.
"""
print('Determining list of input files and labels from %s.' % data_dir)
challenge_synsets = [
l.strip() for l in tf.gfile.GFile(labels_file, 'r').readlines()
]
labels = []
filenames = []
synsets = []
# Leave label index 0 empty as a background class.
label_index = 1
# Construct the list of JPEG files and labels.
for synset in challenge_synsets:
jpeg_file_path = '%s/%s/*.JPEG' % (data_dir, synset)
matching_files = tf.gfile.Glob(jpeg_file_path)
labels.extend([label_index] * len(matching_files))
synsets.extend([synset] * len(matching_files))
filenames.extend(matching_files)
if not label_index % 100:
print('Finished finding files in %d of %d classes.' % (
label_index, len(challenge_synsets)))
label_index += 1
# Shuffle the ordering of all image files in order to guarantee
# random ordering of the images with respect to label in the
# saved TFRecord files. Make the randomization repeatable.
  shuffled_index = list(range(len(filenames)))
  random.seed(12345)
  random.shuffle(shuffled_index)
filenames = [filenames[i] for i in shuffled_index]
synsets = [synsets[i] for i in shuffled_index]
labels = [labels[i] for i in shuffled_index]
print('Found %d JPEG files across %d labels inside %s.' %
(len(filenames), len(challenge_synsets), data_dir))
return filenames, synsets, labels
def _find_human_readable_labels(synsets, synset_to_human):
"""Build a list of human-readable labels.
Args:
synsets: list of strings; each string is a unique WordNet ID.
synset_to_human: dict of synset to human labels, e.g.,
'n02119022' --> 'red fox, Vulpes vulpes'
Returns:
List of human-readable strings corresponding to each synset.
"""
humans = []
for s in synsets:
assert s in synset_to_human, ('Failed to find: %s' % s)
humans.append(synset_to_human[s])
return humans
def _find_image_bounding_boxes(filenames, image_to_bboxes):
"""Find the bounding boxes for a given image file.
Args:
filenames: list of strings; each string is a path to an image file.
image_to_bboxes: dictionary mapping image file names to a list of
bounding boxes. This list contains 0+ bounding boxes.
Returns:
    List of bounding boxes for each image. Note that each entry in this
    list may contain zero or more bounding boxes, one per annotation for
    the image.
"""
num_image_bbox = 0
bboxes = []
for f in filenames:
basename = os.path.basename(f)
if basename in image_to_bboxes:
bboxes.append(image_to_bboxes[basename])
num_image_bbox += 1
else:
bboxes.append([])
print('Found %d images with bboxes out of %d images' % (
num_image_bbox, len(filenames)))
return bboxes
def _process_dataset(name, directory, num_shards, synset_to_human,
image_to_bboxes):
"""Process a complete data set and save it as a TFRecord.
Args:
name: string, unique identifier specifying the data set.
directory: string, root path to the data set.
num_shards: integer number of shards for this data set.
synset_to_human: dict of synset to human labels, e.g.,
'n02119022' --> 'red fox, Vulpes vulpes'
image_to_bboxes: dictionary mapping image file names to a list of
bounding boxes. This list contains 0+ bounding boxes.
"""
filenames, synsets, labels = _find_image_files(directory, FLAGS.labels_file)
humans = _find_human_readable_labels(synsets, synset_to_human)
bboxes = _find_image_bounding_boxes(filenames, image_to_bboxes)
_process_image_files(name, filenames, synsets, labels,
humans, bboxes, num_shards)
def _build_synset_lookup(imagenet_metadata_file):
"""Build lookup for synset to human-readable label.
Args:
imagenet_metadata_file: string, path to file containing mapping from
synset to human-readable label.
Assumes each line of the file looks like:
n02119247 black fox
n02119359 silver fox
n02119477 red fox, Vulpes fulva
where each line corresponds to a unique mapping. Note that each line is
formatted as <synset>\t<human readable label>.
Returns:
Dictionary of synset to human labels, such as:
'n02119022' --> 'red fox, Vulpes vulpes'
"""
lines = tf.gfile.GFile(imagenet_metadata_file, 'r').readlines()
synset_to_human = {}
for l in lines:
if l:
parts = l.strip().split('\t')
assert len(parts) == 2
synset = parts[0]
human = parts[1]
synset_to_human[synset] = human
return synset_to_human
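# Illustrative sketch (not part of the original script): the lookup built by
# _build_synset_lookup, reproduced here from a two-line literal instead of a
# metadata file.
def _example_synset_lookup():
  lines = ['n02119247\tblack fox\n', 'n02119359\tsilver fox\n']
  synset_to_human = {}
  for l in lines:
    synset, human = l.strip().split('\t')
    synset_to_human[synset] = human
  return synset_to_human  # {'n02119247': 'black fox', 'n02119359': 'silver fox'}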
def _build_bounding_box_lookup(bounding_box_file):
"""Build a lookup from image file to bounding boxes.
Args:
bounding_box_file: string, path to file with bounding boxes annotations.
Assumes each line of the file looks like:
n00007846_64193.JPEG,0.0060,0.2620,0.7545,0.9940
where each line corresponds to one bounding box annotation associated
with an image. Each line can be parsed as:
<JPEG file name>, <xmin>, <ymin>, <xmax>, <ymax>
      Note that there might exist multiple bounding box annotations associated
with an image file. This file is the output of process_bounding_boxes.py.
Returns:
Dictionary mapping image file names to a list of bounding boxes. This list
contains 0+ bounding boxes.
"""
lines = tf.gfile.GFile(bounding_box_file, 'r').readlines()
images_to_bboxes = {}
num_bbox = 0
num_image = 0
for l in lines:
if l:
parts = l.split(',')
assert len(parts) == 5, ('Failed to parse: %s' % l)
filename = parts[0]
xmin = float(parts[1])
ymin = float(parts[2])
xmax = float(parts[3])
ymax = float(parts[4])
box = [xmin, ymin, xmax, ymax]
if filename not in images_to_bboxes:
images_to_bboxes[filename] = []
num_image += 1
images_to_bboxes[filename].append(box)
num_bbox += 1
print('Successfully read %d bounding boxes '
'across %d images.' % (num_bbox, num_image))
return images_to_bboxes
def main(unused_argv):
assert not FLAGS.train_shards % FLAGS.num_threads, (
'Please make the FLAGS.num_threads commensurate with FLAGS.train_shards')
assert not FLAGS.validation_shards % FLAGS.num_threads, (
'Please make the FLAGS.num_threads commensurate with '
'FLAGS.validation_shards')
print('Saving results to %s' % FLAGS.output_directory)
# Build a map from synset to human-readable label.
synset_to_human = _build_synset_lookup(FLAGS.imagenet_metadata_file)
image_to_bboxes = _build_bounding_box_lookup(FLAGS.bounding_box_file)
# Run it!
_process_dataset('validation', FLAGS.validation_directory,
FLAGS.validation_shards, synset_to_human, image_to_bboxes)
_process_dataset('train', FLAGS.train_directory, FLAGS.train_shards,
synset_to_human, image_to_bboxes)
if __name__ == '__main__':
  tf.app.run()
# ==== End of slim/datasets/build_imagenet_data.py ====
r"""Downloads and converts cifar10 data to TFRecords of TF-Example protos.
This module downloads the cifar10 data, uncompresses it, reads the files
that make up the cifar10 data and creates two TFRecord datasets: one for train
and one for test. Each TFRecord dataset is comprised of a set of TF-Example
protocol buffers, each of which contain a single image and label.
The script should take several minutes to run.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import tarfile
import numpy as np
from six.moves import cPickle
from six.moves import urllib
import tensorflow.compat.v1 as tf
from datasets import dataset_utils
# The URL where the CIFAR data can be downloaded.
_DATA_URL = 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
# The number of training files.
_NUM_TRAIN_FILES = 5
# The height and width of each image.
_IMAGE_SIZE = 32
# The names of the classes.
_CLASS_NAMES = [
'airplane',
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck',
]
def _add_to_tfrecord(filename, tfrecord_writer, offset=0):
"""Loads data from the cifar10 pickle files and writes files to a TFRecord.
Args:
filename: The filename of the cifar10 pickle file.
tfrecord_writer: The TFRecord writer to use for writing.
offset: An offset into the absolute number of images previously written.
Returns:
The new offset.
"""
with tf.gfile.Open(filename, 'rb') as f:
if sys.version_info < (3,):
data = cPickle.load(f)
else:
data = cPickle.load(f, encoding='bytes')
images = data[b'data']
num_images = images.shape[0]
images = images.reshape((num_images, 3, 32, 32))
labels = data[b'labels']
with tf.Graph().as_default():
image_placeholder = tf.placeholder(dtype=tf.uint8)
encoded_image = tf.image.encode_png(image_placeholder)
with tf.Session('') as sess:
for j in range(num_images):
sys.stdout.write('\r>> Reading file [%s] image %d/%d' % (
filename, offset + j + 1, offset + num_images))
sys.stdout.flush()
image = np.squeeze(images[j]).transpose((1, 2, 0))
label = labels[j]
png_string = sess.run(encoded_image,
feed_dict={image_placeholder: image})
example = dataset_utils.image_to_tfexample(
png_string, b'png', _IMAGE_SIZE, _IMAGE_SIZE, label)
tfrecord_writer.write(example.SerializeToString())
return offset + num_images
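# Illustrative sketch (not part of the original script): the channels-first
# to channels-last transpose performed above, demonstrated on a zero array.
def _example_chw_to_hwc():
  chw = np.zeros((3, _IMAGE_SIZE, _IMAGE_SIZE), dtype=np.uint8)
  hwc = chw.transpose((1, 2, 0))
  assert hwc.shape == (_IMAGE_SIZE, _IMAGE_SIZE, 3)
  return hwc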
def _get_output_filename(dataset_dir, split_name):
"""Creates the output filename.
Args:
dataset_dir: The dataset directory where the dataset is stored.
split_name: The name of the train/test split.
Returns:
An absolute file path.
"""
return '%s/cifar10_%s.tfrecord' % (dataset_dir, split_name)
def _download_and_uncompress_dataset(dataset_dir):
"""Downloads cifar10 and uncompresses it locally.
Args:
dataset_dir: The directory where the temporary files are stored.
"""
filename = _DATA_URL.split('/')[-1]
filepath = os.path.join(dataset_dir, filename)
if not os.path.exists(filepath):
def _progress(count, block_size, total_size):
sys.stdout.write('\r>> Downloading %s %.1f%%' % (
filename, float(count * block_size) / float(total_size) * 100.0))
sys.stdout.flush()
filepath, _ = urllib.request.urlretrieve(_DATA_URL, filepath, _progress)
print()
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
tarfile.open(filepath, 'r:gz').extractall(dataset_dir)
def _clean_up_temporary_files(dataset_dir):
"""Removes temporary files used to create the dataset.
Args:
dataset_dir: The directory where the temporary files are stored.
"""
filename = _DATA_URL.split('/')[-1]
filepath = os.path.join(dataset_dir, filename)
tf.gfile.Remove(filepath)
tmp_dir = os.path.join(dataset_dir, 'cifar-10-batches-py')
tf.gfile.DeleteRecursively(tmp_dir)
def run(dataset_dir):
"""Runs the download and conversion operation.
Args:
dataset_dir: The dataset directory where the dataset is stored.
"""
if not tf.gfile.Exists(dataset_dir):
tf.gfile.MakeDirs(dataset_dir)
training_filename = _get_output_filename(dataset_dir, 'train')
testing_filename = _get_output_filename(dataset_dir, 'test')
if tf.gfile.Exists(training_filename) and tf.gfile.Exists(testing_filename):
print('Dataset files already exist. Exiting without re-creating them.')
return
dataset_utils.download_and_uncompress_tarball(_DATA_URL, dataset_dir)
# First, process the training data:
with tf.python_io.TFRecordWriter(training_filename) as tfrecord_writer:
offset = 0
for i in range(_NUM_TRAIN_FILES):
filename = os.path.join(dataset_dir,
'cifar-10-batches-py',
'data_batch_%d' % (i + 1)) # 1-indexed.
offset = _add_to_tfrecord(filename, tfrecord_writer, offset)
# Next, process the testing data:
with tf.python_io.TFRecordWriter(testing_filename) as tfrecord_writer:
filename = os.path.join(dataset_dir,
'cifar-10-batches-py',
'test_batch')
_add_to_tfrecord(filename, tfrecord_writer)
# Finally, write the labels file:
labels_to_class_names = dict(zip(range(len(_CLASS_NAMES)), _CLASS_NAMES))
dataset_utils.write_label_file(labels_to_class_names, dataset_dir)
_clean_up_temporary_files(dataset_dir)
  print('\nFinished converting the Cifar10 dataset!')
# ==== End of slim/datasets/download_and_convert_cifar10.py ====
r"""Helper functions to generate the Visual WakeWords dataset.
It filters the raw COCO annotations file into Visual WakeWords dataset
annotations. The resulting annotations and COCO images are then converted
to TF records.
See download_and_convert_visualwakewords.py for the sample usage.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import hashlib
import io
import json
import os
import contextlib2
import PIL.Image
import six
import tensorflow.compat.v1 as tf
from datasets import dataset_utils
tf.logging.set_verbosity(tf.logging.INFO)
tf.app.flags.DEFINE_string(
'coco_train_url',
'http://images.cocodataset.org/zips/train2014.zip',
'Link to zip file containing coco training data')
tf.app.flags.DEFINE_string(
'coco_validation_url',
'http://images.cocodataset.org/zips/val2014.zip',
'Link to zip file containing coco validation data')
tf.app.flags.DEFINE_string(
'coco_annotations_url',
'http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
'Link to zip file containing coco annotation data')
FLAGS = tf.app.flags.FLAGS
def download_coco_dataset(dataset_dir):
"""Download the coco dataset.
Args:
dataset_dir: Path where coco dataset should be downloaded.
"""
dataset_utils.download_and_uncompress_zipfile(FLAGS.coco_train_url,
dataset_dir)
dataset_utils.download_and_uncompress_zipfile(FLAGS.coco_validation_url,
dataset_dir)
dataset_utils.download_and_uncompress_zipfile(FLAGS.coco_annotations_url,
dataset_dir)
def create_labels_file(foreground_class_of_interest,
visualwakewords_labels_file):
"""Generate visualwakewords labels file.
Args:
    foreground_class_of_interest: the COCO category used as the foreground
      class in the visualwakewords dataset
visualwakewords_labels_file: output visualwakewords label file
"""
labels_to_class_names = {0: 'background', 1: foreground_class_of_interest}
with open(visualwakewords_labels_file, 'w') as fp:
for label in labels_to_class_names:
fp.write(str(label) + ':' + str(labels_to_class_names[label]) + '\n')
def create_visual_wakeword_annotations(annotations_file,
visualwakewords_annotations_file,
small_object_area_threshold,
foreground_class_of_interest):
"""Generate visual wakewords annotations file.
Loads COCO annotation json files to generate visualwakewords annotations file.
Args:
annotations_file: JSON file containing COCO bounding box annotations
visualwakewords_annotations_file: path to output annotations file
small_object_area_threshold: threshold on fraction of image area below which
small object bounding boxes are filtered
    foreground_class_of_interest: the COCO category used as the foreground
      class in the visual wakewords dataset
"""
# default object of interest is person
foreground_class_of_interest_id = 1
with tf.gfile.GFile(annotations_file, 'r') as fid:
groundtruth_data = json.load(fid)
images = groundtruth_data['images']
# Create category index
category_index = {}
for category in groundtruth_data['categories']:
if category['name'] == foreground_class_of_interest:
foreground_class_of_interest_id = category['id']
category_index[category['id']] = category
# Create annotations index, a map of image_id to it's annotations
tf.logging.info('Building annotations index...')
annotations_index = collections.defaultdict(
lambda: collections.defaultdict(list))
# structure is { "image_id": {"objects" : [list of the image annotations]}}
for annotation in groundtruth_data['annotations']:
annotations_index[annotation['image_id']]['objects'].append(annotation)
missing_annotation_count = len(images) - len(annotations_index)
tf.logging.info('%d images are missing annotations.',
missing_annotation_count)
# Create filtered annotations index
annotations_index_filtered = {}
for idx, image in enumerate(images):
if idx % 100 == 0:
tf.logging.info('On image %d of %d', idx, len(images))
annotations = annotations_index[image['id']]
annotations_filtered = _filter_annotations(
annotations, image, small_object_area_threshold,
foreground_class_of_interest_id)
annotations_index_filtered[image['id']] = annotations_filtered
with open(visualwakewords_annotations_file, 'w') as fp:
json.dump(
{
'images': images,
'annotations': annotations_index_filtered,
'categories': category_index
}, fp)
def _filter_annotations(annotations, image, small_object_area_threshold,
foreground_class_of_interest_id):
"""Filters COCO annotations to visual wakewords annotations.
Args:
annotations: dicts with keys: {
u'objects': [{u'id', u'image_id', u'category_id', u'segmentation',
u'area', u'bbox' : [x,y,width,height], u'iscrowd'}] } Notice
that bounding box coordinates in the official COCO dataset
are given as [x, y, width, height] tuples using absolute
coordinates where x, y represent the top-left (0-indexed)
corner.
image: dict with keys: [u'license', u'file_name', u'coco_url', u'height',
u'width', u'date_captured', u'flickr_url', u'id']
small_object_area_threshold: threshold on fraction of image area below which
small objects are filtered
    foreground_class_of_interest_id: id of the COCO category used as the
      foreground class by visual wakewords
Returns:
annotations_filtered: dict with keys: {
u'objects': [{"area", "bbox" : [x,y,width,height]}],
u'label',
}
"""
objects = []
image_area = image['height'] * image['width']
for annotation in annotations['objects']:
normalized_object_area = annotation['area'] / image_area
category_id = int(annotation['category_id'])
# Filter valid bounding boxes
    if (category_id == foreground_class_of_interest_id and
        normalized_object_area > small_object_area_threshold):
objects.append({
u'area': annotation['area'],
u'bbox': annotation['bbox'],
})
label = 1 if objects else 0
return {
'objects': objects,
'label': label,
}
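# Illustrative sketch (not part of the original module): filtering one
# hypothetical annotation. A 50x100 box covers 2% of a 500x500 image, which
# clears the 0.5% area threshold, so the resulting label is 1.
def _example_filter_annotations():
  annotations = {'objects': [{'category_id': 1, 'area': 5000.0,
                              'bbox': [10, 10, 50, 100]}]}
  image = {'height': 500, 'width': 500}
  return _filter_annotations(annotations, image,
                             small_object_area_threshold=0.005,
                             foreground_class_of_interest_id=1)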
def create_tf_record_for_visualwakewords_dataset(annotations_file, image_dir,
output_path, num_shards):
"""Loads Visual WakeWords annotations/images and converts to tf.Record format.
Args:
annotations_file: JSON file containing bounding box annotations.
image_dir: Directory containing the image files.
output_path: Path to output tf.Record file.
num_shards: number of output file shards.
"""
with contextlib2.ExitStack() as tf_record_close_stack, \
tf.gfile.GFile(annotations_file, 'r') as fid:
output_tfrecords = dataset_utils.open_sharded_output_tfrecords(
tf_record_close_stack, output_path, num_shards)
groundtruth_data = json.load(fid)
images = groundtruth_data['images']
annotations_index = groundtruth_data['annotations']
    # JSON parsing yields string keys; convert them back to integer image ids.
    annotations_index = {int(k): v for k, v in six.iteritems(annotations_index)}
for idx, image in enumerate(images):
if idx % 100 == 0:
tf.logging.info('On image %d of %d', idx, len(images))
annotations = annotations_index[image['id']]
tf_example = _create_tf_example(image, annotations, image_dir)
shard_idx = idx % num_shards
output_tfrecords[shard_idx].write(tf_example.SerializeToString())
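# Illustrative sketch (not part of the original module): reading the encoded
# image and label back out of the generated shards with tf.data. Assumes
# `file_pattern` matches shards produced by the function above.
def _example_parse_records(file_pattern):
  features = {
      'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
      'image/class/label': tf.FixedLenFeature([], tf.int64, default_value=0),
  }
  def _parse(serialized):
    return tf.parse_single_example(serialized, features)
  files = tf.data.Dataset.list_files(file_pattern)
  return tf.data.TFRecordDataset(files).map(_parse)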
def _create_tf_example(image, annotations, image_dir):
"""Converts image and annotations to a tf.Example proto.
Args:
image: dict with keys: [u'license', u'file_name', u'coco_url', u'height',
u'width', u'date_captured', u'flickr_url', u'id']
annotations: dict with objects (a list of image annotations) and a label.
{u'objects':[{"area", "bbox" : [x,y,width,height}], u'label'}. Notice
that bounding box coordinates in the COCO dataset are given as[x, y,
width, height] tuples using absolute coordinates where x, y represent
the top-left (0-indexed) corner. This function also converts to the format
that can be used by the Tensorflow Object Detection API (which is [ymin,
xmin, ymax, xmax] with coordinates normalized relative to image size).
image_dir: directory containing the image files.
Returns:
tf_example: The converted tf.Example
Raises:
ValueError: if the image pointed to by data['filename'] is not a valid JPEG
"""
image_height = image['height']
image_width = image['width']
filename = image['file_name']
image_id = image['id']
full_path = os.path.join(image_dir, filename)
with tf.gfile.GFile(full_path, 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = PIL.Image.open(encoded_jpg_io)
# The docstring promises a ValueError for non-JPEG inputs.
if image.format != 'JPEG':
  raise ValueError('Image format not JPEG')
key = hashlib.sha256(encoded_jpg).hexdigest()
xmin, xmax, ymin, ymax, area = [], [], [], [], []
for obj in annotations['objects']:
(x, y, width, height) = tuple(obj['bbox'])
xmin.append(float(x) / image_width)
xmax.append(float(x + width) / image_width)
ymin.append(float(y) / image_height)
ymax.append(float(y + height) / image_height)
area.append(obj['area'])
feature_dict = {
'image/height':
dataset_utils.int64_feature(image_height),
'image/width':
dataset_utils.int64_feature(image_width),
'image/filename':
dataset_utils.bytes_feature(filename.encode('utf8')),
'image/source_id':
dataset_utils.bytes_feature(str(image_id).encode('utf8')),
'image/key/sha256':
dataset_utils.bytes_feature(key.encode('utf8')),
'image/encoded':
dataset_utils.bytes_feature(encoded_jpg),
'image/format':
dataset_utils.bytes_feature('jpeg'.encode('utf8')),
'image/class/label':
dataset_utils.int64_feature(annotations['label']),
'image/object/bbox/xmin':
dataset_utils.float_list_feature(xmin),
'image/object/bbox/xmax':
dataset_utils.float_list_feature(xmax),
'image/object/bbox/ymin':
dataset_utils.float_list_feature(ymin),
'image/object/bbox/ymax':
dataset_utils.float_list_feature(ymax),
'image/object/area':
dataset_utils.float_list_feature(area),
}
example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
return example | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/datasets/download_and_convert_visualwakewords_lib.py | download_and_convert_visualwakewords_lib.py |
r"""Process the ImageNet Challenge bounding boxes for TensorFlow model training.
Associate the ImageNet 2012 Challenge validation data set with labels.
The raw ImageNet validation data set is expected to reside in JPEG files
located in the following directory structure.
data_dir/ILSVRC2012_val_00000001.JPEG
data_dir/ILSVRC2012_val_00000002.JPEG
...
data_dir/ILSVRC2012_val_00050000.JPEG
This script moves the files into a directory structure like such:
data_dir/n01440764/ILSVRC2012_val_00000293.JPEG
data_dir/n01440764/ILSVRC2012_val_00000543.JPEG
...
where 'n01440764' is the unique synset label associated with
these images.
This directory reorganization requires a mapping from validation image
number (i.e. suffix of the original file) to the associated label. This
is provided in the ImageNet development kit via a Matlab file.
In order to make life easier and divorce ourselves from Matlab, we instead
supply a custom text file that provides this mapping for us.
Sample usage:
./preprocess_imagenet_validation_data.py ILSVRC2012_img_val \
imagenet_2012_validation_synset_labels.txt
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
from six.moves import xrange # pylint: disable=redefined-builtin
if __name__ == '__main__':
if len(sys.argv) < 3:
print('Invalid usage\n'
'usage: preprocess_imagenet_validation_data.py '
'<validation data dir> <validation labels file>')
sys.exit(-1)
data_dir = sys.argv[1]
validation_labels_file = sys.argv[2]
# Read in the 50000 synsets associated with the validation data set.
with open(validation_labels_file) as f:
  labels = [l.strip() for l in f.readlines()]
unique_labels = set(labels)
# Make all sub-directories in the validation data dir.
for label in unique_labels:
labeled_data_dir = os.path.join(data_dir, label)
os.makedirs(labeled_data_dir)
# Move all of the images to the appropriate sub-directories.
for i in xrange(len(labels)):
basename = 'ILSVRC2012_val_000%.5d.JPEG' % (i + 1)
original_filename = os.path.join(data_dir, basename)
if not os.path.exists(original_filename):
print('Failed to find: ', original_filename)
sys.exit(-1)
new_filename = os.path.join(data_dir, labels[i], basename)
os.rename(original_filename, new_filename) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/datasets/preprocess_imagenet_validation_data.py | preprocess_imagenet_validation_data.py |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import tensorflow.compat.v1 as tf
import tf_slim as slim
__all__ = ['create_clones',
'deploy',
'optimize_clones',
'DeployedModel',
'DeploymentConfig',
'Clone',
]
# Namedtuple used to represent a clone during deployment.
Clone = collections.namedtuple('Clone',
['outputs', # Whatever model_fn() returned.
'scope', # The scope used to create it.
'device', # The device used to create it.
])
# Namedtuple used to represent a DeployedModel, returned by deploy().
DeployedModel = collections.namedtuple('DeployedModel',
['train_op', # The `train_op`
'summary_op', # The `summary_op`
'total_loss', # The loss `Tensor`
'clones', # A list of `Clones` tuples.
])
# Default parameters for DeploymentConfig
_deployment_params = {'num_clones': 1,
'clone_on_cpu': False,
'replica_id': 0,
'num_replicas': 1,
'num_ps_tasks': 0,
'worker_job_name': 'worker',
'ps_job_name': 'ps'}
def create_clones(config, model_fn, args=None, kwargs=None):
"""Creates multiple clones according to config using a `model_fn`.
The returned values of `model_fn(*args, **kwargs)` are collected along with
the scope and device used to create it in a namedtuple
`Clone(outputs, scope, device)`
Note: it is assumed that any loss created by `model_fn` is collected in
the tf.GraphKeys.LOSSES collection.
To recover the losses, summaries or update_ops created by the clone use:
```python
losses = tf.get_collection(tf.GraphKeys.LOSSES, clone.scope)
summaries = tf.get_collection(tf.GraphKeys.SUMMARIES, clone.scope)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, clone.scope)
```
The deployment options are specified by the config object and support
deploying one or several clones on different GPUs and one or several replicas
of such clones.
The argument `model_fn` is called `config.num_clones` times to create the
model clones as `model_fn(*args, **kwargs)`.
If `config` specifies deployment on multiple replicas then the default
tensorflow device is set appropriately for each call to `model_fn` and for the
slim variable creation functions: model and global variables will be created
on the `ps` device, the clone operations will be on the `worker` device.
Args:
config: A DeploymentConfig object.
model_fn: A callable. Called as `model_fn(*args, **kwargs)`
args: Optional list of arguments to pass to `model_fn`.
kwargs: Optional dict of keyword arguments to pass to `model_fn`.
Returns:
A list of namedtuples `Clone`.
"""
clones = []
args = args or []
kwargs = kwargs or {}
with slim.arg_scope([slim.model_variable, slim.variable],
device=config.variables_device()):
# Create clones.
for i in range(0, config.num_clones):
with tf.name_scope(config.clone_scope(i)) as clone_scope:
clone_device = config.clone_device(i)
with tf.device(clone_device):
with tf.variable_scope(tf.get_variable_scope(),
reuse=True if i > 0 else None):
outputs = model_fn(*args, **kwargs)
clones.append(Clone(outputs, clone_scope, clone_device))
return clones
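# Minimal usage sketch; `my_network`, `load_batch` and the loss wiring are
# hypothetical stand-ins for user code:
#
#   config = DeploymentConfig(num_clones=2, clone_on_cpu=True)
#   with tf.device(config.inputs_device()):
#     images, labels = load_batch()
#   def model_fn(images, labels):
#     predictions = my_network(images)
#     tf.losses.log_loss(labels, predictions)  # lands in tf.GraphKeys.LOSSES
#   clones = create_clones(config, model_fn, [images, labels])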
def _gather_clone_loss(clone, num_clones, regularization_losses):
"""Gather the loss for a single clone.
Args:
clone: A Clone namedtuple.
num_clones: The number of clones being deployed.
regularization_losses: Possibly empty list of regularization_losses
to add to the clone losses.
Returns:
A tensor for the total loss for the clone. Can be None.
"""
# The return value.
sum_loss = None
# Individual components of the loss that will need summaries.
clone_loss = None
regularization_loss = None
# Compute and aggregate losses on the clone device.
with tf.device(clone.device):
all_losses = []
clone_losses = tf.get_collection(tf.GraphKeys.LOSSES, clone.scope)
if clone_losses:
clone_loss = tf.add_n(clone_losses, name='clone_loss')
if num_clones > 1:
clone_loss = tf.div(clone_loss, 1.0 * num_clones,
name='scaled_clone_loss')
all_losses.append(clone_loss)
if regularization_losses:
regularization_loss = tf.add_n(regularization_losses,
name='regularization_loss')
all_losses.append(regularization_loss)
if all_losses:
sum_loss = tf.add_n(all_losses)
# Add the summaries out of the clone device block.
if clone_loss is not None:
tf.summary.scalar('/'.join(filter(None,
['Losses', clone.scope, 'clone_loss'])),
clone_loss)
if regularization_loss is not None:
tf.summary.scalar('Losses/regularization_loss', regularization_loss)
return sum_loss
def _optimize_clone(optimizer, clone, num_clones, regularization_losses,
**kwargs):
"""Compute losses and gradients for a single clone.
Args:
optimizer: A tf.Optimizer object.
clone: A Clone namedtuple.
num_clones: The number of clones being deployed.
regularization_losses: Possibly empty list of regularization_losses
to add to the clone losses.
**kwargs: Dict of keyword arguments to pass to compute_gradients().
Returns:
A tuple (clone_loss, clone_grads_and_vars).
- clone_loss: A tensor for the total loss for the clone. Can be None.
- clone_grads_and_vars: List of (gradient, variable) for the clone.
Can be empty.
"""
sum_loss = _gather_clone_loss(clone, num_clones, regularization_losses)
clone_grad = None
if sum_loss is not None:
with tf.device(clone.device):
clone_grad = optimizer.compute_gradients(sum_loss, **kwargs)
return sum_loss, clone_grad
def optimize_clones(clones, optimizer,
regularization_losses=None,
**kwargs):
"""Compute clone losses and gradients for the given list of `Clones`.
Note: The regularization_losses are added to the first clone losses.
Args:
clones: List of `Clones` created by `create_clones()`.
optimizer: An `Optimizer` object.
regularization_losses: Optional list of regularization losses. If None it
will gather them from tf.GraphKeys.REGULARIZATION_LOSSES. Pass `[]` to
exclude them.
**kwargs: Optional keyword arguments to pass to `compute_gradients`.
Returns:
A tuple (total_loss, grads_and_vars).
- total_loss: A Tensor containing the average of the clone losses including
the regularization loss.
- grads_and_vars: A List of tuples (gradient, variable) containing the sum
of the gradients for each variable.
"""
grads_and_vars = []
clones_losses = []
num_clones = len(clones)
if regularization_losses is None:
regularization_losses = tf.get_collection(
tf.GraphKeys.REGULARIZATION_LOSSES)
for clone in clones:
with tf.name_scope(clone.scope):
clone_loss, clone_grad = _optimize_clone(
optimizer, clone, num_clones, regularization_losses, **kwargs)
if clone_loss is not None:
clones_losses.append(clone_loss)
grads_and_vars.append(clone_grad)
# Only use regularization_losses for the first clone
regularization_losses = None
# Compute the total_loss summing all the clones_losses.
total_loss = tf.add_n(clones_losses, name='total_loss')
# Sum the gradients across clones.
grads_and_vars = _sum_clones_gradients(grads_and_vars)
return total_loss, grads_and_vars
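# Sketch of the manual training path (the optimizer choice is illustrative,
# and `clones` comes from a prior create_clones() call):
#
#   optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
#   total_loss, grads_and_vars = optimize_clones(clones, optimizer)
#   grad_updates = optimizer.apply_gradients(
#       grads_and_vars, global_step=slim.get_or_create_global_step())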
def deploy(config,
model_fn,
args=None,
kwargs=None,
optimizer=None,
summarize_gradients=False):
"""Deploys a Slim-constructed model across multiple clones.
The deployment options are specified by the config object and support
deploying one or several clones on different GPUs and one or several replicas
of such clones.
The argument `model_fn` is called `config.num_clones` times to create the
model clones as `model_fn(*args, **kwargs)`.
The optional argument `optimizer` is an `Optimizer` object. If not `None`,
the deployed model is configured for training with that optimizer.
If `config` specifies deployment on multiple replicas then the default
tensorflow device is set appropriately for each call to `model_fn` and for the
slim variable creation functions: model and global variables will be created
on the `ps` device, the clone operations will be on the `worker` device.
Args:
config: A `DeploymentConfig` object.
model_fn: A callable. Called as `model_fn(*args, **kwargs)`
args: Optional list of arguments to pass to `model_fn`.
kwargs: Optional dict of keyword arguments to pass to `model_fn`.
optimizer: Optional `Optimizer` object. If passed the model is deployed
for training with that optimizer.
summarize_gradients: Whether or not add summaries to the gradients.
Returns:
A `DeployedModel` namedtuple.
"""
# Gather initial summaries.
summaries = set(tf.get_collection(tf.GraphKeys.SUMMARIES))
# Create Clones.
clones = create_clones(config, model_fn, args, kwargs)
first_clone = clones[0]
# Gather update_ops from the first clone. These contain, for example,
# the updates for the batch_norm variables created by model_fn.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, first_clone.scope)
train_op = None
total_loss = None
with tf.device(config.optimizer_device()):
if optimizer:
# Place the global step on the device storing the variables.
with tf.device(config.variables_device()):
global_step = slim.get_or_create_global_step()
# Compute the gradients for the clones.
total_loss, clones_gradients = optimize_clones(clones, optimizer)
if clones_gradients:
if summarize_gradients:
# Add summaries to the gradients.
summaries |= set(_add_gradients_summaries(clones_gradients))
# Create gradient updates.
grad_updates = optimizer.apply_gradients(clones_gradients,
global_step=global_step)
update_ops.append(grad_updates)
update_op = tf.group(*update_ops)
with tf.control_dependencies([update_op]):
train_op = tf.identity(total_loss, name='train_op')
else:
clones_losses = []
regularization_losses = tf.get_collection(
tf.GraphKeys.REGULARIZATION_LOSSES)
for clone in clones:
with tf.name_scope(clone.scope):
clone_loss = _gather_clone_loss(clone, len(clones),
regularization_losses)
if clone_loss is not None:
clones_losses.append(clone_loss)
# Only use regularization_losses for the first clone
regularization_losses = None
if clones_losses:
total_loss = tf.add_n(clones_losses, name='total_loss')
# Add the summaries from the first clone. These contain the summaries
# created by model_fn and either optimize_clones() or _gather_clone_loss().
summaries |= set(tf.get_collection(tf.GraphKeys.SUMMARIES,
first_clone.scope))
if total_loss is not None:
# Add total_loss to summary.
summaries.add(tf.summary.scalar('total_loss', total_loss))
if summaries:
# Merge all summaries together.
summary_op = tf.summary.merge(list(summaries), name='summary_op')
else:
summary_op = None
return DeployedModel(train_op, summary_op, total_loss, clones)
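# End-to-end sketch built on deploy(); `my_log_dir` is a placeholder and the
# training-loop details are omitted:
#
#   model = deploy(config, model_fn, [images, labels], optimizer=optimizer)
#   slim.learning.train(model.train_op, my_log_dir,
#                       summary_op=model.summary_op)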
def _sum_clones_gradients(clone_grads):
"""Calculate the sum gradient for each shared variable across all clones.
This function assumes that the clone_grads has been scaled appropriately by
1 / num_clones.
Args:
clone_grads: A List of List of tuples (gradient, variable), one list per
`Clone`.
Returns:
List of tuples of (gradient, variable) where the gradient has been summed
across all clones.
"""
sum_grads = []
for grad_and_vars in zip(*clone_grads):
# Note that each grad_and_vars looks like the following:
# ((grad_var0_clone0, var0), ... (grad_varN_cloneN, varN))
grads = []
var = grad_and_vars[0][1]
for g, v in grad_and_vars:
assert v == var
if g is not None:
grads.append(g)
if grads:
if len(grads) > 1:
sum_grad = tf.add_n(grads, name=var.op.name + '/sum_grads')
else:
sum_grad = grads[0]
sum_grads.append((sum_grad, var))
return sum_grads
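# Worked example with toy shapes: given two clones whose gradient lists are
# [(g0a, v0), (g1a, v1)] and [(g0b, v0), (g1b, v1)], the result is
# [(g0a + g0b, v0), (g1a + g1b, v1)]. A per-clone gradient of None is simply
# skipped, so a variable with no gradients at all is dropped from the output.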
def _add_gradients_summaries(grads_and_vars):
"""Add histogram summaries to gradients.
Note: The summaries are also added to the SUMMARIES collection.
Args:
grads_and_vars: A list of gradient to variable pairs (tuples).
Returns:
The _list_ of the added summaries for grads_and_vars.
"""
summaries = []
for grad, var in grads_and_vars:
if grad is not None:
if isinstance(grad, tf.IndexedSlices):
grad_values = grad.values
else:
grad_values = grad
summaries.append(tf.summary.histogram(var.op.name + ':gradient',
grad_values))
summaries.append(tf.summary.histogram(var.op.name + ':gradient_norm',
tf.global_norm([grad_values])))
else:
tf.logging.info('Var %s has no gradient', var.op.name)
return summaries
class DeploymentConfig(object):
"""Configuration for deploying a model with `deploy()`.
You can pass an instance of this class to `deploy()` to specify exactly
how the model you are building should be deployed. If you do not pass one,
an instance built from the default deployment_hparams will be used.
"""
def __init__(self,
num_clones=1,
clone_on_cpu=False,
replica_id=0,
num_replicas=1,
num_ps_tasks=0,
worker_job_name='worker',
ps_job_name='ps'):
"""Create a DeploymentConfig.
The config describes how to deploy a model across multiple clones and
replicas. The model will be replicated `num_clones` times in each replica.
If `clone_on_cpu` is True, each clone will be placed on CPU.
If `num_replicas` is 1, the model is deployed via a single process. In that
case `worker_device`, `num_ps_tasks`, and `ps_device` are ignored.
If `num_replicas` is greater than 1, then `worker_device` and `ps_device`
must specify TensorFlow devices for the `worker` and `ps` jobs and
`num_ps_tasks` must be positive.
Args:
num_clones: Number of model clones to deploy in each replica.
clone_on_cpu: If True clones would be placed on CPU.
replica_id: Integer. Index of the replica for which the model is
deployed. Usually 0 for the chief replica.
num_replicas: Number of replicas to use.
num_ps_tasks: Number of tasks for the `ps` job. 0 to not use replicas.
worker_job_name: A name for the worker job.
ps_job_name: A name for the parameter server job.
Raises:
ValueError: If the arguments are invalid.
"""
if num_replicas > 1:
if num_ps_tasks < 1:
raise ValueError('When using replicas num_ps_tasks must be positive')
if num_replicas > 1 or num_ps_tasks > 0:
if not worker_job_name:
raise ValueError('Must specify worker_job_name when using replicas')
if not ps_job_name:
raise ValueError('Must specify ps_job_name when using parameter server')
if replica_id >= num_replicas:
raise ValueError('replica_id must be less than num_replicas')
self._num_clones = num_clones
self._clone_on_cpu = clone_on_cpu
self._replica_id = replica_id
self._num_replicas = num_replicas
self._num_ps_tasks = num_ps_tasks
self._ps_device = '/job:' + ps_job_name if num_ps_tasks > 0 else ''
self._worker_device = '/job:' + worker_job_name if num_ps_tasks > 0 else ''
@property
def num_clones(self):
return self._num_clones
@property
def clone_on_cpu(self):
return self._clone_on_cpu
@property
def replica_id(self):
return self._replica_id
@property
def num_replicas(self):
return self._num_replicas
@property
def num_ps_tasks(self):
return self._num_ps_tasks
@property
def ps_device(self):
return self._ps_device
@property
def worker_device(self):
return self._worker_device
def caching_device(self):
"""Returns the device to use for caching variables.
Variables are cached on the worker CPU when using replicas.
Returns:
A device string or None if the variables do not need to be cached.
"""
if self._num_ps_tasks > 0:
return lambda op: op.device
else:
return None
def clone_device(self, clone_index):
"""Device used to create the clone and all the ops inside the clone.
Args:
clone_index: Int, representing the clone_index.
Returns:
A value suitable for `tf.device()`.
Raises:
ValueError: if `clone_index` is greater than or equal to the number of clones.
"""
if clone_index >= self._num_clones:
raise ValueError('clone_index must be less than num_clones')
device = ''
if self._num_ps_tasks > 0:
device += self._worker_device
if self._clone_on_cpu:
device += '/device:CPU:0'
else:
device += '/device:GPU:%d' % clone_index
return device
def clone_scope(self, clone_index):
"""Name scope to create the clone.
Args:
clone_index: Int, representing the clone_index.
Returns:
A name_scope suitable for `tf.name_scope()`.
Raises:
ValueError: if `clone_index` is greater than or equal to the number of clones.
"""
if clone_index >= self._num_clones:
raise ValueError('clone_index must be less than num_clones')
scope = ''
if self._num_clones > 1:
scope = 'clone_%d' % clone_index
return scope
def optimizer_device(self):
"""Device to use with the optimizer.
Returns:
A value suitable for `tf.device()`.
"""
if self._num_ps_tasks > 0 or self._num_clones > 0:
return self._worker_device + '/device:CPU:0'
else:
return ''
def inputs_device(self):
"""Device to use to build the inputs.
Returns:
A value suitable for `tf.device()`.
"""
device = ''
if self._num_ps_tasks > 0:
device += self._worker_device
device += '/device:CPU:0'
return device
def variables_device(self):
"""Returns the device to use for variables created inside the clone.
Returns:
A value suitable for `tf.device()`.
"""
device = ''
if self._num_ps_tasks > 0:
device += self._ps_device
device += '/device:CPU:0'
class _PSDeviceChooser(object):
"""Slim device chooser for variables when using PS."""
def __init__(self, device, tasks):
self._device = device
self._tasks = tasks
self._task = 0
def choose(self, op):
if op.device:
return op.device
node_def = op if isinstance(op, tf.NodeDef) else op.node_def
if node_def.op.startswith('Variable'):
t = self._task
self._task = (self._task + 1) % self._tasks
d = '%s/task:%d' % (self._device, t)
return d
else:
return op.device
if not self._num_ps_tasks:
return device
else:
chooser = _PSDeviceChooser(device, self._num_ps_tasks)
return chooser.choose | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/slim/deployment/model_deploy.py | model_deploy.py |
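# Illustrative device strings for a config with two GPU clones and one
# parameter-server task (exact strings depend on the job names):
#
#   config = DeploymentConfig(num_clones=2, num_ps_tasks=1)
#   config.clone_device(0)     # '/job:worker/device:GPU:0'
#   config.clone_device(1)     # '/job:worker/device:GPU:1'
#   config.inputs_device()     # '/job:worker/device:CPU:0'
#   config.variables_device()  # a callable assigning Variable ops
#                              # round-robin across the ps tasks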
r"""Creates and runs `Estimator` for object detection model on TPUs.
This uses the TPUEstimator API to define and run a model in TRAIN/EVAL modes.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl import flags
import tensorflow.compat.v1 as tf
from object_detection import model_lib
tf.flags.DEFINE_bool('use_tpu', True, 'Use TPUs rather than plain CPUs')
# Cloud TPU Cluster Resolvers
flags.DEFINE_string(
'gcp_project',
default=None,
help='Project name for the Cloud TPU-enabled project. If not specified, we '
'will attempt to automatically detect the GCE project from metadata.')
flags.DEFINE_string(
'tpu_zone',
default=None,
help='GCE zone where the Cloud TPU is located. If not specified, we '
'will attempt to automatically detect the zone from metadata.')
flags.DEFINE_string(
'tpu_name',
default=None,
help='Name of the Cloud TPU for Cluster Resolvers.')
flags.DEFINE_integer('num_shards', 8, 'Number of shards (TPU cores).')
flags.DEFINE_integer('iterations_per_loop', 100,
'Number of iterations per TPU training loop.')
# For mode=train_and_eval, evaluation occurs after training is finished.
# Note: independently of steps_per_checkpoint, estimator will save the most
# recent checkpoint every 10 minutes by default for train_and_eval
flags.DEFINE_string('mode', 'train',
'Mode to run: train, eval')
flags.DEFINE_integer('train_batch_size', None, 'Batch size for training. If '
'this is not provided, batch size is read from training '
'config.')
flags.DEFINE_integer('num_train_steps', None, 'Number of train steps.')
flags.DEFINE_boolean('eval_training_data', False,
'If training data should be evaluated for this job.')
flags.DEFINE_integer('sample_1_of_n_eval_examples', 1, 'Will sample one of '
'every n eval input examples, where n is provided.')
flags.DEFINE_integer('sample_1_of_n_eval_on_train_examples', 5, 'Will sample '
'one of every n train input examples for evaluation, '
'where n is provided. This is only used if '
'`eval_training_data` is True.')
flags.DEFINE_string(
'model_dir', None, 'Path to output model directory '
'where event and checkpoint files will be written.')
flags.DEFINE_string('pipeline_config_path', None, 'Path to pipeline config '
'file.')
flags.DEFINE_integer(
'max_eval_retries', 0, 'If running continuous eval, the maximum number of '
'retries upon encountering tf.errors.InvalidArgumentError. If negative, '
'will always retry the evaluation.'
)
FLAGS = tf.flags.FLAGS
def main(unused_argv):
flags.mark_flag_as_required('model_dir')
flags.mark_flag_as_required('pipeline_config_path')
tpu_cluster_resolver = (
tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=[FLAGS.tpu_name], zone=FLAGS.tpu_zone, project=FLAGS.gcp_project))
tpu_grpc_url = tpu_cluster_resolver.get_master()
config = tf.estimator.tpu.RunConfig(
master=tpu_grpc_url,
evaluation_master=tpu_grpc_url,
model_dir=FLAGS.model_dir,
tpu_config=tf.estimator.tpu.TPUConfig(
iterations_per_loop=FLAGS.iterations_per_loop,
num_shards=FLAGS.num_shards))
kwargs = {}
if FLAGS.train_batch_size:
kwargs['batch_size'] = FLAGS.train_batch_size
train_and_eval_dict = model_lib.create_estimator_and_inputs(
run_config=config,
pipeline_config_path=FLAGS.pipeline_config_path,
train_steps=FLAGS.num_train_steps,
sample_1_of_n_eval_examples=FLAGS.sample_1_of_n_eval_examples,
sample_1_of_n_eval_on_train_examples=(
FLAGS.sample_1_of_n_eval_on_train_examples),
use_tpu_estimator=True,
use_tpu=FLAGS.use_tpu,
num_shards=FLAGS.num_shards,
save_final_config=FLAGS.mode == 'train',
**kwargs)
estimator = train_and_eval_dict['estimator']
train_input_fn = train_and_eval_dict['train_input_fn']
eval_input_fns = train_and_eval_dict['eval_input_fns']
eval_on_train_input_fn = train_and_eval_dict['eval_on_train_input_fn']
train_steps = train_and_eval_dict['train_steps']
if FLAGS.mode == 'train':
estimator.train(input_fn=train_input_fn, max_steps=train_steps)
# Continuously evaluating.
if FLAGS.mode == 'eval':
if FLAGS.eval_training_data:
name = 'training_data'
input_fn = eval_on_train_input_fn
else:
name = 'validation_data'
# Currently only a single eval input is allowed.
input_fn = eval_input_fns[0]
model_lib.continuous_eval(estimator, FLAGS.model_dir, input_fn, train_steps,
name, FLAGS.max_eval_retries)
if __name__ == '__main__':
tf.app.run() | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/model_tpu_main.py | model_tpu_main.py |
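# Example invocation (all flag values below are placeholders):
#
#   python model_tpu_main.py \
#     --tpu_name=my-tpu --tpu_zone=us-central1-b --gcp_project=my-project \
#     --model_dir=gs://my-bucket/model \
#     --pipeline_config_path=gs://my-bucket/pipeline.config \
#     --mode=train --num_train_steps=200000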
"""Common utility functions for evaluation."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import os
import re
import time
import numpy as np
from six.moves import range
import tensorflow.compat.v1 as tf
import tf_slim as slim
from object_detection.core import box_list
from object_detection.core import box_list_ops
from object_detection.core import keypoint_ops
from object_detection.core import standard_fields as fields
from object_detection.metrics import coco_evaluation
from object_detection.metrics import lvis_evaluation
from object_detection.protos import eval_pb2
from object_detection.utils import label_map_util
from object_detection.utils import object_detection_evaluation
from object_detection.utils import ops
from object_detection.utils import shape_utils
from object_detection.utils import visualization_utils as vis_utils
EVAL_KEYPOINT_METRIC = 'coco_keypoint_metrics'
# A dictionary mapping metric names to classes that implement the metric. The
# classes in the dictionary must implement the
# utils.object_detection_evaluation.DetectionEvaluator interface.
EVAL_METRICS_CLASS_DICT = {
'coco_detection_metrics':
coco_evaluation.CocoDetectionEvaluator,
'coco_keypoint_metrics':
coco_evaluation.CocoKeypointEvaluator,
'coco_mask_metrics':
coco_evaluation.CocoMaskEvaluator,
'coco_panoptic_metrics':
coco_evaluation.CocoPanopticSegmentationEvaluator,
'lvis_mask_metrics':
lvis_evaluation.LVISMaskEvaluator,
'oid_challenge_detection_metrics':
object_detection_evaluation.OpenImagesDetectionChallengeEvaluator,
'oid_challenge_segmentation_metrics':
object_detection_evaluation
.OpenImagesInstanceSegmentationChallengeEvaluator,
'pascal_voc_detection_metrics':
object_detection_evaluation.PascalDetectionEvaluator,
'weighted_pascal_voc_detection_metrics':
object_detection_evaluation.WeightedPascalDetectionEvaluator,
'precision_at_recall_detection_metrics':
object_detection_evaluation.PrecisionAtRecallDetectionEvaluator,
'pascal_voc_instance_segmentation_metrics':
object_detection_evaluation.PascalInstanceSegmentationEvaluator,
'weighted_pascal_voc_instance_segmentation_metrics':
object_detection_evaluation.WeightedPascalInstanceSegmentationEvaluator,
'oid_V2_detection_metrics':
object_detection_evaluation.OpenImagesDetectionEvaluator,
}
EVAL_DEFAULT_METRIC = 'coco_detection_metrics'
def write_metrics(metrics, global_step, summary_dir):
"""Write metrics to a summary directory.
Args:
metrics: A dictionary containing metric names and values.
global_step: Global step at which the metrics are computed.
summary_dir: Directory to write tensorflow summaries to.
"""
tf.logging.info('Writing metrics to tf summary.')
summary_writer = tf.summary.FileWriterCache.get(summary_dir)
for key in sorted(metrics):
summary = tf.Summary(value=[
tf.Summary.Value(tag=key, simple_value=metrics[key]),
])
summary_writer.add_summary(summary, global_step)
tf.logging.info('%s: %f', key, metrics[key])
tf.logging.info('Metrics written to tf summary.')
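# Example with hypothetical values; this writes two scalar summaries under
# summary_dir and logs each one:
#
#   write_metrics({'DetectionBoxes_Precision/mAP': 0.31,
#                  'Losses/total_loss': 1.7},
#                 global_step=50000, summary_dir='/tmp/eval')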
# TODO(rathodv): Add tests.
def visualize_detection_results(result_dict,
tag,
global_step,
categories,
summary_dir='',
export_dir='',
agnostic_mode=False,
show_groundtruth=False,
groundtruth_box_visualization_color='black',
min_score_thresh=.5,
max_num_predictions=20,
skip_scores=False,
skip_labels=False,
keep_image_id_for_visualization_export=False):
"""Visualizes detection results and writes visualizations to image summaries.
This function visualizes an image with its detected bounding boxes and writes
to image summaries which can be viewed on tensorboard. It optionally also
writes images to a directory. In the case of missing entry in the label map,
unknown class name in the visualization is shown as "N/A".
Args:
result_dict: a dictionary holding groundtruth and detection
data corresponding to each image being evaluated. The following keys
are required:
'original_image': a numpy array representing the image with shape
[1, height, width, 3] or [1, height, width, 1]
'detection_boxes': a numpy array of shape [N, 4]
'detection_scores': a numpy array of shape [N]
'detection_classes': a numpy array of shape [N]
The following keys are optional:
'groundtruth_boxes': a numpy array of shape [N, 4]
'groundtruth_keypoints': a numpy array of shape [N, num_keypoints, 2]
Detections are assumed to be provided in decreasing order of score; for
display purposes, scores are assumed to be probabilities between 0 and 1.
tag: tensorboard tag (string) to associate with image.
global_step: global step at which the visualization are generated.
categories: a list of dictionaries representing all possible categories.
Each dict in this list has the following keys:
'id': (required) an integer id uniquely identifying this category
'name': (required) string representing category name
e.g., 'cat', 'dog', 'pizza'
'supercategory': (optional) string representing the supercategory
e.g., 'animal', 'vehicle', 'food', etc
summary_dir: the output directory to which the image summaries are written.
export_dir: the output directory to which images are written. If this is
empty (default), then images are not exported.
agnostic_mode: boolean (default: False) controlling whether to evaluate in
class-agnostic mode or not.
show_groundtruth: boolean (default: False) controlling whether to show
groundtruth boxes in addition to detected boxes
groundtruth_box_visualization_color: box color for visualizing groundtruth
boxes
min_score_thresh: minimum score threshold for a box to be visualized
max_num_predictions: maximum number of detections to visualize
skip_scores: whether to skip score when drawing a single detection
skip_labels: whether to skip label when drawing a single detection
keep_image_id_for_visualization_export: whether to keep image identifier in
filename when exported to export_dir
Raises:
ValueError: if result_dict does not contain the expected keys (i.e.,
'original_image', 'detection_boxes', 'detection_scores',
'detection_classes')
"""
detection_fields = fields.DetectionResultFields
input_fields = fields.InputDataFields
if not set([
input_fields.original_image,
detection_fields.detection_boxes,
detection_fields.detection_scores,
detection_fields.detection_classes,
]).issubset(set(result_dict.keys())):
raise ValueError('result_dict does not contain all expected keys.')
if show_groundtruth and input_fields.groundtruth_boxes not in result_dict:
raise ValueError('If show_groundtruth is enabled, result_dict must contain '
'groundtruth_boxes.')
tf.logging.info('Creating detection visualizations.')
category_index = label_map_util.create_category_index(categories)
image = np.squeeze(result_dict[input_fields.original_image], axis=0)
if image.shape[2] == 1: # If one channel image, repeat in RGB.
image = np.tile(image, [1, 1, 3])
detection_boxes = result_dict[detection_fields.detection_boxes]
detection_scores = result_dict[detection_fields.detection_scores]
detection_classes = np.int32((result_dict[
detection_fields.detection_classes]))
detection_keypoints = result_dict.get(detection_fields.detection_keypoints)
detection_masks = result_dict.get(detection_fields.detection_masks)
detection_boundaries = result_dict.get(detection_fields.detection_boundaries)
# Plot groundtruth underneath detections
if show_groundtruth:
groundtruth_boxes = result_dict[input_fields.groundtruth_boxes]
groundtruth_keypoints = result_dict.get(input_fields.groundtruth_keypoints)
vis_utils.visualize_boxes_and_labels_on_image_array(
image=image,
boxes=groundtruth_boxes,
classes=None,
scores=None,
category_index=category_index,
keypoints=groundtruth_keypoints,
use_normalized_coordinates=False,
max_boxes_to_draw=None,
groundtruth_box_visualization_color=groundtruth_box_visualization_color)
vis_utils.visualize_boxes_and_labels_on_image_array(
image,
detection_boxes,
detection_classes,
detection_scores,
category_index,
instance_masks=detection_masks,
instance_boundaries=detection_boundaries,
keypoints=detection_keypoints,
use_normalized_coordinates=False,
max_boxes_to_draw=max_num_predictions,
min_score_thresh=min_score_thresh,
agnostic_mode=agnostic_mode,
skip_scores=skip_scores,
skip_labels=skip_labels)
if export_dir:
if (keep_image_id_for_visualization_export and
    result_dict[fields.InputDataFields().key]):
export_path = os.path.join(export_dir, 'export-{}-{}.png'.format(
tag, result_dict[fields.InputDataFields().key]))
else:
export_path = os.path.join(export_dir, 'export-{}.png'.format(tag))
vis_utils.save_image_array_as_png(image, export_path)
summary = tf.Summary(value=[
tf.Summary.Value(
tag=tag,
image=tf.Summary.Image(
encoded_image_string=vis_utils.encode_image_array_as_png_str(
image)))
])
summary_writer = tf.summary.FileWriterCache.get(summary_dir)
summary_writer.add_summary(summary, global_step)
tf.logging.info('Detection visualizations written to summary with tag %s.',
tag)
def _run_checkpoint_once(tensor_dict,
evaluators=None,
batch_processor=None,
checkpoint_dirs=None,
variables_to_restore=None,
restore_fn=None,
num_batches=1,
master='',
save_graph=False,
save_graph_dir='',
losses_dict=None,
eval_export_path=None,
process_metrics_fn=None):
"""Evaluates metrics defined in evaluators and returns summaries.
This function loads the latest checkpoint in checkpoint_dirs and evaluates
all metrics defined in evaluators. The metrics are processed in batch by the
batch_processor.
Args:
tensor_dict: a dictionary holding tensors representing a batch of detections
and corresponding groundtruth annotations.
evaluators: a list of object of type DetectionEvaluator to be used for
evaluation. Note that the metric names produced by different evaluators
must be unique.
batch_processor: a function taking four arguments:
  1. tensor_dict: the same tensor_dict that is passed in as the first
    argument to this function.
  2. sess: a tensorflow session
  3. batch_index: an integer representing the index of the batch amongst
    all batches
  4. counters: a dictionary holding 'success' and 'skipped' counts
  By default, batch_processor is None, which defaults to running:
    return sess.run(tensor_dict)
To skip an image, it suffices to return an empty dictionary in place of
result_dict.
checkpoint_dirs: list of directories to load into an EnsembleModel. If it
  has only one directory, EnsembleModel will not be used -- a
  DetectionModel will be instantiated directly. Not used if restore_fn is
  set.
variables_to_restore: None, or a dictionary mapping variable names found in
a checkpoint to model variables. The dictionary would normally be
generated by creating a tf.train.ExponentialMovingAverage object and
calling its variables_to_restore() method. Not used if restore_fn is set.
restore_fn: None, or a function that takes a tf.Session object and correctly
restores all necessary variables from the correct checkpoint file. If
None, attempts to restore from the first directory in checkpoint_dirs.
num_batches: the number of batches to use for evaluation.
master: the location of the Tensorflow session.
save_graph: whether or not the Tensorflow graph is stored as a pbtxt file.
save_graph_dir: where to store the Tensorflow graph on disk. If save_graph
is True this must be non-empty.
losses_dict: optional dictionary of scalar detection losses.
eval_export_path: Path for saving a json file that contains the detection
results in json format.
process_metrics_fn: a callback called with evaluation results after each
evaluation is done. It could be used e.g. to back up checkpoints with
best evaluation scores, or to call an external system to update evaluation
results in order to drive best hyper-parameter search. Parameters are:
int checkpoint_number, Dict[str, ObjectDetectionEvalMetrics] metrics,
str checkpoint_file path.
Returns:
global_step: the count of global steps.
all_evaluator_metrics: A dictionary containing metric names and values.
Raises:
ValueError: if restore_fn is None and checkpoint_dirs doesn't have at least
one element.
ValueError: if save_graph is True and save_graph_dir is not defined.
"""
if save_graph and not save_graph_dir:
raise ValueError('`save_graph_dir` must be defined.')
sess = tf.Session(master, graph=tf.get_default_graph())
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
sess.run(tf.tables_initializer())
checkpoint_file = None
if restore_fn:
restore_fn(sess)
else:
if not checkpoint_dirs:
raise ValueError('`checkpoint_dirs` must have at least one entry.')
checkpoint_file = tf.train.latest_checkpoint(checkpoint_dirs[0])
saver = tf.train.Saver(variables_to_restore)
saver.restore(sess, checkpoint_file)
if save_graph:
tf.train.write_graph(sess.graph_def, save_graph_dir, 'eval.pbtxt')
counters = {'skipped': 0, 'success': 0}
aggregate_result_losses_dict = collections.defaultdict(list)
with slim.queues.QueueRunners(sess):
try:
for batch in range(int(num_batches)):
if (batch + 1) % 100 == 0:
tf.logging.info('Running eval ops batch %d/%d', batch + 1,
num_batches)
if not batch_processor:
try:
if not losses_dict:
losses_dict = {}
result_dict, result_losses_dict = sess.run([tensor_dict,
losses_dict])
counters['success'] += 1
except tf.errors.InvalidArgumentError:
tf.logging.info('Skipping image')
counters['skipped'] += 1
result_dict = {}
else:
result_dict, result_losses_dict = batch_processor(
tensor_dict, sess, batch, counters, losses_dict=losses_dict)
if not result_dict:
continue
for key, value in iter(result_losses_dict.items()):
aggregate_result_losses_dict[key].append(value)
for evaluator in evaluators:
# TODO(b/65130867): Use image_id tensor once we fix the input data
# decoders to return correct image_id.
# TODO(akuznetsa): result_dict contains batches of images, while
# add_single_ground_truth_image_info expects a single image. Fix
if (isinstance(result_dict, dict) and
fields.InputDataFields.key in result_dict and
result_dict[fields.InputDataFields.key]):
image_id = result_dict[fields.InputDataFields.key]
else:
image_id = batch
evaluator.add_single_ground_truth_image_info(
image_id=image_id, groundtruth_dict=result_dict)
evaluator.add_single_detected_image_info(
image_id=image_id, detections_dict=result_dict)
tf.logging.info('Running eval batches done.')
except tf.errors.OutOfRangeError:
tf.logging.info('Done evaluating -- epoch limit reached')
finally:
# When done, ask the threads to stop.
tf.logging.info('# success: %d', counters['success'])
tf.logging.info('# skipped: %d', counters['skipped'])
all_evaluator_metrics = {}
if eval_export_path:
for evaluator in evaluators:
if isinstance(evaluator, (coco_evaluation.CocoDetectionEvaluator,
                          coco_evaluation.CocoMaskEvaluator)):
tf.logging.info('Started dumping to json file.')
evaluator.dump_detections_to_json_file(
json_output_path=eval_export_path)
tf.logging.info('Finished dumping to json file.')
for evaluator in evaluators:
metrics = evaluator.evaluate()
evaluator.clear()
if any(key in all_evaluator_metrics for key in metrics):
raise ValueError('Metric names between evaluators must not collide.')
all_evaluator_metrics.update(metrics)
global_step = tf.train.global_step(sess, tf.train.get_global_step())
for key, value in iter(aggregate_result_losses_dict.items()):
all_evaluator_metrics['Losses/' + key] = np.mean(value)
if process_metrics_fn and checkpoint_file:
m = re.search(r'model\.ckpt-(\d+)$', checkpoint_file)
if not m:
tf.logging.error('Failed to parse checkpoint number from: %s',
checkpoint_file)
else:
checkpoint_number = int(m.group(1))
process_metrics_fn(checkpoint_number, all_evaluator_metrics,
checkpoint_file)
sess.close()
return (global_step, all_evaluator_metrics)
# TODO(rathodv): Add tests.
def repeated_checkpoint_run(tensor_dict,
summary_dir,
evaluators,
batch_processor=None,
checkpoint_dirs=None,
variables_to_restore=None,
restore_fn=None,
num_batches=1,
eval_interval_secs=120,
max_number_of_evaluations=None,
max_evaluation_global_step=None,
master='',
save_graph=False,
save_graph_dir='',
losses_dict=None,
eval_export_path=None,
process_metrics_fn=None):
"""Periodically evaluates desired tensors using checkpoint_dirs or restore_fn.
This function repeatedly loads a checkpoint and evaluates a desired
set of tensors (provided by tensor_dict) and hands the resulting numpy
arrays to a function result_processor which can be used to further
process/save/visualize the results.
Args:
tensor_dict: a dictionary holding tensors representing a batch of detections
and corresponding groundtruth annotations.
summary_dir: a directory to write metrics summaries.
evaluators: a list of object of type DetectionEvaluator to be used for
evaluation. Note that the metric names produced by different evaluators
must be unique.
batch_processor: a function taking four arguments:
  1. tensor_dict: the same tensor_dict that is passed in as the first
    argument to this function.
  2. sess: a tensorflow session
  3. batch_index: an integer representing the index of the batch amongst
    all batches
  4. counters: a dictionary holding 'success' and 'skipped' counts
  By default, batch_processor is None, which defaults to running:
    return sess.run(tensor_dict)
checkpoint_dirs: list of directories to load into a DetectionModel or an
EnsembleModel if restore_fn isn't set. Also used to determine when to run
next evaluation. Must have at least one element.
variables_to_restore: None, or a dictionary mapping variable names found in
a checkpoint to model variables. The dictionary would normally be
generated by creating a tf.train.ExponentialMovingAverage object and
calling its variables_to_restore() method. Not used if restore_fn is set.
restore_fn: a function that takes a tf.Session object and correctly restores
all necessary variables from the correct checkpoint file.
num_batches: the number of batches to use for evaluation.
eval_interval_secs: the number of seconds between each evaluation run.
max_number_of_evaluations: the max number of iterations of the evaluation.
If the value is left as None the evaluation continues indefinitely.
max_evaluation_global_step: global step when evaluation stops.
master: the location of the Tensorflow session.
save_graph: whether or not the Tensorflow graph is saved as a pbtxt file.
save_graph_dir: where to save on disk the Tensorflow graph. If store_graph
is True this must be non-empty.
losses_dict: optional dictionary of scalar detection losses.
eval_export_path: Path for saving a json file that contains the detection
results in json format.
process_metrics_fn: a callback called with evaluation results after each
evaluation is done. It could be used e.g. to back up checkpoints with
best evaluation scores, or to call an external system to update evaluation
results in order to drive best hyper-parameter search. Parameters are:
int checkpoint_number, Dict[str, ObjectDetectionEvalMetrics] metrics,
str checkpoint_file path.
Returns:
metrics: A dictionary containing metric names and values in the latest
evaluation.
Raises:
ValueError: if max_number_of_evaluations is neither None nor a positive
  number.
ValueError: if checkpoint_dirs doesn't have at least one element.
"""
if max_number_of_evaluations and max_number_of_evaluations <= 0:
raise ValueError(
'`max_number_of_evaluations` must be either None or a positive number.')
if max_evaluation_global_step and max_evaluation_global_step <= 0:
raise ValueError(
'`max_evaluation_global_step` must be either None or positive.')
if not checkpoint_dirs:
raise ValueError('`checkpoint_dirs` must have at least one entry.')
last_evaluated_model_path = None
number_of_evaluations = 0
while True:
start = time.time()
tf.logging.info('Starting evaluation at ' + time.strftime(
'%Y-%m-%d-%H:%M:%S', time.gmtime()))
model_path = tf.train.latest_checkpoint(checkpoint_dirs[0])
if not model_path:
tf.logging.info('No model found in %s. Will try again in %d seconds',
checkpoint_dirs[0], eval_interval_secs)
elif model_path == last_evaluated_model_path:
tf.logging.info('Found already evaluated checkpoint. Will try again in '
'%d seconds', eval_interval_secs)
else:
last_evaluated_model_path = model_path
global_step, metrics = _run_checkpoint_once(
tensor_dict,
evaluators,
batch_processor,
checkpoint_dirs,
variables_to_restore,
restore_fn,
num_batches,
master,
save_graph,
save_graph_dir,
losses_dict=losses_dict,
eval_export_path=eval_export_path,
process_metrics_fn=process_metrics_fn)
write_metrics(metrics, global_step, summary_dir)
if (max_evaluation_global_step and
global_step >= max_evaluation_global_step):
tf.logging.info('Finished evaluation!')
break
number_of_evaluations += 1
if (max_number_of_evaluations and
number_of_evaluations >= max_number_of_evaluations):
tf.logging.info('Finished evaluation!')
break
time_to_next_eval = start + eval_interval_secs - time.time()
if time_to_next_eval > 0:
time.sleep(time_to_next_eval)
return metrics
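# Sketch of a continuous-eval driver; building tensor_dict and restore_fn is
# omitted and the paths are placeholders:
#
#   metrics = repeated_checkpoint_run(
#       tensor_dict, summary_dir='/tmp/eval',
#       evaluators=[coco_evaluation.CocoDetectionEvaluator(categories)],
#       checkpoint_dirs=['/tmp/train'], restore_fn=my_restore_fn,
#       num_batches=5000, eval_interval_secs=300,
#       max_number_of_evaluations=1)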
def _scale_box_to_absolute(args):
boxes, image_shape = args
return box_list_ops.to_absolute_coordinates(
box_list.BoxList(boxes), image_shape[0], image_shape[1]).get()
def _resize_detection_masks(arg_tuple):
"""Resizes detection masks.
Args:
arg_tuple: A (detection_boxes, detection_masks, image_shape, pad_shape)
tuple where
detection_boxes is a tf.float32 tensor of size [num_masks, 4] containing
the box corners. Row i contains [ymin, xmin, ymax, xmax] of the box
corresponding to mask i. Note that the box corners are in
normalized coordinates.
detection_masks is a tensor of size
[num_masks, mask_height, mask_width].
image_shape is a tensor of shape [2]
pad_shape is a tensor of shape [2] --- this is assumed to be greater
than or equal to image_shape along both dimensions and represents a
shape to-be-padded-to.
Returns:
  A [num_masks, pad_height, pad_width] uint8 tensor with the box masks
  reframed to full (padded) image masks.
"""
detection_boxes, detection_masks, image_shape, pad_shape = arg_tuple
detection_masks_reframed = ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image_shape[0], image_shape[1])
pad_instance_dim = tf.zeros([3, 1], dtype=tf.int32)
pad_hw_dim = tf.concat([tf.zeros([1], dtype=tf.int32),
pad_shape - image_shape], axis=0)
pad_hw_dim = tf.expand_dims(pad_hw_dim, 1)
paddings = tf.concat([pad_instance_dim, pad_hw_dim], axis=1)
detection_masks_reframed = tf.pad(detection_masks_reframed, paddings)
# If the masks are currently float, binarize them. Otherwise keep them as
# integers, since they have already been thresholded.
if detection_masks_reframed.dtype == tf.float32:
detection_masks_reframed = tf.greater(detection_masks_reframed, 0.5)
return tf.cast(detection_masks_reframed, tf.uint8)
def resize_detection_masks(detection_boxes, detection_masks,
original_image_spatial_shapes):
"""Resizes per-box detection masks to be relative to the entire image.
Note that this function only works when the spatial size of all images in
the batch is the same. If not, this function should be used with batch_size=1.
Args:
detection_boxes: A [batch_size, num_instances, 4] float tensor containing
bounding boxes.
detection_masks: A [batch_size, num_instances, height, width] float tensor
containing binary instance masks per box.
original_image_spatial_shapes: a [batch_size, 3] shaped int tensor
holding the spatial dimensions of each image in the batch.
Returns:
masks: Masks resized to the spatial extents given by
(original_image_spatial_shapes[0, 0], original_image_spatial_shapes[0, 1])
"""
# Modify original image spatial shapes to be the max along each dimension;
# in the evaluator these shapes come from the original_image_spatial_shape
# field of the eval dict.
max_spatial_shape = tf.reduce_max(
original_image_spatial_shapes, axis=0, keep_dims=True)
tiled_max_spatial_shape = tf.tile(
max_spatial_shape,
multiples=[tf.shape(original_image_spatial_shapes)[0], 1])
return shape_utils.static_or_dynamic_map_fn(
_resize_detection_masks,
elems=[detection_boxes,
detection_masks,
original_image_spatial_shapes,
tiled_max_spatial_shape],
dtype=tf.uint8)
def _resize_groundtruth_masks(args):
"""Resizes groundtruth masks to the original image size."""
mask, true_image_shape, original_image_shape, pad_shape = args
true_height = true_image_shape[0]
true_width = true_image_shape[1]
mask = mask[:, :true_height, :true_width]
mask = tf.expand_dims(mask, 3)
mask = tf.image.resize_images(
mask,
original_image_shape,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
align_corners=True)
paddings = tf.concat(
[tf.zeros([3, 1], dtype=tf.int32),
tf.expand_dims(
tf.concat([tf.zeros([1], dtype=tf.int32),
pad_shape-original_image_shape], axis=0),
1)], axis=1)
mask = tf.pad(tf.squeeze(mask, 3), paddings)
return tf.cast(mask, tf.uint8)
def _resize_surface_coordinate_masks(args):
detection_boxes, surface_coords, image_shape = args
surface_coords_v, surface_coords_u = tf.unstack(surface_coords, axis=-1)
surface_coords_v_reframed = ops.reframe_box_masks_to_image_masks(
surface_coords_v, detection_boxes, image_shape[0], image_shape[1])
surface_coords_u_reframed = ops.reframe_box_masks_to_image_masks(
surface_coords_u, detection_boxes, image_shape[0], image_shape[1])
return tf.stack([surface_coords_v_reframed, surface_coords_u_reframed],
axis=-1)
def _scale_keypoint_to_absolute(args):
keypoints, image_shape = args
return keypoint_ops.scale(keypoints, image_shape[0], image_shape[1])
def result_dict_for_single_example(image,
key,
detections,
groundtruth=None,
class_agnostic=False,
scale_to_absolute=False):
"""Merges all detection and groundtruth information for a single example.
Note that evaluation tools require classes that are 1-indexed, and so this
function performs the offset. If `class_agnostic` is True, all output classes
have label 1.
Args:
image: A single 4D uint8 image tensor of shape [1, H, W, C].
key: A single string tensor identifying the image.
detections: A dictionary of detections, returned from
DetectionModel.postprocess().
groundtruth: (Optional) Dictionary of groundtruth items, with fields:
'groundtruth_boxes': [num_boxes, 4] float32 tensor of boxes, in
normalized coordinates.
'groundtruth_classes': [num_boxes] int64 tensor of 1-indexed classes.
'groundtruth_area': [num_boxes] float32 tensor of bbox area. (Optional)
'groundtruth_is_crowd': [num_boxes] int64 tensor. (Optional)
'groundtruth_difficult': [num_boxes] int64 tensor. (Optional)
'groundtruth_group_of': [num_boxes] int64 tensor. (Optional)
'groundtruth_instance_masks': 3D int64 tensor of instance masks
(Optional).
'groundtruth_keypoints': [num_boxes, num_keypoints, 2] float32 tensor with
keypoints (Optional).
class_agnostic: Boolean indicating whether the detections are class-agnostic
(i.e. binary). Default False.
scale_to_absolute: Boolean indicating whether boxes and keypoints should be
scaled to absolute coordinates. Note that for IoU based evaluations, it
does not matter whether boxes are expressed in absolute or relative
coordinates. Default False.
Returns:
A dictionary with:
'original_image': A [1, H, W, C] uint8 image tensor.
'key': A string tensor with image identifier.
'detection_boxes': [max_detections, 4] float32 tensor of boxes, in
normalized or absolute coordinates, depending on the value of
`scale_to_absolute`.
'detection_scores': [max_detections] float32 tensor of scores.
'detection_classes': [max_detections] int64 tensor of 1-indexed classes.
'detection_masks': [max_detections, H, W] float32 tensor of binarized
masks, reframed to full image masks.
'groundtruth_boxes': [num_boxes, 4] float32 tensor of boxes, in
normalized or absolute coordinates, depending on the value of
`scale_to_absolute`. (Optional)
'groundtruth_classes': [num_boxes] int64 tensor of 1-indexed classes.
(Optional)
'groundtruth_area': [num_boxes] float32 tensor of bbox area. (Optional)
'groundtruth_is_crowd': [num_boxes] int64 tensor. (Optional)
'groundtruth_difficult': [num_boxes] int64 tensor. (Optional)
'groundtruth_group_of': [num_boxes] int64 tensor. (Optional)
'groundtruth_instance_masks': 3D int64 tensor of instance masks
(Optional).
'groundtruth_keypoints': [num_boxes, num_keypoints, 2] float32 tensor with
keypoints (Optional).
"""
if groundtruth:
max_gt_boxes = tf.shape(
groundtruth[fields.InputDataFields.groundtruth_boxes])[0]
for gt_key in groundtruth:
# expand groundtruth dict along the batch dimension.
groundtruth[gt_key] = tf.expand_dims(groundtruth[gt_key], 0)
for detection_key in detections:
detections[detection_key] = tf.expand_dims(
detections[detection_key][0], axis=0)
batched_output_dict = result_dict_for_batched_example(
image,
tf.expand_dims(key, 0),
detections,
groundtruth,
class_agnostic,
scale_to_absolute,
max_gt_boxes=max_gt_boxes)
exclude_keys = [
fields.InputDataFields.original_image,
fields.DetectionResultFields.num_detections,
fields.InputDataFields.num_groundtruth_boxes
]
output_dict = {
fields.InputDataFields.original_image:
batched_output_dict[fields.InputDataFields.original_image]
}
for key in batched_output_dict:
# remove the batch dimension.
if key not in exclude_keys:
output_dict[key] = tf.squeeze(batched_output_dict[key], 0)
return output_dict
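# Sketch: assembling the eval dict for one example; `detections` would come
# from DetectionModel.postprocess() and the other tensors from the input
# pipeline (all names here are placeholders):
#
#   eval_dict = result_dict_for_single_example(
#       image, key, detections, groundtruth, scale_to_absolute=True)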
def result_dict_for_batched_example(images,
keys,
detections,
groundtruth=None,
class_agnostic=False,
scale_to_absolute=False,
original_image_spatial_shapes=None,
true_image_shapes=None,
max_gt_boxes=None,
label_id_offset=1):
"""Merges all detection and groundtruth information for a single example.
Note that evaluation tools require classes that are 1-indexed, and so this
function performs the offset. If `class_agnostic` is True, all output classes
have label 1.
The groundtruth coordinates of boxes/keypoints in 'groundtruth' dictionary are
normalized relative to the (potentially padded) input image, while the
coordinates in 'detection' dictionary are normalized relative to the true
image shape.
Args:
images: A single 4D uint8 image tensor of shape [batch_size, H, W, C].
keys: A [batch_size] string/int tensor with image identifier.
detections: A dictionary of detections, returned from
DetectionModel.postprocess().
groundtruth: (Optional) Dictionary of groundtruth items, with fields:
'groundtruth_boxes': [batch_size, max_number_of_boxes, 4] float32 tensor
of boxes, in normalized coordinates.
'groundtruth_classes': [batch_size, max_number_of_boxes] int64 tensor of
1-indexed classes.
'groundtruth_area': [batch_size, max_number_of_boxes] float32 tensor of
bbox area. (Optional)
'groundtruth_is_crowd':[batch_size, max_number_of_boxes] int64
tensor. (Optional)
'groundtruth_difficult': [batch_size, max_number_of_boxes] int64
tensor. (Optional)
'groundtruth_group_of': [batch_size, max_number_of_boxes] int64
tensor. (Optional)
'groundtruth_instance_masks': 4D int64 tensor of instance
masks (Optional).
'groundtruth_keypoints': [batch_size, max_number_of_boxes, num_keypoints,
2] float32 tensor with keypoints (Optional).
'groundtruth_keypoint_visibilities': [batch_size, max_number_of_boxes,
num_keypoints] bool tensor with keypoint visibilities (Optional).
'groundtruth_labeled_classes': [batch_size, num_classes] int64
tensor of 1-indexed classes. (Optional)
'groundtruth_dp_num_points': [batch_size, max_number_of_boxes] int32
tensor. (Optional)
'groundtruth_dp_part_ids': [batch_size, max_number_of_boxes,
max_sampled_points] int32 tensor. (Optional)
'groundtruth_dp_surface_coords_list': [batch_size, max_number_of_boxes,
max_sampled_points, 4] float32 tensor. (Optional)
class_agnostic: Boolean indicating whether the detections are class-agnostic
(i.e. binary). Default False.
scale_to_absolute: Boolean indicating whether boxes and keypoints should be
scaled to absolute coordinates. Note that for IoU based evaluations, it
does not matter whether boxes are expressed in absolute or relative
coordinates. Default False.
original_image_spatial_shapes: A 2D int32 tensor of shape [batch_size, 2]
used to resize the image. When set to None, the image size is retained.
true_image_shapes: A 2D int32 tensor of shape [batch_size, 3]
containing the size of the unpadded original_image.
max_gt_boxes: [batch_size] tensor representing the maximum number of
groundtruth boxes to pad.
label_id_offset: offset for class ids.
Returns:
A dictionary with:
'original_image': A [batch_size, H, W, C] uint8 image tensor.
'original_image_spatial_shape': A [batch_size, 2] tensor containing the
original image sizes.
'true_image_shape': A [batch_size, 3] tensor containing the size of
the unpadded original_image.
'key': A [batch_size] string tensor with image identifier.
'detection_boxes': [batch_size, max_detections, 4] float32 tensor of boxes,
in normalized or absolute coordinates, depending on the value of
`scale_to_absolute`.
'detection_scores': [batch_size, max_detections] float32 tensor of scores.
'detection_classes': [batch_size, max_detections] int64 tensor of 1-indexed
classes.
'detection_masks': [batch_size, max_detections, H, W] uint8 tensor of
instance masks, reframed to full image masks. Note that these may be
binarized (e.g. {0, 1}), or may contain 1-indexed part labels. (Optional)
'detection_keypoints': [batch_size, max_detections, num_keypoints, 2]
float32 tensor containing keypoint coordinates. (Optional)
'detection_keypoint_scores': [batch_size, max_detections, num_keypoints]
float32 tensor containing keypoint scores. (Optional)
'detection_surface_coords': [batch_size, max_detection, H, W, 2] float32
tensor with normalized surface coordinates (e.g. DensePose UV
coordinates). (Optional)
'num_detections': [batch_size] int64 tensor containing number of valid
detections.
'groundtruth_boxes': [batch_size, num_boxes, 4] float32 tensor of boxes, in
normalized or absolute coordinates, depending on the value of
`scale_to_absolute`. (Optional)
'groundtruth_classes': [batch_size, num_boxes] int64 tensor of 1-indexed
classes. (Optional)
'groundtruth_area': [batch_size, num_boxes] float32 tensor of bbox
area. (Optional)
'groundtruth_is_crowd': [batch_size, num_boxes] int64 tensor. (Optional)
'groundtruth_difficult': [batch_size, num_boxes] int64 tensor. (Optional)
'groundtruth_group_of': [batch_size, num_boxes] int64 tensor. (Optional)
'groundtruth_instance_masks': 4D int64 tensor of instance masks
(Optional).
'groundtruth_keypoints': [batch_size, num_boxes, num_keypoints, 2] float32
tensor with keypoints (Optional).
'groundtruth_keypoint_visibilities': [batch_size, num_boxes, num_keypoints]
bool tensor with keypoint visibilities (Optional).
'groundtruth_labeled_classes': [batch_size, num_classes] int64 tensor
of 1-indexed classes. (Optional)
'num_groundtruth_boxes': [batch_size] tensor containing the maximum number
of groundtruth boxes per image.
  Raises:
    ValueError: if original_image_spatial_shapes is not a 2D int32 tensor of
      shape [batch_size, 2].
    ValueError: if true_image_shapes is not a 2D int32 tensor of shape
      [batch_size, 3].
"""
input_data_fields = fields.InputDataFields
if original_image_spatial_shapes is None:
original_image_spatial_shapes = tf.tile(
tf.expand_dims(tf.shape(images)[1:3], axis=0),
multiples=[tf.shape(images)[0], 1])
else:
    if (len(original_image_spatial_shapes.shape) != 2 or
        original_image_spatial_shapes.shape[1] != 2):
      raise ValueError(
          '`original_image_spatial_shapes` should be a 2D tensor of shape '
          '[batch_size, 2].')
if true_image_shapes is None:
true_image_shapes = tf.tile(
tf.expand_dims(tf.shape(images)[1:4], axis=0),
multiples=[tf.shape(images)[0], 1])
else:
    if (len(true_image_shapes.shape) != 2 or
        true_image_shapes.shape[1] != 3):
      raise ValueError('`true_image_shapes` should be a 2D tensor of '
                       'shape [batch_size, 3].')
output_dict = {
input_data_fields.original_image:
images,
input_data_fields.key:
keys,
input_data_fields.original_image_spatial_shape: (
original_image_spatial_shapes),
input_data_fields.true_image_shape:
true_image_shapes
}
detection_fields = fields.DetectionResultFields
detection_boxes = detections[detection_fields.detection_boxes]
detection_scores = detections[detection_fields.detection_scores]
num_detections = tf.cast(detections[detection_fields.num_detections],
dtype=tf.int32)
if class_agnostic:
detection_classes = tf.ones_like(detection_scores, dtype=tf.int64)
else:
detection_classes = (
tf.to_int64(detections[detection_fields.detection_classes]) +
label_id_offset)
if scale_to_absolute:
output_dict[detection_fields.detection_boxes] = (
shape_utils.static_or_dynamic_map_fn(
_scale_box_to_absolute,
elems=[detection_boxes, original_image_spatial_shapes],
dtype=tf.float32))
else:
output_dict[detection_fields.detection_boxes] = detection_boxes
output_dict[detection_fields.detection_classes] = detection_classes
output_dict[detection_fields.detection_scores] = detection_scores
output_dict[detection_fields.num_detections] = num_detections
if detection_fields.detection_masks in detections:
detection_masks = detections[detection_fields.detection_masks]
output_dict[detection_fields.detection_masks] = resize_detection_masks(
detection_boxes, detection_masks, original_image_spatial_shapes)
if detection_fields.detection_surface_coords in detections:
detection_surface_coords = detections[
detection_fields.detection_surface_coords]
output_dict[detection_fields.detection_surface_coords] = (
shape_utils.static_or_dynamic_map_fn(
_resize_surface_coordinate_masks,
elems=[detection_boxes, detection_surface_coords,
original_image_spatial_shapes],
dtype=tf.float32))
if detection_fields.detection_keypoints in detections:
detection_keypoints = detections[detection_fields.detection_keypoints]
output_dict[detection_fields.detection_keypoints] = detection_keypoints
if scale_to_absolute:
output_dict[detection_fields.detection_keypoints] = (
shape_utils.static_or_dynamic_map_fn(
_scale_keypoint_to_absolute,
elems=[detection_keypoints, original_image_spatial_shapes],
dtype=tf.float32))
if detection_fields.detection_keypoint_scores in detections:
output_dict[detection_fields.detection_keypoint_scores] = detections[
detection_fields.detection_keypoint_scores]
else:
output_dict[detection_fields.detection_keypoint_scores] = tf.ones_like(
detections[detection_fields.detection_keypoints][:, :, :, 0])
if groundtruth:
if max_gt_boxes is None:
if input_data_fields.num_groundtruth_boxes in groundtruth:
max_gt_boxes = groundtruth[input_data_fields.num_groundtruth_boxes]
else:
raise ValueError(
'max_gt_boxes must be provided when processing batched examples.')
if input_data_fields.groundtruth_instance_masks in groundtruth:
masks = groundtruth[input_data_fields.groundtruth_instance_masks]
      max_spatial_shape = tf.reduce_max(
          original_image_spatial_shapes, axis=0, keepdims=True)
tiled_max_spatial_shape = tf.tile(
max_spatial_shape,
multiples=[tf.shape(original_image_spatial_shapes)[0], 1])
groundtruth[input_data_fields.groundtruth_instance_masks] = (
shape_utils.static_or_dynamic_map_fn(
_resize_groundtruth_masks,
elems=[masks, true_image_shapes,
original_image_spatial_shapes,
tiled_max_spatial_shape],
dtype=tf.uint8))
output_dict.update(groundtruth)
image_shape = tf.cast(tf.shape(images), tf.float32)
image_height, image_width = image_shape[1], image_shape[2]
def _scale_box_to_normalized_true_image(args):
"""Scale the box coordinates to be relative to the true image shape."""
boxes, true_image_shape = args
true_image_shape = tf.cast(true_image_shape, tf.float32)
true_height, true_width = true_image_shape[0], true_image_shape[1]
normalized_window = tf.stack([0.0, 0.0, true_height / image_height,
true_width / image_width])
return box_list_ops.change_coordinate_frame(
box_list.BoxList(boxes), normalized_window).get()
groundtruth_boxes = groundtruth[input_data_fields.groundtruth_boxes]
groundtruth_boxes = shape_utils.static_or_dynamic_map_fn(
_scale_box_to_normalized_true_image,
elems=[groundtruth_boxes, true_image_shapes], dtype=tf.float32)
output_dict[input_data_fields.groundtruth_boxes] = groundtruth_boxes
if input_data_fields.groundtruth_keypoints in groundtruth:
      # If groundtruth_keypoints is in the groundtruth dictionary, update the
      # coordinates to conform with the true image shape.
def _scale_keypoints_to_normalized_true_image(args):
"""Scale the box coordinates to be relative to the true image shape."""
keypoints, true_image_shape = args
true_image_shape = tf.cast(true_image_shape, tf.float32)
true_height, true_width = true_image_shape[0], true_image_shape[1]
normalized_window = tf.stack(
[0.0, 0.0, true_height / image_height, true_width / image_width])
return keypoint_ops.change_coordinate_frame(keypoints,
normalized_window)
groundtruth_keypoints = groundtruth[
input_data_fields.groundtruth_keypoints]
groundtruth_keypoints = shape_utils.static_or_dynamic_map_fn(
_scale_keypoints_to_normalized_true_image,
elems=[groundtruth_keypoints, true_image_shapes],
dtype=tf.float32)
output_dict[
input_data_fields.groundtruth_keypoints] = groundtruth_keypoints
if scale_to_absolute:
groundtruth_boxes = output_dict[input_data_fields.groundtruth_boxes]
output_dict[input_data_fields.groundtruth_boxes] = (
shape_utils.static_or_dynamic_map_fn(
_scale_box_to_absolute,
elems=[groundtruth_boxes, original_image_spatial_shapes],
dtype=tf.float32))
if input_data_fields.groundtruth_keypoints in groundtruth:
groundtruth_keypoints = output_dict[
input_data_fields.groundtruth_keypoints]
output_dict[input_data_fields.groundtruth_keypoints] = (
shape_utils.static_or_dynamic_map_fn(
_scale_keypoint_to_absolute,
elems=[groundtruth_keypoints, original_image_spatial_shapes],
dtype=tf.float32))
# For class-agnostic models, groundtruth classes all become 1.
if class_agnostic:
groundtruth_classes = groundtruth[input_data_fields.groundtruth_classes]
groundtruth_classes = tf.ones_like(groundtruth_classes, dtype=tf.int64)
output_dict[input_data_fields.groundtruth_classes] = groundtruth_classes
output_dict[input_data_fields.num_groundtruth_boxes] = max_gt_boxes
return output_dict
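# Example usage (a sketch assuming batched tensors from an input pipeline; note
# that `num_groundtruth_boxes` must be present in `groundtruth` when
# `max_gt_boxes` is not passed explicitly):
#
#   eval_dict = result_dict_for_batched_example(
#       images, keys, detections, groundtruth,
#       original_image_spatial_shapes=original_image_spatial_shapes,
#       true_image_shapes=true_image_shapes,
#       max_gt_boxes=groundtruth[fields.InputDataFields.num_groundtruth_boxes])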
def get_evaluators(eval_config, categories, evaluator_options=None):
"""Returns the evaluator class according to eval_config, valid for categories.
Args:
eval_config: An `eval_pb2.EvalConfig`.
categories: A list of dicts, each of which has the following keys -
'id': (required) an integer id uniquely identifying this category.
'name': (required) string representing category name e.g., 'cat', 'dog'.
'keypoints': (optional) dict mapping this category's keypoints to unique
ids.
evaluator_options: A dictionary of metric names (see
EVAL_METRICS_CLASS_DICT) to `DetectionEvaluator` initialization
keyword arguments. For example:
      evaluator_options = {
'coco_detection_metrics': {'include_metrics_per_category': True}
}
Returns:
    A list of instances of DetectionEvaluator.
Raises:
ValueError: if metric is not in the metric class dictionary.
"""
evaluator_options = evaluator_options or {}
eval_metric_fn_keys = eval_config.metrics_set
if not eval_metric_fn_keys:
eval_metric_fn_keys = [EVAL_DEFAULT_METRIC]
evaluators_list = []
for eval_metric_fn_key in eval_metric_fn_keys:
if eval_metric_fn_key not in EVAL_METRICS_CLASS_DICT:
raise ValueError('Metric not found: {}'.format(eval_metric_fn_key))
kwargs_dict = (evaluator_options[eval_metric_fn_key] if eval_metric_fn_key
in evaluator_options else {})
evaluators_list.append(EVAL_METRICS_CLASS_DICT[eval_metric_fn_key](
categories,
**kwargs_dict))
if isinstance(eval_config, eval_pb2.EvalConfig):
parameterized_metrics = eval_config.parameterized_metric
for parameterized_metric in parameterized_metrics:
assert parameterized_metric.HasField('parameterized_metric')
if parameterized_metric.WhichOneof(
'parameterized_metric') == EVAL_KEYPOINT_METRIC:
keypoint_metrics = parameterized_metric.coco_keypoint_metrics
# Create category to keypoints mapping dict.
category_keypoints = {}
class_label = keypoint_metrics.class_label
category = None
for cat in categories:
if cat['name'] == class_label:
category = cat
break
if not category:
continue
keypoints_for_this_class = category['keypoints']
category_keypoints = [{
'id': keypoints_for_this_class[kp_name], 'name': kp_name
} for kp_name in keypoints_for_this_class]
# Create keypoint evaluator for this category.
evaluators_list.append(EVAL_METRICS_CLASS_DICT[EVAL_KEYPOINT_METRIC](
category['id'], category_keypoints, class_label,
keypoint_metrics.keypoint_label_to_sigmas))
return evaluators_list
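# Example usage (a sketch; the category list and options are illustrative):
#
#   evaluators = get_evaluators(
#       eval_config,
#       categories=[{'id': 1, 'name': 'cat'}, {'id': 2, 'name': 'dog'}],
#       evaluator_options={
#           'coco_detection_metrics': {'include_metrics_per_category': True}
#       })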
def get_eval_metric_ops_for_evaluators(eval_config,
categories,
eval_dict):
"""Returns eval metrics ops to use with `tf.estimator.EstimatorSpec`.
Args:
eval_config: An `eval_pb2.EvalConfig`.
categories: A list of dicts, each of which has the following keys -
'id': (required) an integer id uniquely identifying this category.
'name': (required) string representing category name e.g., 'cat', 'dog'.
eval_dict: An evaluation dictionary, returned from
result_dict_for_single_example().
Returns:
A dictionary of metric names to tuple of value_op and update_op that can be
used as eval metric ops in tf.EstimatorSpec.
"""
eval_metric_ops = {}
evaluator_options = evaluator_options_from_eval_config(eval_config)
evaluators_list = get_evaluators(eval_config, categories, evaluator_options)
for evaluator in evaluators_list:
eval_metric_ops.update(evaluator.get_estimator_eval_metric_ops(
eval_dict))
return eval_metric_ops
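# Example usage (a sketch of wiring these ops into an Estimator model_fn;
# `total_loss` is assumed to be defined elsewhere in the model_fn):
#
#   eval_metric_ops = get_eval_metric_ops_for_evaluators(
#       eval_config, categories, eval_dict)
#   return tf.estimator.EstimatorSpec(
#       mode=tf.estimator.ModeKeys.EVAL,
#       loss=total_loss,
#       eval_metric_ops=eval_metric_ops)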
def evaluator_options_from_eval_config(eval_config):
"""Produces a dictionary of evaluation options for each eval metric.
Args:
eval_config: An `eval_pb2.EvalConfig`.
Returns:
evaluator_options: A dictionary of metric names (see
EVAL_METRICS_CLASS_DICT) to `DetectionEvaluator` initialization
keyword arguments. For example:
      evaluator_options = {
'coco_detection_metrics': {'include_metrics_per_category': True}
}
"""
eval_metric_fn_keys = eval_config.metrics_set
evaluator_options = {}
for eval_metric_fn_key in eval_metric_fn_keys:
if eval_metric_fn_key in (
'coco_detection_metrics', 'coco_mask_metrics', 'lvis_mask_metrics'):
evaluator_options[eval_metric_fn_key] = {
'include_metrics_per_category': (
eval_config.include_metrics_per_category)
}
if (hasattr(eval_config, 'all_metrics_per_category') and
eval_config.all_metrics_per_category):
evaluator_options[eval_metric_fn_key].update({
'all_metrics_per_category': eval_config.all_metrics_per_category
})
# For coco detection eval, if the eval_config proto contains the
# "skip_predictions_for_unlabeled_class" field, include this field in
# evaluator_options.
if eval_metric_fn_key == 'coco_detection_metrics' and hasattr(
eval_config, 'skip_predictions_for_unlabeled_class'):
evaluator_options[eval_metric_fn_key].update({
'skip_predictions_for_unlabeled_class':
(eval_config.skip_predictions_for_unlabeled_class)
})
for super_category in eval_config.super_categories:
if 'super_categories' not in evaluator_options[eval_metric_fn_key]:
evaluator_options[eval_metric_fn_key]['super_categories'] = {}
key = super_category
value = eval_config.super_categories[key].split(',')
evaluator_options[eval_metric_fn_key]['super_categories'][key] = value
if eval_metric_fn_key == 'lvis_mask_metrics' and hasattr(
eval_config, 'export_path'):
evaluator_options[eval_metric_fn_key].update({
'export_path': eval_config.export_path
})
elif eval_metric_fn_key == 'precision_at_recall_detection_metrics':
evaluator_options[eval_metric_fn_key] = {
'recall_lower_bound': (eval_config.recall_lower_bound),
'recall_upper_bound': (eval_config.recall_upper_bound)
}
return evaluator_options
def has_densepose(eval_dict):
return (fields.DetectionResultFields.detection_masks in eval_dict and
          fields.DetectionResultFields.detection_surface_coords in eval_dict)
# ==== End of file: object_detection/eval_util.py ====
"""Functions to export object detection inference graph."""
import os
import tempfile
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tensorflow.core.protobuf import saver_pb2
from tensorflow.python.tools import freeze_graph # pylint: disable=g-direct-tensorflow-import
from object_detection.builders import graph_rewriter_builder
from object_detection.builders import model_builder
from object_detection.core import standard_fields as fields
from object_detection.data_decoders import tf_example_decoder
from object_detection.utils import config_util
from object_detection.utils import shape_utils
# pylint: disable=g-import-not-at-top
try:
from tensorflow.contrib import tfprof as contrib_tfprof
from tensorflow.contrib.quantize.python import graph_matcher
except ImportError:
# TF 2.0 doesn't ship with contrib.
pass
# pylint: enable=g-import-not-at-top
freeze_graph_with_def_protos = freeze_graph.freeze_graph_with_def_protos
def parse_side_inputs(side_input_shapes_string, side_input_names_string,
side_input_types_string):
"""Parses side input flags.
Args:
side_input_shapes_string: The shape of the side input tensors, provided as a
comma-separated list of integers. A value of -1 is used for unknown
dimensions. A `/` denotes a break, starting the shape of the next side
input tensor.
side_input_names_string: The names of the side input tensors, provided as a
comma-separated list of strings.
side_input_types_string: The type of the side input tensors, provided as a
      comma-separated list of types, each of `string`, `int`, or `float`.
Returns:
side_input_shapes: A list of shapes.
side_input_names: A list of strings.
side_input_types: A list of tensorflow dtypes.
"""
if side_input_shapes_string:
side_input_shapes = []
for side_input_shape_list in side_input_shapes_string.split('/'):
side_input_shape = [
int(dim) if dim != '-1' else None
for dim in side_input_shape_list.split(',')
]
side_input_shapes.append(side_input_shape)
else:
raise ValueError('When using side_inputs, side_input_shapes must be '
'specified in the input flags.')
if side_input_names_string:
side_input_names = list(side_input_names_string.split(','))
else:
raise ValueError('When using side_inputs, side_input_names must be '
'specified in the input flags.')
if side_input_types_string:
typelookup = {'float': tf.float32, 'int': tf.int32, 'string': tf.string}
side_input_types = [
typelookup[side_input_type]
for side_input_type in side_input_types_string.split(',')
]
else:
raise ValueError('When using side_inputs, side_input_types must be '
'specified in the input flags.')
return side_input_shapes, side_input_names, side_input_types
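# Example usage (a sketch; the two side inputs and their shapes are
# illustrative only):
#
#   shapes, names, types = parse_side_inputs(
#       '1,2000,2057/1', 'context_features,valid_context_size', 'float,int')
#   # shapes -> [[1, 2000, 2057], [1]]
#   # names  -> ['context_features', 'valid_context_size']
#   # types  -> [tf.float32, tf.int32]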
def rewrite_nn_resize_op(is_quantized=False):
"""Replaces a custom nearest-neighbor resize op with the Tensorflow version.
Some graphs use this custom version for TPU-compatibility.
Args:
is_quantized: True if the default graph is quantized.
"""
def remove_nn():
"""Remove nearest neighbor upsampling structures and replace with TF op."""
input_pattern = graph_matcher.OpTypePattern(
'FakeQuantWithMinMaxVars' if is_quantized else '*')
stack_1_pattern = graph_matcher.OpTypePattern(
'Pack', inputs=[input_pattern, input_pattern], ordered_inputs=False)
stack_2_pattern = graph_matcher.OpTypePattern(
'Pack', inputs=[stack_1_pattern, stack_1_pattern], ordered_inputs=False)
reshape_pattern = graph_matcher.OpTypePattern(
'Reshape', inputs=[stack_2_pattern, 'Const'], ordered_inputs=False)
consumer_pattern1 = graph_matcher.OpTypePattern(
'Add|AddV2|Max|Mul', inputs=[reshape_pattern, '*'],
ordered_inputs=False)
consumer_pattern2 = graph_matcher.OpTypePattern(
'StridedSlice', inputs=[reshape_pattern, '*', '*', '*'],
ordered_inputs=False)
def replace_matches(consumer_pattern):
"""Search for nearest neighbor pattern and replace with TF op."""
match_counter = 0
matcher = graph_matcher.GraphMatcher(consumer_pattern)
for match in matcher.match_graph(tf.get_default_graph()):
match_counter += 1
projection_op = match.get_op(input_pattern)
reshape_op = match.get_op(reshape_pattern)
consumer_op = match.get_op(consumer_pattern)
nn_resize = tf.image.resize_nearest_neighbor(
projection_op.outputs[0],
reshape_op.outputs[0].shape.dims[1:3],
align_corners=False,
name=os.path.split(reshape_op.name)[0] + '/resize_nearest_neighbor')
for index, op_input in enumerate(consumer_op.inputs):
if op_input == reshape_op.outputs[0]:
consumer_op._update_input(index, nn_resize) # pylint: disable=protected-access
break
return match_counter
match_counter = replace_matches(consumer_pattern1)
match_counter += replace_matches(consumer_pattern2)
tf.logging.info('Found and fixed {} matches'.format(match_counter))
return match_counter
# Applying twice because both inputs to Add could be NN pattern
total_removals = 0
while remove_nn():
total_removals += 1
# This number is chosen based on the nas-fpn architecture.
if total_removals > 4:
      raise ValueError('Graph removal encountered an infinite loop.')
def replace_variable_values_with_moving_averages(graph,
current_checkpoint_file,
new_checkpoint_file,
no_ema_collection=None):
"""Replaces variable values in the checkpoint with their moving averages.
If the current checkpoint has shadow variables maintaining moving averages of
the variables defined in the graph, this function generates a new checkpoint
where the variables contain the values of their moving averages.
Args:
graph: a tf.Graph object.
current_checkpoint_file: a checkpoint containing both original variables and
their moving averages.
new_checkpoint_file: file path to write a new checkpoint.
no_ema_collection: A list of namescope substrings to match the variables
to eliminate EMA.
"""
with graph.as_default():
variable_averages = tf.train.ExponentialMovingAverage(0.0)
ema_variables_to_restore = variable_averages.variables_to_restore()
ema_variables_to_restore = config_util.remove_unnecessary_ema(
ema_variables_to_restore, no_ema_collection)
with tf.Session() as sess:
read_saver = tf.train.Saver(ema_variables_to_restore)
read_saver.restore(sess, current_checkpoint_file)
write_saver = tf.train.Saver()
write_saver.save(sess, new_checkpoint_file)
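# Example usage (a sketch mirroring how _export_inference_graph below bakes
# EMA weights into a temporary checkpoint before freezing; the checkpoint path
# is a placeholder):
#
#   tmp_prefix = tempfile.mkdtemp()
#   replace_variable_values_with_moving_averages(
#       tf.get_default_graph(), 'path/to/model.ckpt', tmp_prefix)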
def _image_tensor_input_placeholder(input_shape=None):
"""Returns input placeholder and a 4-D uint8 image tensor."""
if input_shape is None:
input_shape = (None, None, None, 3)
input_tensor = tf.placeholder(
dtype=tf.uint8, shape=input_shape, name='image_tensor')
return input_tensor, input_tensor
def _side_input_tensor_placeholder(side_input_shape, side_input_name,
side_input_type):
"""Returns side input placeholder and side input tensor."""
side_input_tensor = tf.placeholder(
dtype=side_input_type, shape=side_input_shape, name=side_input_name)
return side_input_tensor, side_input_tensor
def _tf_example_input_placeholder(input_shape=None):
"""Returns input that accepts a batch of strings with tf examples.
Args:
input_shape: the shape to resize the output decoded images to (optional).
Returns:
a tuple of input placeholder and the output decoded images.
"""
batch_tf_example_placeholder = tf.placeholder(
tf.string, shape=[None], name='tf_example')
def decode(tf_example_string_tensor):
tensor_dict = tf_example_decoder.TfExampleDecoder().decode(
tf_example_string_tensor)
image_tensor = tensor_dict[fields.InputDataFields.image]
if input_shape is not None:
image_tensor = tf.image.resize(image_tensor, input_shape[1:3])
return image_tensor
return (batch_tf_example_placeholder,
shape_utils.static_or_dynamic_map_fn(
decode,
elems=batch_tf_example_placeholder,
dtype=tf.uint8,
parallel_iterations=32,
back_prop=False))
def _encoded_image_string_tensor_input_placeholder(input_shape=None):
"""Returns input that accepts a batch of PNG or JPEG strings.
Args:
input_shape: the shape to resize the output decoded images to (optional).
Returns:
a tuple of input placeholder and the output decoded images.
"""
batch_image_str_placeholder = tf.placeholder(
dtype=tf.string,
shape=[None],
name='encoded_image_string_tensor')
def decode(encoded_image_string_tensor):
image_tensor = tf.image.decode_image(encoded_image_string_tensor,
channels=3)
image_tensor.set_shape((None, None, 3))
if input_shape is not None:
image_tensor = tf.image.resize(image_tensor, input_shape[1:3])
return image_tensor
return (batch_image_str_placeholder,
tf.map_fn(
decode,
elems=batch_image_str_placeholder,
dtype=tf.uint8,
parallel_iterations=32,
back_prop=False))
input_placeholder_fn_map = {
'image_tensor': _image_tensor_input_placeholder,
'encoded_image_string_tensor':
_encoded_image_string_tensor_input_placeholder,
'tf_example': _tf_example_input_placeholder
}
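# Example usage (a sketch; the image path is a placeholder):
#
#   placeholder, images = input_placeholder_fn_map[
#       'encoded_image_string_tensor']()
#   with tf.Session() as sess:
#     with tf.gfile.GFile('/path/to/image.jpg', 'rb') as f:
#       decoded = sess.run(images, feed_dict={placeholder: [f.read()]})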
def add_output_tensor_nodes(postprocessed_tensors,
output_collection_name='inference_op'):
"""Adds output nodes for detection boxes and scores.
Adds the following nodes for output tensors -
* num_detections: float32 tensor of shape [batch_size].
* detection_boxes: float32 tensor of shape [batch_size, num_boxes, 4]
containing detected boxes.
* detection_scores: float32 tensor of shape [batch_size, num_boxes]
containing scores for the detected boxes.
  * detection_multiclass_scores: (Optional) float32 tensor of shape
    [batch_size, num_boxes, num_classes_with_background] containing the class
    score distribution for detected boxes, including background if any.
  * detection_features: (Optional) float32 tensor of shape
    [batch, num_boxes, roi_height, roi_width, depth] containing classifier
    features for each detected box.
* detection_classes: float32 tensor of shape [batch_size, num_boxes]
containing class predictions for the detected boxes.
* detection_keypoints: (Optional) float32 tensor of shape
[batch_size, num_boxes, num_keypoints, 2] containing keypoints for each
detection box.
* detection_masks: (Optional) float32 tensor of shape
[batch_size, num_boxes, mask_height, mask_width] containing masks for each
detection box.
Args:
postprocessed_tensors: a dictionary containing the following fields
'detection_boxes': [batch, max_detections, 4]
'detection_scores': [batch, max_detections]
'detection_multiclass_scores': [batch, max_detections,
num_classes_with_background]
'detection_features': [batch, num_boxes, roi_height, roi_width, depth]
'detection_classes': [batch, max_detections]
'detection_masks': [batch, max_detections, mask_height, mask_width]
(optional).
'detection_keypoints': [batch, max_detections, num_keypoints, 2]
(optional).
'num_detections': [batch]
output_collection_name: Name of collection to add output tensors to.
Returns:
A tensor dict containing the added output tensor nodes.
"""
detection_fields = fields.DetectionResultFields
label_id_offset = 1
boxes = postprocessed_tensors.get(detection_fields.detection_boxes)
scores = postprocessed_tensors.get(detection_fields.detection_scores)
multiclass_scores = postprocessed_tensors.get(
detection_fields.detection_multiclass_scores)
box_classifier_features = postprocessed_tensors.get(
detection_fields.detection_features)
raw_boxes = postprocessed_tensors.get(detection_fields.raw_detection_boxes)
raw_scores = postprocessed_tensors.get(detection_fields.raw_detection_scores)
classes = postprocessed_tensors.get(
detection_fields.detection_classes) + label_id_offset
keypoints = postprocessed_tensors.get(detection_fields.detection_keypoints)
masks = postprocessed_tensors.get(detection_fields.detection_masks)
num_detections = postprocessed_tensors.get(detection_fields.num_detections)
outputs = {}
outputs[detection_fields.detection_boxes] = tf.identity(
boxes, name=detection_fields.detection_boxes)
outputs[detection_fields.detection_scores] = tf.identity(
scores, name=detection_fields.detection_scores)
if multiclass_scores is not None:
outputs[detection_fields.detection_multiclass_scores] = tf.identity(
multiclass_scores, name=detection_fields.detection_multiclass_scores)
if box_classifier_features is not None:
outputs[detection_fields.detection_features] = tf.identity(
box_classifier_features,
name=detection_fields.detection_features)
outputs[detection_fields.detection_classes] = tf.identity(
classes, name=detection_fields.detection_classes)
outputs[detection_fields.num_detections] = tf.identity(
num_detections, name=detection_fields.num_detections)
if raw_boxes is not None:
outputs[detection_fields.raw_detection_boxes] = tf.identity(
raw_boxes, name=detection_fields.raw_detection_boxes)
if raw_scores is not None:
outputs[detection_fields.raw_detection_scores] = tf.identity(
raw_scores, name=detection_fields.raw_detection_scores)
if keypoints is not None:
outputs[detection_fields.detection_keypoints] = tf.identity(
keypoints, name=detection_fields.detection_keypoints)
if masks is not None:
outputs[detection_fields.detection_masks] = tf.identity(
masks, name=detection_fields.detection_masks)
for output_key in outputs:
tf.add_to_collection(output_collection_name, outputs[output_key])
return outputs
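# Example usage (a sketch; after the graph is frozen and re-imported, the
# identity nodes added above can be fetched by name):
#
#   graph = tf.get_default_graph()
#   boxes = graph.get_tensor_by_name('detection_boxes:0')
#   scores = graph.get_tensor_by_name('detection_scores:0')
#   num_detections = graph.get_tensor_by_name('num_detections:0')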
def write_saved_model(saved_model_path,
frozen_graph_def,
inputs,
outputs):
"""Writes SavedModel to disk.
If checkpoint_path is not None bakes the weights into the graph thereby
eliminating the need of checkpoint files during inference. If the model
was trained with moving averages, setting use_moving_averages to true
restores the moving averages, otherwise the original set of variables
is restored.
Args:
saved_model_path: Path to write SavedModel.
frozen_graph_def: tf.GraphDef holding frozen graph.
inputs: A tensor dictionary containing the inputs to a DetectionModel.
outputs: A tensor dictionary containing the outputs of a DetectionModel.
"""
with tf.Graph().as_default():
with tf.Session() as sess:
tf.import_graph_def(frozen_graph_def, name='')
builder = tf.saved_model.builder.SavedModelBuilder(saved_model_path)
tensor_info_inputs = {}
if isinstance(inputs, dict):
for k, v in inputs.items():
tensor_info_inputs[k] = tf.saved_model.utils.build_tensor_info(v)
else:
tensor_info_inputs['inputs'] = tf.saved_model.utils.build_tensor_info(
inputs)
tensor_info_outputs = {}
for k, v in outputs.items():
tensor_info_outputs[k] = tf.saved_model.utils.build_tensor_info(v)
detection_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs=tensor_info_inputs,
outputs=tensor_info_outputs,
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
))
builder.add_meta_graph_and_variables(
sess,
[tf.saved_model.tag_constants.SERVING],
signature_def_map={
tf.saved_model.signature_constants
.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
detection_signature,
},
)
builder.save()
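# Example usage (a sketch of loading the exported SavedModel with the TF1
# loader, assuming the model was exported with the `image_tensor` input type
# and `image_np` is an HxWx3 uint8 numpy array):
#
#   with tf.Session(graph=tf.Graph()) as sess:
#     tf.saved_model.loader.load(
#         sess, [tf.saved_model.tag_constants.SERVING], saved_model_path)
#     boxes = sess.run('detection_boxes:0',
#                      feed_dict={'image_tensor:0': image_np[None, ...]})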
def write_graph_and_checkpoint(inference_graph_def,
model_path,
input_saver_def,
trained_checkpoint_prefix):
"""Writes the graph and the checkpoint into disk."""
for node in inference_graph_def.node:
node.device = ''
with tf.Graph().as_default():
tf.import_graph_def(inference_graph_def, name='')
with tf.Session() as sess:
saver = tf.train.Saver(
saver_def=input_saver_def, save_relative_paths=True)
saver.restore(sess, trained_checkpoint_prefix)
saver.save(sess, model_path)
def _get_outputs_from_inputs(input_tensors, detection_model,
output_collection_name, **side_inputs):
inputs = tf.cast(input_tensors, dtype=tf.float32)
preprocessed_inputs, true_image_shapes = detection_model.preprocess(inputs)
output_tensors = detection_model.predict(
preprocessed_inputs, true_image_shapes, **side_inputs)
postprocessed_tensors = detection_model.postprocess(
output_tensors, true_image_shapes)
return add_output_tensor_nodes(postprocessed_tensors,
output_collection_name)
def build_detection_graph(input_type, detection_model, input_shape,
output_collection_name, graph_hook_fn,
use_side_inputs=False, side_input_shapes=None,
side_input_names=None, side_input_types=None):
"""Build the detection graph."""
if input_type not in input_placeholder_fn_map:
raise ValueError('Unknown input type: {}'.format(input_type))
placeholder_args = {}
side_inputs = {}
if input_shape is not None:
if (input_type != 'image_tensor' and
input_type != 'encoded_image_string_tensor' and
input_type != 'tf_example' and
input_type != 'tf_sequence_example'):
raise ValueError('Can only specify input shape for `image_tensor`, '
'`encoded_image_string_tensor`, `tf_example`, '
' or `tf_sequence_example` inputs.')
placeholder_args['input_shape'] = input_shape
placeholder_tensor, input_tensors = input_placeholder_fn_map[input_type](
**placeholder_args)
placeholder_tensors = {'inputs': placeholder_tensor}
if use_side_inputs:
for idx, side_input_name in enumerate(side_input_names):
side_input_placeholder, side_input = _side_input_tensor_placeholder(
side_input_shapes[idx], side_input_name, side_input_types[idx])
side_inputs[side_input_name] = side_input
placeholder_tensors[side_input_name] = side_input_placeholder
outputs = _get_outputs_from_inputs(
input_tensors=input_tensors,
detection_model=detection_model,
output_collection_name=output_collection_name,
**side_inputs)
# Add global step to the graph.
slim.get_or_create_global_step()
if graph_hook_fn: graph_hook_fn()
return outputs, placeholder_tensors
def _export_inference_graph(input_type,
detection_model,
use_moving_averages,
trained_checkpoint_prefix,
output_directory,
additional_output_tensor_names=None,
input_shape=None,
output_collection_name='inference_op',
graph_hook_fn=None,
write_inference_graph=False,
temp_checkpoint_prefix='',
use_side_inputs=False,
side_input_shapes=None,
side_input_names=None,
side_input_types=None):
"""Export helper."""
tf.gfile.MakeDirs(output_directory)
frozen_graph_path = os.path.join(output_directory,
'frozen_inference_graph.pb')
saved_model_path = os.path.join(output_directory, 'saved_model')
model_path = os.path.join(output_directory, 'model.ckpt')
outputs, placeholder_tensor_dict = build_detection_graph(
input_type=input_type,
detection_model=detection_model,
input_shape=input_shape,
output_collection_name=output_collection_name,
graph_hook_fn=graph_hook_fn,
use_side_inputs=use_side_inputs,
side_input_shapes=side_input_shapes,
side_input_names=side_input_names,
side_input_types=side_input_types)
profile_inference_graph(tf.get_default_graph())
saver_kwargs = {}
if use_moving_averages:
if not temp_checkpoint_prefix:
      # This check is to be compatible with both versions of SaverDef.
if os.path.isfile(trained_checkpoint_prefix):
saver_kwargs['write_version'] = saver_pb2.SaverDef.V1
temp_checkpoint_prefix = tempfile.NamedTemporaryFile().name
else:
temp_checkpoint_prefix = tempfile.mkdtemp()
replace_variable_values_with_moving_averages(
tf.get_default_graph(), trained_checkpoint_prefix,
temp_checkpoint_prefix)
checkpoint_to_use = temp_checkpoint_prefix
else:
checkpoint_to_use = trained_checkpoint_prefix
saver = tf.train.Saver(**saver_kwargs)
input_saver_def = saver.as_saver_def()
write_graph_and_checkpoint(
inference_graph_def=tf.get_default_graph().as_graph_def(),
model_path=model_path,
input_saver_def=input_saver_def,
trained_checkpoint_prefix=checkpoint_to_use)
if write_inference_graph:
inference_graph_def = tf.get_default_graph().as_graph_def()
inference_graph_path = os.path.join(output_directory,
'inference_graph.pbtxt')
for node in inference_graph_def.node:
node.device = ''
with tf.gfile.GFile(inference_graph_path, 'wb') as f:
f.write(str(inference_graph_def))
if additional_output_tensor_names is not None:
output_node_names = ','.join(list(outputs.keys())+(
additional_output_tensor_names))
else:
output_node_names = ','.join(outputs.keys())
frozen_graph_def = freeze_graph.freeze_graph_with_def_protos(
input_graph_def=tf.get_default_graph().as_graph_def(),
input_saver_def=input_saver_def,
input_checkpoint=checkpoint_to_use,
output_node_names=output_node_names,
restore_op_name='save/restore_all',
filename_tensor_name='save/Const:0',
output_graph=frozen_graph_path,
clear_devices=True,
initializer_nodes='')
write_saved_model(saved_model_path, frozen_graph_def,
placeholder_tensor_dict, outputs)
def export_inference_graph(input_type,
pipeline_config,
trained_checkpoint_prefix,
output_directory,
input_shape=None,
output_collection_name='inference_op',
additional_output_tensor_names=None,
write_inference_graph=False,
use_side_inputs=False,
side_input_shapes=None,
side_input_names=None,
side_input_types=None):
"""Exports inference graph for the model specified in the pipeline config.
Args:
input_type: Type of input for the graph. Can be one of ['image_tensor',
'encoded_image_string_tensor', 'tf_example'].
pipeline_config: pipeline_pb2.TrainAndEvalPipelineConfig proto.
trained_checkpoint_prefix: Path to the trained checkpoint file.
output_directory: Path to write outputs.
input_shape: Sets a fixed shape for an `image_tensor` input. If not
specified, will default to [None, None, None, 3].
output_collection_name: Name of collection to add output tensors to.
If None, does not add output tensors to a collection.
additional_output_tensor_names: list of additional output
tensors to include in the frozen graph.
write_inference_graph: If true, writes inference graph to disk.
use_side_inputs: If True, the model requires side_inputs.
side_input_shapes: List of shapes of the side input tensors,
required if use_side_inputs is True.
side_input_names: List of names of the side input tensors,
required if use_side_inputs is True.
side_input_types: List of types of the side input tensors,
required if use_side_inputs is True.
"""
detection_model = model_builder.build(pipeline_config.model,
is_training=False)
graph_rewriter_fn = None
if pipeline_config.HasField('graph_rewriter'):
graph_rewriter_config = pipeline_config.graph_rewriter
graph_rewriter_fn = graph_rewriter_builder.build(graph_rewriter_config,
is_training=False)
_export_inference_graph(
input_type,
detection_model,
pipeline_config.eval_config.use_moving_averages,
trained_checkpoint_prefix,
output_directory,
additional_output_tensor_names,
input_shape,
output_collection_name,
graph_hook_fn=graph_rewriter_fn,
write_inference_graph=write_inference_graph,
use_side_inputs=use_side_inputs,
side_input_shapes=side_input_shapes,
side_input_names=side_input_names,
side_input_types=side_input_types)
pipeline_config.eval_config.use_moving_averages = False
config_util.save_pipeline_config(pipeline_config, output_directory)
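# Example usage (a sketch of calling the exporter programmatically; paths are
# placeholders, and pipeline_pb2/text_format must be imported by the caller):
#
#   pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
#   with tf.gfile.GFile('path/to/pipeline.config', 'r') as f:
#     text_format.Merge(f.read(), pipeline_config)
#   export_inference_graph(
#       'image_tensor', pipeline_config, 'path/to/model.ckpt',
#       'path/to/exported_model_directory')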
def profile_inference_graph(graph):
"""Profiles the inference graph.
Prints model parameters and computation FLOPs given an inference graph.
BatchNorms are excluded from the parameter count due to the fact that
BatchNorms are usually folded. BatchNorm, Initializer, Regularizer
and BiasAdd are not considered in FLOP count.
Args:
graph: the inference graph.
"""
tfprof_vars_option = (
contrib_tfprof.model_analyzer.TRAINABLE_VARS_PARAMS_STAT_OPTIONS)
tfprof_flops_option = contrib_tfprof.model_analyzer.FLOAT_OPS_OPTIONS
# Batchnorm is usually folded during inference.
tfprof_vars_option['trim_name_regexes'] = ['.*BatchNorm.*']
# Initializer and Regularizer are only used in training.
tfprof_flops_option['trim_name_regexes'] = [
'.*BatchNorm.*', '.*Initializer.*', '.*Regularizer.*', '.*BiasAdd.*'
]
contrib_tfprof.model_analyzer.print_model_analysis(
graph, tfprof_options=tfprof_vars_option)
contrib_tfprof.model_analyzer.print_model_analysis(
      graph, tfprof_options=tfprof_flops_option)
# ==== End of file: object_detection/exporter.py ====
r"""Tool to export an object detection model for inference.
Prepares an object detection tensorflow graph for inference using model
configuration and a trained checkpoint. Outputs inference
graph, associated checkpoint files, a frozen inference graph and a
SavedModel (https://tensorflow.github.io/serving/serving_basic.html).
The inference graph contains one of three input nodes depending on the user
specified option.
* `image_tensor`: Accepts a uint8 4-D tensor of shape [None, None, None, 3]
* `encoded_image_string_tensor`: Accepts a 1-D string tensor of shape [None]
containing encoded PNG or JPEG images. Image resolutions are expected to be
the same if more than 1 image is provided.
* `tf_example`: Accepts a 1-D string tensor of shape [None] containing
serialized TFExample protos. Image resolutions are expected to be the same
if more than 1 image is provided.
and the following output nodes returned by the model.postprocess(..):
* `num_detections`: Outputs float32 tensors of the form [batch]
that specifies the number of valid boxes per image in the batch.
* `detection_boxes`: Outputs float32 tensors of the form
[batch, num_boxes, 4] containing detected boxes.
* `detection_scores`: Outputs float32 tensors of the form
[batch, num_boxes] containing class scores for the detections.
* `detection_classes`: Outputs float32 tensors of the form
[batch, num_boxes] containing classes for the detections.
* `raw_detection_boxes`: Outputs float32 tensors of the form
[batch, raw_num_boxes, 4] containing detection boxes without
post-processing.
* `raw_detection_scores`: Outputs float32 tensors of the form
[batch, raw_num_boxes, num_classes_with_background] containing class score
logits for raw detection boxes.
* `detection_masks`: (Optional) Outputs float32 tensors of the form
[batch, num_boxes, mask_height, mask_width] containing predicted instance
masks for each box if its present in the dictionary of postprocessed
tensors returned by the model.
* detection_multiclass_scores: (Optional) Outputs float32 tensor of shape
  [batch, num_boxes, num_classes_with_background] containing the class
  score distribution for detected boxes, including background if any.
* detection_features: (Optional) float32 tensor of shape
  [batch, num_boxes, roi_height, roi_width, depth] containing classifier
  features for each detected box.
Notes:
* This tool uses `use_moving_averages` from eval_config to decide which
weights to freeze.
Example Usage:
--------------
python export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path path/to/ssd_inception_v2.config \
--trained_checkpoint_prefix path/to/model.ckpt \
--output_directory path/to/exported_model_directory
The expected output would be in the directory
path/to/exported_model_directory (which is created if it does not exist)
with contents:
- inference_graph.pbtxt
- model.ckpt.data-00000-of-00001
    - model.ckpt.index
- model.ckpt.meta
- frozen_inference_graph.pb
+ saved_model (a directory)
Config overrides (see the `config_override` flag) are text protobufs
(also of type pipeline_pb2.TrainEvalPipelineConfig) which are used to override
certain fields in the provided pipeline_config_path. These are useful for
making small changes to the inference graph that differ from the training or
eval config.
Example Usage (in which we change the second stage post-processing score
threshold to be 0.5):
python export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path path/to/ssd_inception_v2.config \
--trained_checkpoint_prefix path/to/model.ckpt \
--output_directory path/to/exported_model_directory \
--config_override " \
model{ \
faster_rcnn { \
second_stage_post_processing { \
batch_non_max_suppression { \
score_threshold: 0.5 \
} \
} \
} \
}"
"""
import tensorflow.compat.v1 as tf
from google.protobuf import text_format
from object_detection import exporter
from object_detection.protos import pipeline_pb2
flags = tf.app.flags
flags.DEFINE_string('input_type', 'image_tensor', 'Type of input node. Can be '
'one of [`image_tensor`, `encoded_image_string_tensor`, '
'`tf_example`]')
flags.DEFINE_string('input_shape', None,
'If input_type is `image_tensor`, this can explicitly set '
'the shape of this input tensor to a fixed size. The '
'dimensions are to be provided as a comma-separated list '
'of integers. A value of -1 can be used for unknown '
                    'dimensions. If not specified, for an `image_tensor`, the '
'default shape will be partially specified as '
'`[None, None, None, 3]`.')
flags.DEFINE_string('pipeline_config_path', None,
'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
'file.')
flags.DEFINE_string('trained_checkpoint_prefix', None,
'Path to trained checkpoint, typically of the form '
'path/to/model.ckpt')
flags.DEFINE_string('output_directory', None, 'Path to write outputs.')
flags.DEFINE_string('config_override', '',
'pipeline_pb2.TrainEvalPipelineConfig '
'text proto to override pipeline_config_path.')
flags.DEFINE_boolean('write_inference_graph', False,
'If true, writes inference graph to disk.')
flags.DEFINE_string('additional_output_tensor_names', None,
'Additional Tensors to output, to be specified as a comma '
'separated list of tensor names.')
flags.DEFINE_boolean('use_side_inputs', False,
'If True, uses side inputs as well as image inputs.')
flags.DEFINE_string('side_input_shapes', None,
'If use_side_inputs is True, this explicitly sets '
'the shape of the side input tensors to a fixed size. The '
'dimensions are to be provided as a comma-separated list '
'of integers. A value of -1 can be used for unknown '
'dimensions. A `/` denotes a break, starting the shape of '
'the next side input tensor. This flag is required if '
'using side inputs.')
flags.DEFINE_string('side_input_types', None,
'If use_side_inputs is True, this explicitly sets '
'the type of the side input tensors. The '
'dimensions are to be provided as a comma-separated list '
                    'of types, each of `string`, `int`, or `float`. '
'This flag is required if using side inputs.')
flags.DEFINE_string('side_input_names', None,
'If use_side_inputs is True, this explicitly sets '
'the names of the side input tensors required by the model '
'assuming the names will be a comma-separated list of '
'strings. This flag is required if using side inputs.')
tf.app.flags.mark_flag_as_required('pipeline_config_path')
tf.app.flags.mark_flag_as_required('trained_checkpoint_prefix')
tf.app.flags.mark_flag_as_required('output_directory')
FLAGS = flags.FLAGS
def main(_):
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.gfile.GFile(FLAGS.pipeline_config_path, 'r') as f:
text_format.Merge(f.read(), pipeline_config)
text_format.Merge(FLAGS.config_override, pipeline_config)
if FLAGS.input_shape:
input_shape = [
int(dim) if dim != '-1' else None
for dim in FLAGS.input_shape.split(',')
]
else:
input_shape = None
if FLAGS.use_side_inputs:
side_input_shapes, side_input_names, side_input_types = (
exporter.parse_side_inputs(
FLAGS.side_input_shapes,
FLAGS.side_input_names,
FLAGS.side_input_types))
else:
side_input_shapes = None
side_input_names = None
side_input_types = None
if FLAGS.additional_output_tensor_names:
additional_output_tensor_names = list(
FLAGS.additional_output_tensor_names.split(','))
else:
additional_output_tensor_names = None
exporter.export_inference_graph(
FLAGS.input_type, pipeline_config, FLAGS.trained_checkpoint_prefix,
FLAGS.output_directory, input_shape=input_shape,
write_inference_graph=FLAGS.write_inference_graph,
additional_output_tensor_names=additional_output_tensor_names,
use_side_inputs=FLAGS.use_side_inputs,
side_input_shapes=side_input_shapes,
side_input_names=side_input_names,
side_input_types=side_input_types)
if __name__ == '__main__':
  tf.app.run()
# ==== End of file: object_detection/export_inference_graph.py ====
r"""Tool to export an object detection model for inference.
Prepares an object detection tensorflow graph for inference using model
configuration and a trained checkpoint. Outputs associated checkpoint files,
a SavedModel, and a copy of the model config.
The inference graph contains one of three input nodes depending on the user
specified option.
* `image_tensor`: Accepts a uint8 4-D tensor of shape [1, None, None, 3]
* `float_image_tensor`: Accepts a float32 4-D tensor of shape
[1, None, None, 3]
* `encoded_image_string_tensor`: Accepts a 1-D string tensor of shape [None]
containing encoded PNG or JPEG images. Image resolutions are expected to be
the same if more than 1 image is provided.
* `tf_example`: Accepts a 1-D string tensor of shape [None] containing
serialized TFExample protos. Image resolutions are expected to be the same
if more than 1 image is provided.
* `image_and_boxes_tensor`: Accepts a 4-D image tensor of size
[1, None, None, 3] and a boxes tensor of size [1, None, 4] of normalized
bounding boxes. To be able to support this option, the model needs
to implement a predict_masks_from_boxes method. See the documentation
for DetectionFromImageAndBoxModule for details.
and the following output nodes returned by the model.postprocess(..):
* `num_detections`: Outputs float32 tensors of the form [batch]
that specifies the number of valid boxes per image in the batch.
* `detection_boxes`: Outputs float32 tensors of the form
[batch, num_boxes, 4] containing detected boxes.
* `detection_scores`: Outputs float32 tensors of the form
[batch, num_boxes] containing class scores for the detections.
* `detection_classes`: Outputs float32 tensors of the form
[batch, num_boxes] containing classes for the detections.
Example Usage:
--------------
python exporter_main_v2.py \
--input_type image_tensor \
--pipeline_config_path path/to/ssd_inception_v2.config \
--trained_checkpoint_dir path/to/checkpoint \
--output_directory path/to/exported_model_directory
--use_side_inputs True/False \
--side_input_shapes dim_0,dim_1,...dim_a/.../dim_0,dim_1,...,dim_z \
--side_input_names name_a,name_b,...,name_c \
--side_input_types type_1,type_2
The expected output would be in the directory
path/to/exported_model_directory (which is created if it does not exist)
holding two subdirectories (corresponding to checkpoint and SavedModel,
respectively) and a copy of the pipeline config.
Config overrides (see the `config_override` flag) are text protobufs
(also of type pipeline_pb2.TrainEvalPipelineConfig) which are used to override
certain fields in the provided pipeline_config_path. These are useful for
making small changes to the inference graph that differ from the training or
eval config.
Example Usage (in which we change the second stage post-processing score
threshold to be 0.5):
python exporter_main_v2.py \
--input_type image_tensor \
--pipeline_config_path path/to/ssd_inception_v2.config \
--trained_checkpoint_dir path/to/checkpoint \
--output_directory path/to/exported_model_directory \
--config_override " \
model{ \
faster_rcnn { \
second_stage_post_processing { \
batch_non_max_suppression { \
score_threshold: 0.5 \
} \
} \
} \
}"
If side inputs are desired, the following arguments could be appended
(the example below is for Context R-CNN).
--use_side_inputs True \
--side_input_shapes 1,2000,2057/1 \
--side_input_names context_features,valid_context_size \
--side_input_types tf.float32,tf.int32
"""
from absl import app
from absl import flags
import tensorflow.compat.v2 as tf
from google.protobuf import text_format
from object_detection import exporter_lib_v2
from object_detection.protos import pipeline_pb2
tf.enable_v2_behavior()
FLAGS = flags.FLAGS
flags.DEFINE_string('input_type', 'image_tensor', 'Type of input node. Can be '
'one of [`image_tensor`, `encoded_image_string_tensor`, '
'`tf_example`, `float_image_tensor`, '
'`image_and_boxes_tensor`]')
flags.DEFINE_string('pipeline_config_path', None,
'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
'file.')
flags.DEFINE_string('trained_checkpoint_dir', None,
'Path to trained checkpoint directory')
flags.DEFINE_string('output_directory', None, 'Path to write outputs.')
flags.DEFINE_string('config_override', '',
'pipeline_pb2.TrainEvalPipelineConfig '
'text proto to override pipeline_config_path.')
flags.DEFINE_boolean('use_side_inputs', False,
'If True, uses side inputs as well as image inputs.')
flags.DEFINE_string('side_input_shapes', '',
'If use_side_inputs is True, this explicitly sets '
'the shape of the side input tensors to a fixed size. The '
'dimensions are to be provided as a comma-separated list '
'of integers. A value of -1 can be used for unknown '
'dimensions. A `/` denotes a break, starting the shape of '
'the next side input tensor. This flag is required if '
'using side inputs.')
flags.DEFINE_string('side_input_types', '',
'If use_side_inputs is True, this explicitly sets '
'the type of the side input tensors. The '
'dimensions are to be provided as a comma-separated list '
'of types, each of `string`, `integer`, or `float`. '
'This flag is required if using side inputs.')
flags.DEFINE_string('side_input_names', '',
'If use_side_inputs is True, this explicitly sets '
'the names of the side input tensors required by the model '
'assuming the names will be a comma-separated list of '
'strings. This flag is required if using side inputs.')
flags.mark_flag_as_required('pipeline_config_path')
flags.mark_flag_as_required('trained_checkpoint_dir')
flags.mark_flag_as_required('output_directory')
def main(_):
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(FLAGS.pipeline_config_path, 'r') as f:
text_format.Merge(f.read(), pipeline_config)
text_format.Merge(FLAGS.config_override, pipeline_config)
exporter_lib_v2.export_inference_graph(
FLAGS.input_type, pipeline_config, FLAGS.trained_checkpoint_dir,
FLAGS.output_directory, FLAGS.use_side_inputs, FLAGS.side_input_shapes,
FLAGS.side_input_types, FLAGS.side_input_names)
if __name__ == '__main__':
  app.run(main)
# ==== End of file: object_detection/exporter_main_v2.py ====
import os
import tempfile
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow.core.framework import attr_value_pb2
from tensorflow.core.framework import types_pb2
from tensorflow.core.protobuf import saver_pb2
from object_detection import exporter
from object_detection.builders import graph_rewriter_builder
from object_detection.builders import model_builder
from object_detection.builders import post_processing_builder
from object_detection.core import box_list
from object_detection.utils import tf_version
_DEFAULT_NUM_CHANNELS = 3
_DEFAULT_NUM_COORD_BOX = 4
if tf_version.is_tf1():
from tensorflow.tools.graph_transforms import TransformGraph # pylint: disable=g-import-not-at-top
def get_const_center_size_encoded_anchors(anchors):
"""Exports center-size encoded anchors as a constant tensor.
Args:
anchors: a float32 tensor of shape [num_anchors, 4] containing the anchor
boxes
Returns:
encoded_anchors: a float32 constant tensor of shape [num_anchors, 4]
containing the anchor boxes.
"""
anchor_boxlist = box_list.BoxList(anchors)
y, x, h, w = anchor_boxlist.get_center_coordinates_and_sizes()
num_anchors = y.get_shape().as_list()
with tf.Session() as sess:
y_out, x_out, h_out, w_out = sess.run([y, x, h, w])
encoded_anchors = tf.constant(
np.transpose(np.stack((y_out, x_out, h_out, w_out))),
dtype=tf.float32,
shape=[num_anchors[0], _DEFAULT_NUM_COORD_BOX],
name='anchors')
return encoded_anchors
def append_postprocessing_op(frozen_graph_def,
max_detections,
max_classes_per_detection,
nms_score_threshold,
nms_iou_threshold,
num_classes,
scale_values,
detections_per_class=100,
use_regular_nms=False,
additional_output_tensors=()):
"""Appends postprocessing custom op.
Args:
frozen_graph_def: Frozen GraphDef for SSD model after freezing the
checkpoint
max_detections: Maximum number of detections (boxes) to show
max_classes_per_detection: Number of classes to display per detection
nms_score_threshold: Score threshold used in Non-maximal suppression in
post-processing
nms_iou_threshold: Intersection-over-union threshold used in Non-maximal
suppression in post-processing
num_classes: number of classes in SSD detector
scale_values: scale values is a dict with following key-value pairs
{y_scale: 10, x_scale: 10, h_scale: 5, w_scale: 5} that are used in decode
centersize boxes
detections_per_class: In regular NonMaxSuppression, number of anchors used
for NonMaxSuppression per class
use_regular_nms: Flag to set postprocessing op to use Regular NMS instead of
Fast NMS.
additional_output_tensors: Array of additional tensor names to output.
Tensors are appended after postprocessing output.
Returns:
transformed_graph_def: Frozen GraphDef with postprocessing custom op
appended
TFLite_Detection_PostProcess custom op node has four outputs:
detection_boxes: a float32 tensor of shape [1, num_boxes, 4] with box
locations
detection_classes: a float32 tensor of shape [1, num_boxes]
with class indices
detection_scores: a float32 tensor of shape [1, num_boxes]
with class scores
num_boxes: a float32 tensor of size 1 containing the number of detected
boxes
"""
new_output = frozen_graph_def.node.add()
new_output.op = 'TFLite_Detection_PostProcess'
new_output.name = 'TFLite_Detection_PostProcess'
new_output.attr['_output_quantized'].CopyFrom(
attr_value_pb2.AttrValue(b=True))
new_output.attr['_output_types'].list.type.extend([
types_pb2.DT_FLOAT, types_pb2.DT_FLOAT, types_pb2.DT_FLOAT,
types_pb2.DT_FLOAT
])
new_output.attr['_support_output_type_float_in_quantized_op'].CopyFrom(
attr_value_pb2.AttrValue(b=True))
new_output.attr['max_detections'].CopyFrom(
attr_value_pb2.AttrValue(i=max_detections))
new_output.attr['max_classes_per_detection'].CopyFrom(
attr_value_pb2.AttrValue(i=max_classes_per_detection))
new_output.attr['nms_score_threshold'].CopyFrom(
attr_value_pb2.AttrValue(f=nms_score_threshold.pop()))
new_output.attr['nms_iou_threshold'].CopyFrom(
attr_value_pb2.AttrValue(f=nms_iou_threshold.pop()))
new_output.attr['num_classes'].CopyFrom(
attr_value_pb2.AttrValue(i=num_classes))
new_output.attr['y_scale'].CopyFrom(
attr_value_pb2.AttrValue(f=scale_values['y_scale'].pop()))
new_output.attr['x_scale'].CopyFrom(
attr_value_pb2.AttrValue(f=scale_values['x_scale'].pop()))
new_output.attr['h_scale'].CopyFrom(
attr_value_pb2.AttrValue(f=scale_values['h_scale'].pop()))
new_output.attr['w_scale'].CopyFrom(
attr_value_pb2.AttrValue(f=scale_values['w_scale'].pop()))
new_output.attr['detections_per_class'].CopyFrom(
attr_value_pb2.AttrValue(i=detections_per_class))
new_output.attr['use_regular_nms'].CopyFrom(
attr_value_pb2.AttrValue(b=use_regular_nms))
new_output.input.extend(
['raw_outputs/box_encodings', 'raw_outputs/class_predictions', 'anchors'])
# Transform the graph to append new postprocessing op
input_names = []
output_names = ['TFLite_Detection_PostProcess'
] + list(additional_output_tensors)
transforms = ['strip_unused_nodes']
transformed_graph_def = TransformGraph(frozen_graph_def, input_names,
output_names, transforms)
return transformed_graph_def
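
# Illustrative call (hypothetical values). Note that the thresholds and scale
# values are passed as single-element sets, matching the .pop() calls above:
#   transformed = append_postprocessing_op(
#       frozen_graph_def, max_detections=10, max_classes_per_detection=1,
#       nms_score_threshold={0.3}, nms_iou_threshold={0.6}, num_classes=90,
#       scale_values={'y_scale': {10.0}, 'x_scale': {10.0},
#                     'h_scale': {5.0}, 'w_scale': {5.0}})
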
def export_tflite_graph(pipeline_config,
trained_checkpoint_prefix,
output_dir,
add_postprocessing_op,
max_detections,
max_classes_per_detection,
detections_per_class=100,
use_regular_nms=False,
binary_graph_name='tflite_graph.pb',
txt_graph_name='tflite_graph.pbtxt',
additional_output_tensors=()):
"""Exports a tflite compatible graph and anchors for ssd detection model.
Anchors are written to a tensor and tflite compatible graph
is written to output_dir/tflite_graph.pb.
Args:
pipeline_config: a pipeline.proto object containing the configuration for
SSD model to export.
trained_checkpoint_prefix: a file prefix for the checkpoint containing the
trained parameters of the SSD model.
output_dir: A directory to write the tflite graph and anchor file to.
    add_postprocessing_op: If true, a TFLite_Detection_PostProcess custom op is
      appended to the frozen graph.
max_detections: Maximum number of detections (boxes) to show
max_classes_per_detection: Number of classes to display per detection
detections_per_class: In regular NonMaxSuppression, number of anchors used
for NonMaxSuppression per class
use_regular_nms: Flag to set postprocessing op to use Regular NMS instead of
Fast NMS.
binary_graph_name: Name of the exported graph file in binary format.
txt_graph_name: Name of the exported graph file in text format.
additional_output_tensors: Array of additional tensor names to output.
Additional tensors are appended to the end of output tensor list.
  Raises:
    ValueError: if the pipeline config contains a model other than ssd, or
      uses an image resizer other than fixed_shape_resizer.
"""
tf.gfile.MakeDirs(output_dir)
if pipeline_config.model.WhichOneof('model') != 'ssd':
raise ValueError('Only ssd models are supported in tflite. '
'Found {} in config'.format(
pipeline_config.model.WhichOneof('model')))
num_classes = pipeline_config.model.ssd.num_classes
nms_score_threshold = {
pipeline_config.model.ssd.post_processing.batch_non_max_suppression
.score_threshold
}
nms_iou_threshold = {
pipeline_config.model.ssd.post_processing.batch_non_max_suppression
.iou_threshold
}
scale_values = {}
scale_values['y_scale'] = {
pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.y_scale
}
scale_values['x_scale'] = {
pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.x_scale
}
scale_values['h_scale'] = {
pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.height_scale
}
scale_values['w_scale'] = {
pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.width_scale
}
image_resizer_config = pipeline_config.model.ssd.image_resizer
image_resizer = image_resizer_config.WhichOneof('image_resizer_oneof')
num_channels = _DEFAULT_NUM_CHANNELS
if image_resizer == 'fixed_shape_resizer':
height = image_resizer_config.fixed_shape_resizer.height
width = image_resizer_config.fixed_shape_resizer.width
if image_resizer_config.fixed_shape_resizer.convert_to_grayscale:
num_channels = 1
shape = [1, height, width, num_channels]
else:
raise ValueError(
        'Only fixed_shape_resizer '
'is supported with tflite. Found {}'.format(
image_resizer_config.WhichOneof('image_resizer_oneof')))
image = tf.placeholder(
tf.float32, shape=shape, name='normalized_input_image_tensor')
detection_model = model_builder.build(
pipeline_config.model, is_training=False)
predicted_tensors = detection_model.predict(image, true_image_shapes=None)
# The score conversion occurs before the post-processing custom op
_, score_conversion_fn = post_processing_builder.build(
pipeline_config.model.ssd.post_processing)
class_predictions = score_conversion_fn(
predicted_tensors['class_predictions_with_background'])
with tf.name_scope('raw_outputs'):
# 'raw_outputs/box_encodings': a float32 tensor of shape [1, num_anchors, 4]
# containing the encoded box predictions. Note that these are raw
    # predictions: no Non-Max suppression and no center-size box decoding
    # have been applied to them.
tf.identity(predicted_tensors['box_encodings'], name='box_encodings')
# 'raw_outputs/class_predictions': a float32 tensor of shape
# [1, num_anchors, num_classes] containing the class scores for each anchor
# after applying score conversion.
tf.identity(class_predictions, name='class_predictions')
    # 'anchors': a float32 tensor of shape
    # [num_anchors, 4] containing the anchors as a constant node.
tf.identity(
get_const_center_size_encoded_anchors(predicted_tensors['anchors']),
name='anchors')
# Add global step to the graph, so we know the training step number when we
# evaluate the model.
tf.train.get_or_create_global_step()
# graph rewriter
is_quantized = pipeline_config.HasField('graph_rewriter')
if is_quantized:
graph_rewriter_config = pipeline_config.graph_rewriter
graph_rewriter_fn = graph_rewriter_builder.build(
graph_rewriter_config, is_training=False)
graph_rewriter_fn()
if pipeline_config.model.ssd.feature_extractor.HasField('fpn'):
exporter.rewrite_nn_resize_op(is_quantized)
# freeze the graph
saver_kwargs = {}
if pipeline_config.eval_config.use_moving_averages:
saver_kwargs['write_version'] = saver_pb2.SaverDef.V1
moving_average_checkpoint = tempfile.NamedTemporaryFile()
exporter.replace_variable_values_with_moving_averages(
tf.get_default_graph(), trained_checkpoint_prefix,
moving_average_checkpoint.name)
checkpoint_to_use = moving_average_checkpoint.name
else:
checkpoint_to_use = trained_checkpoint_prefix
saver = tf.train.Saver(**saver_kwargs)
input_saver_def = saver.as_saver_def()
frozen_graph_def = exporter.freeze_graph_with_def_protos(
input_graph_def=tf.get_default_graph().as_graph_def(),
input_saver_def=input_saver_def,
input_checkpoint=checkpoint_to_use,
output_node_names=','.join([
'raw_outputs/box_encodings', 'raw_outputs/class_predictions',
'anchors'
] + list(additional_output_tensors)),
restore_op_name='save/restore_all',
filename_tensor_name='save/Const:0',
clear_devices=True,
output_graph='',
initializer_nodes='')
# Add new operation to do post processing in a custom op (TF Lite only)
if add_postprocessing_op:
transformed_graph_def = append_postprocessing_op(
frozen_graph_def,
max_detections,
max_classes_per_detection,
nms_score_threshold,
nms_iou_threshold,
num_classes,
scale_values,
detections_per_class,
use_regular_nms,
additional_output_tensors=additional_output_tensors)
else:
    # Return the frozen graph without adding the post-processing custom op.
transformed_graph_def = frozen_graph_def
binary_graph = os.path.join(output_dir, binary_graph_name)
with tf.gfile.GFile(binary_graph, 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
txt_graph = os.path.join(output_dir, txt_graph_name)
with tf.gfile.GFile(txt_graph, 'w') as f:
    f.write(str(transformed_graph_def))

# ==== end of object_detection/export_tflite_ssd_graph_lib.py ====
# ==== object_detection/inputs.py ====
"""Model input function for tf-learn object detection model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import tensorflow.compat.v1 as tf
from object_detection.builders import dataset_builder
from object_detection.builders import image_resizer_builder
from object_detection.builders import model_builder
from object_detection.builders import preprocessor_builder
from object_detection.core import box_list
from object_detection.core import box_list_ops
from object_detection.core import densepose_ops
from object_detection.core import keypoint_ops
from object_detection.core import preprocessor
from object_detection.core import standard_fields as fields
from object_detection.data_decoders import tf_example_decoder
from object_detection.protos import eval_pb2
from object_detection.protos import image_resizer_pb2
from object_detection.protos import input_reader_pb2
from object_detection.protos import model_pb2
from object_detection.protos import train_pb2
from object_detection.utils import config_util
from object_detection.utils import ops as util_ops
from object_detection.utils import shape_utils
HASH_KEY = 'hash'
HASH_BINS = 1 << 31
SERVING_FED_EXAMPLE_KEY = 'serialized_example'
_LABEL_OFFSET = 1
# A map of names to methods that help build the input pipeline.
INPUT_BUILDER_UTIL_MAP = {
'dataset_build': dataset_builder.build,
'model_build': model_builder.build,
}
def _multiclass_scores_or_one_hot_labels(multiclass_scores,
groundtruth_boxes,
groundtruth_classes, num_classes):
"""Returns one-hot encoding of classes when multiclass_scores is empty."""
  # Replace the groundtruth_classes tensor with the multiclass_scores tensor
  # when it is non-empty. If multiclass_scores is empty, fall back on the
  # groundtruth_classes tensor.
def true_fn():
return tf.reshape(multiclass_scores,
[tf.shape(groundtruth_boxes)[0], num_classes])
def false_fn():
return tf.one_hot(groundtruth_classes, num_classes)
return tf.cond(tf.size(multiclass_scores) > 0, true_fn, false_fn)
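
# Illustrative behavior (hypothetical shapes): with 2 boxes and 3 classes, an
# empty `multiclass_scores` falls back to one-hot labels, e.g.
#   _multiclass_scores_or_one_hot_labels(
#       tf.zeros([0]), boxes, tf.constant([0, 2]), 3)
# evaluates to [[1., 0., 0.], [0., 0., 1.]] (here `boxes` is any [2, 4]
# groundtruth-box tensor).
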
def convert_labeled_classes_to_k_hot(groundtruth_labeled_classes,
num_classes,
map_empty_to_ones=False):
"""Returns k-hot encoding of the labeled classes.
If map_empty_to_ones is enabled and the input labeled_classes is empty,
this function assumes all classes are exhaustively labeled, thus returning
an all-one encoding.
Args:
groundtruth_labeled_classes: a Tensor holding a sparse representation of
labeled classes.
num_classes: an integer representing the number of classes
map_empty_to_ones: boolean (default: False). Set this to be True to default
to an all-ones result if given an empty `groundtruth_labeled_classes`.
Returns:
A k-hot (and 0-indexed) tensor representation of
`groundtruth_labeled_classes`.
"""
  # If map_empty_to_ones is set and the input labeled_classes is empty, all
  # classes are assumed to be exhaustively labeled, so an all-ones encoding is
  # returned.
def true_fn():
return tf.sparse_to_dense(
groundtruth_labeled_classes - _LABEL_OFFSET, [num_classes],
tf.constant(1, dtype=tf.float32),
validate_indices=False)
def false_fn():
return tf.ones(num_classes, dtype=tf.float32)
if map_empty_to_ones:
return tf.cond(tf.size(groundtruth_labeled_classes) > 0, true_fn, false_fn)
return true_fn()
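
# Illustrative behavior (hypothetical values): with num_classes=4 and
# 1-indexed labels [1, 3],
#   convert_labeled_classes_to_k_hot(tf.constant([1, 3]), 4)
# evaluates to [1., 0., 1., 0.]. An empty input with map_empty_to_ones=True
# evaluates to [1., 1., 1., 1.].
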
def _remove_unrecognized_classes(class_ids, unrecognized_label):
"""Returns class ids with unrecognized classes filtered out."""
recognized_indices = tf.squeeze(
tf.where(tf.greater(class_ids, unrecognized_label)), -1)
return tf.gather(class_ids, recognized_indices)
def assert_or_prune_invalid_boxes(boxes):
"""Makes sure boxes have valid sizes (ymax >= ymin, xmax >= xmin).
When the hardware supports assertions, the function raises an error when
boxes have an invalid size. If assertions are not supported (e.g. on TPU),
boxes with invalid sizes are filtered out.
Args:
boxes: float tensor of shape [num_boxes, 4]
Returns:
boxes: float tensor of shape [num_valid_boxes, 4] with invalid boxes
filtered out.
Raises:
tf.errors.InvalidArgumentError: When we detect boxes with invalid size.
This is not supported on TPUs.
"""
ymin, xmin, ymax, xmax = tf.split(
boxes, num_or_size_splits=4, axis=1)
height_check = tf.Assert(tf.reduce_all(ymax >= ymin), [ymin, ymax])
width_check = tf.Assert(tf.reduce_all(xmax >= xmin), [xmin, xmax])
with tf.control_dependencies([height_check, width_check]):
boxes_tensor = tf.concat([ymin, xmin, ymax, xmax], axis=1)
boxlist = box_list.BoxList(boxes_tensor)
# TODO(b/149221748) Remove pruning when XLA supports assertions.
boxlist = box_list_ops.prune_small_boxes(boxlist, 0)
return boxlist.get()
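
# Illustrative behavior: given a hypothetical box list containing
# [0.5, 0.5, 0.2, 0.2] (ymax < ymin), the assertion fires where assertions are
# supported; on TPU the invalid box is instead dropped by prune_small_boxes.
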
def transform_input_data(tensor_dict,
model_preprocess_fn,
image_resizer_fn,
num_classes,
data_augmentation_fn=None,
merge_multiple_boxes=False,
retain_original_image=False,
use_multiclass_scores=False,
use_bfloat16=False,
retain_original_image_additional_channels=False,
keypoint_type_weight=None):
"""A single function that is responsible for all input data transformations.
Data transformation functions are applied in the following order.
1. If key fields.InputDataFields.image_additional_channels is present in
tensor_dict, the additional channels will be merged into
fields.InputDataFields.image.
2. data_augmentation_fn (optional): applied on tensor_dict.
3. model_preprocess_fn: applied only on image tensor in tensor_dict.
4. keypoint_type_weight (optional): If groundtruth keypoints are in
the tensor dictionary, per-keypoint weights are produced. These weights are
initialized by `keypoint_type_weight` (or ones if left None).
Then, for all keypoints that are not visible, the weights are set to 0 (to
avoid penalizing the model in a loss function).
5. image_resizer_fn: applied on original image and instance mask tensor in
tensor_dict.
6. one_hot_encoding: applied to classes tensor in tensor_dict.
7. merge_multiple_boxes (optional): when groundtruth boxes are exactly the
same they can be merged into a single box with an associated k-hot class
label.
Args:
tensor_dict: dictionary containing input tensors keyed by
fields.InputDataFields.
model_preprocess_fn: model's preprocess function to apply on image tensor.
This function must take in a 4-D float tensor and return a 4-D preprocess
float tensor and a tensor containing the true image shape.
image_resizer_fn: image resizer function to apply on groundtruth instance
      masks. This function must take a 3-D float tensor of an image and a 3-D
tensor of instance masks and return a resized version of these along with
the true shapes.
num_classes: number of max classes to one-hot (or k-hot) encode the class
labels.
data_augmentation_fn: (optional) data augmentation function to apply on
input `tensor_dict`.
merge_multiple_boxes: (optional) whether to merge multiple groundtruth boxes
and classes for a given image if the boxes are exactly the same.
retain_original_image: (optional) whether to retain original image in the
output dictionary.
use_multiclass_scores: whether to use multiclass scores as class targets
instead of one-hot encoding of `groundtruth_classes`. When
this is True and multiclass_scores is empty, one-hot encoding of
`groundtruth_classes` is used as a fallback.
use_bfloat16: (optional) a bool, whether to use bfloat16 in training.
retain_original_image_additional_channels: (optional) Whether to retain
original image additional channels in the output dictionary.
keypoint_type_weight: A list (of length num_keypoints) containing
groundtruth loss weights to use for each keypoint. If None, will use a
weight of 1.
Returns:
A dictionary keyed by fields.InputDataFields containing the tensors obtained
after applying all the transformations.
Raises:
KeyError: If both groundtruth_labeled_classes and groundtruth_image_classes
are provided by the decoder in tensor_dict since both fields are
considered to contain the same information.
"""
out_tensor_dict = tensor_dict.copy()
input_fields = fields.InputDataFields
labeled_classes_field = input_fields.groundtruth_labeled_classes
image_classes_field = input_fields.groundtruth_image_classes
verified_neg_classes_field = input_fields.groundtruth_verified_neg_classes
not_exhaustive_field = input_fields.groundtruth_not_exhaustive_classes
if (labeled_classes_field in out_tensor_dict and
image_classes_field in out_tensor_dict):
    raise KeyError('groundtruth_labeled_classes and groundtruth_image_classes '
                   'are provided by the decoder, but only one should be set.')
for field, map_empty_to_ones in [
(labeled_classes_field, True),
(image_classes_field, True),
(verified_neg_classes_field, False),
(not_exhaustive_field, False)]:
if field in out_tensor_dict:
out_tensor_dict[field] = _remove_unrecognized_classes(
out_tensor_dict[field], unrecognized_label=-1)
out_tensor_dict[field] = convert_labeled_classes_to_k_hot(
out_tensor_dict[field], num_classes, map_empty_to_ones)
if input_fields.multiclass_scores in out_tensor_dict:
out_tensor_dict[
input_fields
.multiclass_scores] = _multiclass_scores_or_one_hot_labels(
out_tensor_dict[input_fields.multiclass_scores],
out_tensor_dict[input_fields.groundtruth_boxes],
out_tensor_dict[input_fields.groundtruth_classes],
num_classes)
if input_fields.groundtruth_boxes in out_tensor_dict:
out_tensor_dict = util_ops.filter_groundtruth_with_nan_box_coordinates(
out_tensor_dict)
out_tensor_dict = util_ops.filter_unrecognized_classes(out_tensor_dict)
if retain_original_image:
out_tensor_dict[input_fields.original_image] = tf.cast(
image_resizer_fn(out_tensor_dict[input_fields.image],
None)[0], tf.uint8)
if input_fields.image_additional_channels in out_tensor_dict:
channels = out_tensor_dict[input_fields.image_additional_channels]
out_tensor_dict[input_fields.image] = tf.concat(
[out_tensor_dict[input_fields.image], channels], axis=2)
if retain_original_image_additional_channels:
out_tensor_dict[
input_fields.image_additional_channels] = tf.cast(
image_resizer_fn(channels, None)[0], tf.uint8)
# Apply data augmentation ops.
if data_augmentation_fn is not None:
out_tensor_dict = data_augmentation_fn(out_tensor_dict)
# Apply model preprocessing ops and resize instance masks.
image = out_tensor_dict[input_fields.image]
preprocessed_resized_image, true_image_shape = model_preprocess_fn(
tf.expand_dims(tf.cast(image, dtype=tf.float32), axis=0))
preprocessed_shape = tf.shape(preprocessed_resized_image)
new_height, new_width = preprocessed_shape[1], preprocessed_shape[2]
im_box = tf.stack([
0.0, 0.0,
tf.to_float(new_height) / tf.to_float(true_image_shape[0, 0]),
tf.to_float(new_width) / tf.to_float(true_image_shape[0, 1])
])
if input_fields.groundtruth_boxes in tensor_dict:
bboxes = out_tensor_dict[input_fields.groundtruth_boxes]
boxlist = box_list.BoxList(bboxes)
realigned_bboxes = box_list_ops.change_coordinate_frame(boxlist, im_box)
realigned_boxes_tensor = realigned_bboxes.get()
valid_boxes_tensor = assert_or_prune_invalid_boxes(realigned_boxes_tensor)
out_tensor_dict[
input_fields.groundtruth_boxes] = valid_boxes_tensor
if input_fields.groundtruth_keypoints in tensor_dict:
keypoints = out_tensor_dict[input_fields.groundtruth_keypoints]
realigned_keypoints = keypoint_ops.change_coordinate_frame(keypoints,
im_box)
out_tensor_dict[
input_fields.groundtruth_keypoints] = realigned_keypoints
flds_gt_kpt = input_fields.groundtruth_keypoints
flds_gt_kpt_vis = input_fields.groundtruth_keypoint_visibilities
flds_gt_kpt_weights = input_fields.groundtruth_keypoint_weights
if flds_gt_kpt_vis not in out_tensor_dict:
out_tensor_dict[flds_gt_kpt_vis] = tf.ones_like(
out_tensor_dict[flds_gt_kpt][:, :, 0],
dtype=tf.bool)
flds_gt_kpt_depth = fields.InputDataFields.groundtruth_keypoint_depths
flds_gt_kpt_depth_weight = (
fields.InputDataFields.groundtruth_keypoint_depth_weights)
if flds_gt_kpt_depth in out_tensor_dict:
out_tensor_dict[flds_gt_kpt_depth] = out_tensor_dict[flds_gt_kpt_depth]
out_tensor_dict[flds_gt_kpt_depth_weight] = out_tensor_dict[
flds_gt_kpt_depth_weight]
out_tensor_dict[flds_gt_kpt_weights] = (
keypoint_ops.keypoint_weights_from_visibilities(
out_tensor_dict[flds_gt_kpt_vis],
keypoint_type_weight))
dp_surface_coords_fld = input_fields.groundtruth_dp_surface_coords
if dp_surface_coords_fld in tensor_dict:
dp_surface_coords = out_tensor_dict[dp_surface_coords_fld]
realigned_dp_surface_coords = densepose_ops.change_coordinate_frame(
dp_surface_coords, im_box)
out_tensor_dict[dp_surface_coords_fld] = realigned_dp_surface_coords
if use_bfloat16:
preprocessed_resized_image = tf.cast(
preprocessed_resized_image, tf.bfloat16)
if input_fields.context_features in out_tensor_dict:
out_tensor_dict[input_fields.context_features] = tf.cast(
out_tensor_dict[input_fields.context_features], tf.bfloat16)
out_tensor_dict[input_fields.image] = tf.squeeze(
preprocessed_resized_image, axis=0)
out_tensor_dict[input_fields.true_image_shape] = tf.squeeze(
true_image_shape, axis=0)
if input_fields.groundtruth_instance_masks in out_tensor_dict:
masks = out_tensor_dict[input_fields.groundtruth_instance_masks]
_, resized_masks, _ = image_resizer_fn(image, masks)
if use_bfloat16:
resized_masks = tf.cast(resized_masks, tf.bfloat16)
out_tensor_dict[
input_fields.groundtruth_instance_masks] = resized_masks
zero_indexed_groundtruth_classes = out_tensor_dict[
input_fields.groundtruth_classes] - _LABEL_OFFSET
if use_multiclass_scores:
out_tensor_dict[
input_fields.groundtruth_classes] = out_tensor_dict[
input_fields.multiclass_scores]
else:
out_tensor_dict[input_fields.groundtruth_classes] = tf.one_hot(
zero_indexed_groundtruth_classes, num_classes)
out_tensor_dict.pop(input_fields.multiclass_scores, None)
if input_fields.groundtruth_confidences in out_tensor_dict:
groundtruth_confidences = out_tensor_dict[
input_fields.groundtruth_confidences]
# Map the confidences to the one-hot encoding of classes
out_tensor_dict[input_fields.groundtruth_confidences] = (
tf.reshape(groundtruth_confidences, [-1, 1]) *
out_tensor_dict[input_fields.groundtruth_classes])
else:
groundtruth_confidences = tf.ones_like(
zero_indexed_groundtruth_classes, dtype=tf.float32)
out_tensor_dict[input_fields.groundtruth_confidences] = (
out_tensor_dict[input_fields.groundtruth_classes])
if merge_multiple_boxes:
merged_boxes, merged_classes, merged_confidences, _ = (
util_ops.merge_boxes_with_multiple_labels(
out_tensor_dict[input_fields.groundtruth_boxes],
zero_indexed_groundtruth_classes,
groundtruth_confidences,
num_classes))
merged_classes = tf.cast(merged_classes, tf.float32)
out_tensor_dict[input_fields.groundtruth_boxes] = merged_boxes
out_tensor_dict[input_fields.groundtruth_classes] = merged_classes
out_tensor_dict[input_fields.groundtruth_confidences] = (
merged_confidences)
if input_fields.groundtruth_boxes in out_tensor_dict:
out_tensor_dict[input_fields.num_groundtruth_boxes] = tf.shape(
out_tensor_dict[input_fields.groundtruth_boxes])[0]
return out_tensor_dict
def pad_input_data_to_static_shapes(tensor_dict,
max_num_boxes,
num_classes,
spatial_image_shape=None,
max_num_context_features=None,
context_feature_length=None,
max_dp_points=336):
"""Pads input tensors to static shapes.
In case num_additional_channels > 0, we assume that the additional channels
have already been concatenated to the base image.
Args:
tensor_dict: Tensor dictionary of input data
max_num_boxes: Max number of groundtruth boxes needed to compute shapes for
padding.
num_classes: Number of classes in the dataset needed to compute shapes for
padding.
spatial_image_shape: A list of two integers of the form [height, width]
containing expected spatial shape of the image.
    max_num_context_features (optional): The maximum number of context
      features needed to compute padding shapes.
    context_feature_length (optional): The length of the context feature.
max_dp_points (optional): The maximum number of DensePose sampled points per
instance. The default (336) is selected since the original DensePose paper
(https://arxiv.org/pdf/1802.00434.pdf) indicates that the maximum number
of samples per part is 14, and therefore 24 * 14 = 336 is the maximum
      number of sampled points per instance.
  Returns:
    A dictionary keyed by fields.InputDataFields containing the input tensors
    padded (or clipped) to static shapes.
Raises:
ValueError: If groundtruth classes is neither rank 1 nor rank 2, or if we
detect that additional channels have not been concatenated yet, or if
max_num_context_features is not specified and context_features is in the
tensor dict.
"""
if not spatial_image_shape or spatial_image_shape == [-1, -1]:
height, width = None, None
else:
height, width = spatial_image_shape # pylint: disable=unpacking-non-sequence
input_fields = fields.InputDataFields
num_additional_channels = 0
if input_fields.image_additional_channels in tensor_dict:
num_additional_channels = shape_utils.get_dim_as_int(tensor_dict[
input_fields.image_additional_channels].shape[2])
# We assume that if num_additional_channels > 0, then it has already been
# concatenated to the base image (but not the ground truth).
num_channels = 3
if input_fields.image in tensor_dict:
num_channels = shape_utils.get_dim_as_int(
tensor_dict[input_fields.image].shape[2])
if num_additional_channels:
if num_additional_channels >= num_channels:
raise ValueError(
'Image must be already concatenated with additional channels.')
if (input_fields.original_image in tensor_dict and
shape_utils.get_dim_as_int(
tensor_dict[input_fields.original_image].shape[2]) ==
num_channels):
raise ValueError(
'Image must be already concatenated with additional channels.')
if input_fields.context_features in tensor_dict and (
max_num_context_features is None):
raise ValueError('max_num_context_features must be specified in the model '
'config if include_context is specified in the input '
'config')
padding_shapes = {
input_fields.image: [height, width, num_channels],
input_fields.original_image_spatial_shape: [2],
input_fields.image_additional_channels: [
height, width, num_additional_channels
],
input_fields.source_id: [],
input_fields.filename: [],
input_fields.key: [],
input_fields.groundtruth_difficult: [max_num_boxes],
input_fields.groundtruth_boxes: [max_num_boxes, 4],
input_fields.groundtruth_classes: [max_num_boxes, num_classes],
input_fields.groundtruth_instance_masks: [
max_num_boxes, height, width
],
input_fields.groundtruth_instance_mask_weights: [max_num_boxes],
input_fields.groundtruth_is_crowd: [max_num_boxes],
input_fields.groundtruth_group_of: [max_num_boxes],
input_fields.groundtruth_area: [max_num_boxes],
input_fields.groundtruth_weights: [max_num_boxes],
input_fields.groundtruth_confidences: [
max_num_boxes, num_classes
],
input_fields.num_groundtruth_boxes: [],
input_fields.groundtruth_label_types: [max_num_boxes],
input_fields.groundtruth_label_weights: [max_num_boxes],
input_fields.true_image_shape: [3],
input_fields.groundtruth_image_classes: [num_classes],
input_fields.groundtruth_image_confidences: [num_classes],
input_fields.groundtruth_labeled_classes: [num_classes],
}
if input_fields.original_image in tensor_dict:
padding_shapes[input_fields.original_image] = [
height, width,
shape_utils.get_dim_as_int(tensor_dict[input_fields.
original_image].shape[2])
]
if input_fields.groundtruth_keypoints in tensor_dict:
tensor_shape = (
tensor_dict[input_fields.groundtruth_keypoints].shape)
padding_shape = [max_num_boxes,
shape_utils.get_dim_as_int(tensor_shape[1]),
shape_utils.get_dim_as_int(tensor_shape[2])]
padding_shapes[input_fields.groundtruth_keypoints] = padding_shape
if input_fields.groundtruth_keypoint_visibilities in tensor_dict:
tensor_shape = tensor_dict[input_fields.
groundtruth_keypoint_visibilities].shape
padding_shape = [max_num_boxes, shape_utils.get_dim_as_int(tensor_shape[1])]
padding_shapes[input_fields.
groundtruth_keypoint_visibilities] = padding_shape
if fields.InputDataFields.groundtruth_keypoint_depths in tensor_dict:
tensor_shape = tensor_dict[fields.InputDataFields.
groundtruth_keypoint_depths].shape
padding_shape = [max_num_boxes, shape_utils.get_dim_as_int(tensor_shape[1])]
padding_shapes[fields.InputDataFields.
groundtruth_keypoint_depths] = padding_shape
padding_shapes[fields.InputDataFields.
groundtruth_keypoint_depth_weights] = padding_shape
if input_fields.groundtruth_keypoint_weights in tensor_dict:
tensor_shape = (
tensor_dict[input_fields.groundtruth_keypoint_weights].shape)
padding_shape = [max_num_boxes, shape_utils.get_dim_as_int(tensor_shape[1])]
padding_shapes[input_fields.
groundtruth_keypoint_weights] = padding_shape
if input_fields.groundtruth_dp_num_points in tensor_dict:
padding_shapes[
input_fields.groundtruth_dp_num_points] = [max_num_boxes]
padding_shapes[
input_fields.groundtruth_dp_part_ids] = [
max_num_boxes, max_dp_points]
padding_shapes[
input_fields.groundtruth_dp_surface_coords] = [
max_num_boxes, max_dp_points, 4]
if input_fields.groundtruth_track_ids in tensor_dict:
padding_shapes[
input_fields.groundtruth_track_ids] = [max_num_boxes]
if input_fields.groundtruth_verified_neg_classes in tensor_dict:
padding_shapes[
input_fields.groundtruth_verified_neg_classes] = [num_classes]
if input_fields.groundtruth_not_exhaustive_classes in tensor_dict:
padding_shapes[
input_fields.groundtruth_not_exhaustive_classes] = [num_classes]
# Prepare for ContextRCNN related fields.
if input_fields.context_features in tensor_dict:
padding_shape = [max_num_context_features, context_feature_length]
padding_shapes[input_fields.context_features] = padding_shape
tensor_shape = tf.shape(
tensor_dict[fields.InputDataFields.context_features])
tensor_dict[fields.InputDataFields.valid_context_size] = tensor_shape[0]
padding_shapes[fields.InputDataFields.valid_context_size] = []
if fields.InputDataFields.context_feature_length in tensor_dict:
padding_shapes[fields.InputDataFields.context_feature_length] = []
if fields.InputDataFields.context_features_image_id_list in tensor_dict:
padding_shapes[fields.InputDataFields.context_features_image_id_list] = [
max_num_context_features]
if input_fields.is_annotated in tensor_dict:
padding_shapes[input_fields.is_annotated] = []
padded_tensor_dict = {}
for tensor_name in tensor_dict:
padded_tensor_dict[tensor_name] = shape_utils.pad_or_clip_nd(
tensor_dict[tensor_name], padding_shapes[tensor_name])
# Make sure that the number of groundtruth boxes now reflects the
# padded/clipped tensors.
if input_fields.num_groundtruth_boxes in padded_tensor_dict:
padded_tensor_dict[input_fields.num_groundtruth_boxes] = (
tf.minimum(
padded_tensor_dict[input_fields.num_groundtruth_boxes],
max_num_boxes))
return padded_tensor_dict
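
# Illustrative call (hypothetical sizes): pad every tensor to static shapes
# suitable for TPU, e.g. at most 100 boxes, 90 classes, and 640x640 images:
#   padded = pad_input_data_to_static_shapes(
#       tensor_dict, max_num_boxes=100, num_classes=90,
#       spatial_image_shape=[640, 640])
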
def augment_input_data(tensor_dict, data_augmentation_options):
"""Applies data augmentation ops to input tensors.
Args:
tensor_dict: A dictionary of input tensors keyed by fields.InputDataFields.
data_augmentation_options: A list of tuples, where each tuple contains a
function and a dictionary that contains arguments and their values.
Usually, this is the output of core/preprocessor.build.
Returns:
A dictionary of tensors obtained by applying data augmentation ops to the
input tensor dictionary.
"""
tensor_dict[fields.InputDataFields.image] = tf.expand_dims(
tf.cast(tensor_dict[fields.InputDataFields.image], dtype=tf.float32), 0)
include_instance_masks = (fields.InputDataFields.groundtruth_instance_masks
in tensor_dict)
include_instance_mask_weights = (
fields.InputDataFields.groundtruth_instance_mask_weights in tensor_dict)
include_keypoints = (fields.InputDataFields.groundtruth_keypoints
in tensor_dict)
include_keypoint_visibilities = (
fields.InputDataFields.groundtruth_keypoint_visibilities in tensor_dict)
include_keypoint_depths = (
fields.InputDataFields.groundtruth_keypoint_depths in tensor_dict)
include_label_weights = (fields.InputDataFields.groundtruth_weights
in tensor_dict)
include_label_confidences = (fields.InputDataFields.groundtruth_confidences
in tensor_dict)
include_multiclass_scores = (fields.InputDataFields.multiclass_scores in
tensor_dict)
dense_pose_fields = [fields.InputDataFields.groundtruth_dp_num_points,
fields.InputDataFields.groundtruth_dp_part_ids,
fields.InputDataFields.groundtruth_dp_surface_coords]
include_dense_pose = all(field in tensor_dict for field in dense_pose_fields)
tensor_dict = preprocessor.preprocess(
tensor_dict, data_augmentation_options,
func_arg_map=preprocessor.get_default_func_arg_map(
include_label_weights=include_label_weights,
include_label_confidences=include_label_confidences,
include_multiclass_scores=include_multiclass_scores,
include_instance_masks=include_instance_masks,
include_instance_mask_weights=include_instance_mask_weights,
include_keypoints=include_keypoints,
include_keypoint_visibilities=include_keypoint_visibilities,
include_dense_pose=include_dense_pose,
include_keypoint_depths=include_keypoint_depths))
tensor_dict[fields.InputDataFields.image] = tf.squeeze(
tensor_dict[fields.InputDataFields.image], axis=0)
return tensor_dict
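
# Illustrative call (hypothetical augmentation choice): apply a random
# horizontal flip, expressed as (function, kwargs) pairs:
#   options = [(preprocessor.random_horizontal_flip, {})]
#   augmented_dict = augment_input_data(tensor_dict, options)
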
def _get_labels_dict(input_dict):
"""Extracts labels dict from input dict."""
required_label_keys = [
fields.InputDataFields.num_groundtruth_boxes,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
fields.InputDataFields.groundtruth_weights,
]
labels_dict = {}
for key in required_label_keys:
labels_dict[key] = input_dict[key]
optional_label_keys = [
fields.InputDataFields.groundtruth_confidences,
fields.InputDataFields.groundtruth_labeled_classes,
fields.InputDataFields.groundtruth_keypoints,
fields.InputDataFields.groundtruth_keypoint_depths,
fields.InputDataFields.groundtruth_keypoint_depth_weights,
fields.InputDataFields.groundtruth_instance_masks,
fields.InputDataFields.groundtruth_instance_mask_weights,
fields.InputDataFields.groundtruth_area,
fields.InputDataFields.groundtruth_is_crowd,
fields.InputDataFields.groundtruth_group_of,
fields.InputDataFields.groundtruth_difficult,
fields.InputDataFields.groundtruth_keypoint_visibilities,
fields.InputDataFields.groundtruth_keypoint_weights,
fields.InputDataFields.groundtruth_dp_num_points,
fields.InputDataFields.groundtruth_dp_part_ids,
fields.InputDataFields.groundtruth_dp_surface_coords,
fields.InputDataFields.groundtruth_track_ids,
fields.InputDataFields.groundtruth_verified_neg_classes,
fields.InputDataFields.groundtruth_not_exhaustive_classes
]
for key in optional_label_keys:
if key in input_dict:
labels_dict[key] = input_dict[key]
if fields.InputDataFields.groundtruth_difficult in labels_dict:
labels_dict[fields.InputDataFields.groundtruth_difficult] = tf.cast(
labels_dict[fields.InputDataFields.groundtruth_difficult], tf.int32)
return labels_dict
def _replace_empty_string_with_random_number(string_tensor):
"""Returns string unchanged if non-empty, and random string tensor otherwise.
  The random string is an integer between 0 and 2**63 - 1, cast as a string.
Args:
string_tensor: A tf.tensor of dtype string.
Returns:
out_string: A tf.tensor of dtype string. If string_tensor contains the empty
string, out_string will contain a random integer casted to a string.
Otherwise string_tensor is returned unchanged.
"""
empty_string = tf.constant('', dtype=tf.string, name='EmptyString')
random_source_id = tf.as_string(
tf.random_uniform(shape=[], maxval=2**63 - 1, dtype=tf.int64))
out_string = tf.cond(
tf.equal(string_tensor, empty_string),
true_fn=lambda: random_source_id,
false_fn=lambda: string_tensor)
return out_string
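
# Illustrative behavior: an empty source id is replaced by a random integer
# id, so _replace_empty_string_with_random_number(tf.constant('')) evaluates
# to a random digit string, while tf.constant('img_1') is returned unchanged.
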
def _get_features_dict(input_dict, include_source_id=False):
"""Extracts features dict from input dict."""
source_id = _replace_empty_string_with_random_number(
input_dict[fields.InputDataFields.source_id])
hash_from_source_id = tf.string_to_hash_bucket_fast(source_id, HASH_BINS)
features = {
fields.InputDataFields.image:
input_dict[fields.InputDataFields.image],
HASH_KEY: tf.cast(hash_from_source_id, tf.int32),
fields.InputDataFields.true_image_shape:
input_dict[fields.InputDataFields.true_image_shape],
fields.InputDataFields.original_image_spatial_shape:
input_dict[fields.InputDataFields.original_image_spatial_shape]
}
if include_source_id:
features[fields.InputDataFields.source_id] = source_id
if fields.InputDataFields.original_image in input_dict:
features[fields.InputDataFields.original_image] = input_dict[
fields.InputDataFields.original_image]
if fields.InputDataFields.image_additional_channels in input_dict:
features[fields.InputDataFields.image_additional_channels] = input_dict[
fields.InputDataFields.image_additional_channels]
if fields.InputDataFields.context_features in input_dict:
features[fields.InputDataFields.context_features] = input_dict[
fields.InputDataFields.context_features]
if fields.InputDataFields.valid_context_size in input_dict:
features[fields.InputDataFields.valid_context_size] = input_dict[
fields.InputDataFields.valid_context_size]
if fields.InputDataFields.context_features_image_id_list in input_dict:
features[fields.InputDataFields.context_features_image_id_list] = (
input_dict[fields.InputDataFields.context_features_image_id_list])
return features
def create_train_input_fn(train_config, train_input_config,
model_config):
"""Creates a train `input` function for `Estimator`.
Args:
train_config: A train_pb2.TrainConfig.
train_input_config: An input_reader_pb2.InputReader.
model_config: A model_pb2.DetectionModel.
Returns:
`input_fn` for `Estimator` in TRAIN mode.
"""
def _train_input_fn(params=None):
return train_input(train_config, train_input_config, model_config,
params=params)
return _train_input_fn
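
# Illustrative wiring (hypothetical `configs` and `estimator` objects; the
# keys match those produced by config_util.get_configs_from_pipeline_file):
#   train_input_fn = create_train_input_fn(
#       configs['train_config'], configs['train_input_config'],
#       configs['model'])
#   estimator.train(input_fn=train_input_fn, max_steps=train_steps)
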
def train_input(train_config, train_input_config,
model_config, model=None, params=None, input_context=None):
"""Returns `features` and `labels` tensor dictionaries for training.
Args:
train_config: A train_pb2.TrainConfig.
train_input_config: An input_reader_pb2.InputReader.
model_config: A model_pb2.DetectionModel.
model: A pre-constructed Detection Model.
If None, one will be created from the config.
params: Parameter dictionary passed from the estimator.
input_context: optional, A tf.distribute.InputContext object used to
shard filenames and compute per-replica batch_size when this function
is being called per-replica.
Returns:
A tf.data.Dataset that holds (features, labels) tuple.
features: Dictionary of feature tensors.
features[fields.InputDataFields.image] is a [batch_size, H, W, C]
float32 tensor with preprocessed images.
features[HASH_KEY] is a [batch_size] int32 tensor representing unique
identifiers for the images.
features[fields.InputDataFields.true_image_shape] is a [batch_size, 3]
int32 tensor representing the true image shapes, as preprocessed
images could be padded.
features[fields.InputDataFields.original_image] (optional) is a
[batch_size, H, W, C] float32 tensor with original images.
labels: Dictionary of groundtruth tensors.
labels[fields.InputDataFields.num_groundtruth_boxes] is a [batch_size]
int32 tensor indicating the number of groundtruth boxes.
labels[fields.InputDataFields.groundtruth_boxes] is a
[batch_size, num_boxes, 4] float32 tensor containing the corners of
the groundtruth boxes.
labels[fields.InputDataFields.groundtruth_classes] is a
[batch_size, num_boxes, num_classes] float32 one-hot tensor of
classes.
labels[fields.InputDataFields.groundtruth_weights] is a
[batch_size, num_boxes] float32 tensor containing groundtruth weights
for the boxes.
-- Optional --
labels[fields.InputDataFields.groundtruth_instance_masks] is a
[batch_size, num_boxes, H, W] float32 tensor containing only binary
values, which represent instance masks for objects.
labels[fields.InputDataFields.groundtruth_instance_mask_weights] is a
[batch_size, num_boxes] float32 tensor containing groundtruth weights
for each instance mask.
labels[fields.InputDataFields.groundtruth_keypoints] is a
[batch_size, num_boxes, num_keypoints, 2] float32 tensor containing
keypoints for each box.
      labels[fields.InputDataFields.groundtruth_keypoint_weights] is a
        [batch_size, num_boxes, num_keypoints] float32 tensor containing
        groundtruth weights for the keypoints.
      labels[fields.InputDataFields.groundtruth_keypoint_visibilities] is a
        [batch_size, num_boxes, num_keypoints] bool tensor containing
        groundtruth visibilities for each keypoint.
labels[fields.InputDataFields.groundtruth_labeled_classes] is a
[batch_size, num_classes] float32 k-hot tensor of classes.
labels[fields.InputDataFields.groundtruth_dp_num_points] is a
[batch_size, num_boxes] int32 tensor with the number of sampled
DensePose points per object.
labels[fields.InputDataFields.groundtruth_dp_part_ids] is a
[batch_size, num_boxes, max_sampled_points] int32 tensor with the
DensePose part ids (0-indexed) per object.
labels[fields.InputDataFields.groundtruth_dp_surface_coords] is a
[batch_size, num_boxes, max_sampled_points, 4] float32 tensor with the
DensePose surface coordinates. The format is (y, x, v, u), where (y, x)
are normalized image coordinates and (v, u) are normalized surface part
coordinates.
labels[fields.InputDataFields.groundtruth_track_ids] is a
[batch_size, num_boxes] int32 tensor with the track ID for each object.
Raises:
TypeError: if the `train_config`, `train_input_config` or `model_config`
are not of the correct type.
"""
if not isinstance(train_config, train_pb2.TrainConfig):
raise TypeError('For training mode, the `train_config` must be a '
'train_pb2.TrainConfig.')
if not isinstance(train_input_config, input_reader_pb2.InputReader):
raise TypeError('The `train_input_config` must be a '
'input_reader_pb2.InputReader.')
if not isinstance(model_config, model_pb2.DetectionModel):
raise TypeError('The `model_config` must be a '
'model_pb2.DetectionModel.')
if model is None:
model_preprocess_fn = INPUT_BUILDER_UTIL_MAP['model_build'](
model_config, is_training=True).preprocess
else:
model_preprocess_fn = model.preprocess
num_classes = config_util.get_number_of_classes(model_config)
def transform_and_pad_input_data_fn(tensor_dict):
"""Combines transform and pad operation."""
data_augmentation_options = [
preprocessor_builder.build(step)
for step in train_config.data_augmentation_options
]
data_augmentation_fn = functools.partial(
augment_input_data,
data_augmentation_options=data_augmentation_options)
image_resizer_config = config_util.get_image_resizer_config(model_config)
image_resizer_fn = image_resizer_builder.build(image_resizer_config)
keypoint_type_weight = train_input_config.keypoint_type_weight or None
transform_data_fn = functools.partial(
transform_input_data, model_preprocess_fn=model_preprocess_fn,
image_resizer_fn=image_resizer_fn,
num_classes=num_classes,
data_augmentation_fn=data_augmentation_fn,
merge_multiple_boxes=train_config.merge_multiple_label_boxes,
retain_original_image=train_config.retain_original_images,
use_multiclass_scores=train_config.use_multiclass_scores,
use_bfloat16=train_config.use_bfloat16,
keypoint_type_weight=keypoint_type_weight)
tensor_dict = pad_input_data_to_static_shapes(
tensor_dict=transform_data_fn(tensor_dict),
max_num_boxes=train_input_config.max_number_of_boxes,
num_classes=num_classes,
spatial_image_shape=config_util.get_spatial_image_size(
image_resizer_config),
max_num_context_features=config_util.get_max_num_context_features(
model_config),
context_feature_length=config_util.get_context_feature_length(
model_config))
include_source_id = train_input_config.include_source_id
return (_get_features_dict(tensor_dict, include_source_id),
_get_labels_dict(tensor_dict))
reduce_to_frame_fn = get_reduce_to_frame_fn(train_input_config, True)
dataset = INPUT_BUILDER_UTIL_MAP['dataset_build'](
train_input_config,
transform_input_data_fn=transform_and_pad_input_data_fn,
batch_size=params['batch_size'] if params else train_config.batch_size,
input_context=input_context,
reduce_to_frame_fn=reduce_to_frame_fn)
return dataset
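
# Illustrative usage (hypothetical configs; TF1 graph mode assumed): the
# returned tf.data.Dataset yields (features, labels) tuples, e.g.
#   dataset = train_input(train_config, train_input_config, model_config)
#   features, labels = tf.data.make_one_shot_iterator(dataset).get_next()
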
def create_eval_input_fn(eval_config, eval_input_config, model_config):
"""Creates an eval `input` function for `Estimator`.
Args:
eval_config: An eval_pb2.EvalConfig.
eval_input_config: An input_reader_pb2.InputReader.
model_config: A model_pb2.DetectionModel.
Returns:
`input_fn` for `Estimator` in EVAL mode.
"""
def _eval_input_fn(params=None):
return eval_input(eval_config, eval_input_config, model_config,
params=params)
return _eval_input_fn
def eval_input(eval_config, eval_input_config, model_config,
model=None, params=None, input_context=None):
"""Returns `features` and `labels` tensor dictionaries for evaluation.
Args:
eval_config: An eval_pb2.EvalConfig.
eval_input_config: An input_reader_pb2.InputReader.
model_config: A model_pb2.DetectionModel.
model: A pre-constructed Detection Model.
If None, one will be created from the config.
params: Parameter dictionary passed from the estimator.
input_context: optional, A tf.distribute.InputContext object used to
shard filenames and compute per-replica batch_size when this function
is being called per-replica.
Returns:
A tf.data.Dataset that holds (features, labels) tuple.
features: Dictionary of feature tensors.
features[fields.InputDataFields.image] is a [1, H, W, C] float32 tensor
with preprocessed images.
features[HASH_KEY] is a [1] int32 tensor representing unique
identifiers for the images.
features[fields.InputDataFields.true_image_shape] is a [1, 3]
int32 tensor representing the true image shapes, as preprocessed
images could be padded.
features[fields.InputDataFields.original_image] is a [1, H', W', C]
float32 tensor with the original image.
labels: Dictionary of groundtruth tensors.
labels[fields.InputDataFields.groundtruth_boxes] is a [1, num_boxes, 4]
float32 tensor containing the corners of the groundtruth boxes.
labels[fields.InputDataFields.groundtruth_classes] is a
[num_boxes, num_classes] float32 one-hot tensor of classes.
labels[fields.InputDataFields.groundtruth_area] is a [1, num_boxes]
float32 tensor containing object areas.
labels[fields.InputDataFields.groundtruth_is_crowd] is a [1, num_boxes]
bool tensor indicating if the boxes enclose a crowd.
labels[fields.InputDataFields.groundtruth_difficult] is a [1, num_boxes]
int32 tensor indicating if the boxes represent difficult instances.
-- Optional --
labels[fields.InputDataFields.groundtruth_instance_masks] is a
[1, num_boxes, H, W] float32 tensor containing only binary values,
which represent instance masks for objects.
labels[fields.InputDataFields.groundtruth_instance_mask_weights] is a
[1, num_boxes] float32 tensor containing groundtruth weights for each
instance mask.
      labels[fields.InputDataFields.groundtruth_keypoint_weights] is a
        [batch_size, num_boxes, num_keypoints] float32 tensor containing
        groundtruth weights for the keypoints.
      labels[fields.InputDataFields.groundtruth_keypoint_visibilities] is a
        [batch_size, num_boxes, num_keypoints] bool tensor containing
        groundtruth visibilities for each keypoint.
labels[fields.InputDataFields.groundtruth_group_of] is a [1, num_boxes]
bool tensor indicating if the box covers more than 5 instances of the
same class which heavily occlude each other.
labels[fields.InputDataFields.groundtruth_labeled_classes] is a
[num_boxes, num_classes] float32 k-hot tensor of classes.
labels[fields.InputDataFields.groundtruth_dp_num_points] is a
[batch_size, num_boxes] int32 tensor with the number of sampled
DensePose points per object.
labels[fields.InputDataFields.groundtruth_dp_part_ids] is a
[batch_size, num_boxes, max_sampled_points] int32 tensor with the
DensePose part ids (0-indexed) per object.
labels[fields.InputDataFields.groundtruth_dp_surface_coords] is a
[batch_size, num_boxes, max_sampled_points, 4] float32 tensor with the
DensePose surface coordinates. The format is (y, x, v, u), where (y, x)
are normalized image coordinates and (v, u) are normalized surface part
coordinates.
labels[fields.InputDataFields.groundtruth_track_ids] is a
[batch_size, num_boxes] int32 tensor with the track ID for each object.
Raises:
TypeError: if the `eval_config`, `eval_input_config` or `model_config`
are not of the correct type.
"""
params = params or {}
if not isinstance(eval_config, eval_pb2.EvalConfig):
raise TypeError('For eval mode, the `eval_config` must be a '
                    'eval_pb2.EvalConfig.')
if not isinstance(eval_input_config, input_reader_pb2.InputReader):
raise TypeError('The `eval_input_config` must be a '
'input_reader_pb2.InputReader.')
if not isinstance(model_config, model_pb2.DetectionModel):
raise TypeError('The `model_config` must be a '
'model_pb2.DetectionModel.')
if eval_config.force_no_resize:
arch = model_config.WhichOneof('model')
arch_config = getattr(model_config, arch)
image_resizer_proto = image_resizer_pb2.ImageResizer()
image_resizer_proto.identity_resizer.CopyFrom(
image_resizer_pb2.IdentityResizer())
arch_config.image_resizer.CopyFrom(image_resizer_proto)
if model is None:
model_preprocess_fn = INPUT_BUILDER_UTIL_MAP['model_build'](
model_config, is_training=False).preprocess
else:
model_preprocess_fn = model.preprocess
def transform_and_pad_input_data_fn(tensor_dict):
"""Combines transform and pad operation."""
num_classes = config_util.get_number_of_classes(model_config)
image_resizer_config = config_util.get_image_resizer_config(model_config)
image_resizer_fn = image_resizer_builder.build(image_resizer_config)
keypoint_type_weight = eval_input_config.keypoint_type_weight or None
transform_data_fn = functools.partial(
transform_input_data, model_preprocess_fn=model_preprocess_fn,
image_resizer_fn=image_resizer_fn,
num_classes=num_classes,
data_augmentation_fn=None,
retain_original_image=eval_config.retain_original_images,
retain_original_image_additional_channels=
eval_config.retain_original_image_additional_channels,
keypoint_type_weight=keypoint_type_weight)
tensor_dict = pad_input_data_to_static_shapes(
tensor_dict=transform_data_fn(tensor_dict),
max_num_boxes=eval_input_config.max_number_of_boxes,
num_classes=config_util.get_number_of_classes(model_config),
spatial_image_shape=config_util.get_spatial_image_size(
image_resizer_config),
max_num_context_features=config_util.get_max_num_context_features(
model_config),
context_feature_length=config_util.get_context_feature_length(
model_config))
include_source_id = eval_input_config.include_source_id
return (_get_features_dict(tensor_dict, include_source_id),
_get_labels_dict(tensor_dict))
reduce_to_frame_fn = get_reduce_to_frame_fn(eval_input_config, False)
dataset = INPUT_BUILDER_UTIL_MAP['dataset_build'](
eval_input_config,
batch_size=params['batch_size'] if params else eval_config.batch_size,
transform_input_data_fn=transform_and_pad_input_data_fn,
input_context=input_context,
reduce_to_frame_fn=reduce_to_frame_fn)
return dataset
def create_predict_input_fn(model_config, predict_input_config):
"""Creates a predict `input` function for `Estimator`.
Args:
model_config: A model_pb2.DetectionModel.
predict_input_config: An input_reader_pb2.InputReader.
Returns:
`input_fn` for `Estimator` in PREDICT mode.
"""
def _predict_input_fn(params=None):
"""Decodes serialized tf.Examples and returns `ServingInputReceiver`.
Args:
params: Parameter dictionary passed from the estimator.
Returns:
`ServingInputReceiver`.
"""
del params
example = tf.placeholder(dtype=tf.string, shape=[], name='tf_example')
num_classes = config_util.get_number_of_classes(model_config)
model_preprocess_fn = INPUT_BUILDER_UTIL_MAP['model_build'](
model_config, is_training=False).preprocess
image_resizer_config = config_util.get_image_resizer_config(model_config)
image_resizer_fn = image_resizer_builder.build(image_resizer_config)
transform_fn = functools.partial(
transform_input_data, model_preprocess_fn=model_preprocess_fn,
image_resizer_fn=image_resizer_fn,
num_classes=num_classes,
data_augmentation_fn=None)
decoder = tf_example_decoder.TfExampleDecoder(
load_instance_masks=False,
num_additional_channels=predict_input_config.num_additional_channels)
input_dict = transform_fn(decoder.decode(example))
images = tf.cast(input_dict[fields.InputDataFields.image], dtype=tf.float32)
images = tf.expand_dims(images, axis=0)
true_image_shape = tf.expand_dims(
input_dict[fields.InputDataFields.true_image_shape], axis=0)
return tf.estimator.export.ServingInputReceiver(
features={
fields.InputDataFields.image: images,
fields.InputDataFields.true_image_shape: true_image_shape},
receiver_tensors={SERVING_FED_EXAMPLE_KEY: example})
return _predict_input_fn
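
# Illustrative usage (hypothetical paths and objects): the returned function
# serves as the serving_input_receiver_fn when exporting a SavedModel, e.g.
#   predict_input_fn = create_predict_input_fn(model_config,
#                                              predict_input_config)
#   estimator.export_saved_model('/tmp/export', predict_input_fn)
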
def get_reduce_to_frame_fn(input_reader_config, is_training):
"""Returns a function reducing sequence tensors to single frame tensors.
If the input type is not TF_SEQUENCE_EXAMPLE, the tensors are passed through
this function unchanged. Otherwise, when in training mode, a single frame is
selected at random from the sequence example, and the tensors for that frame
are converted to single frame tensors, with all associated context features.
In evaluation mode all frames are converted to single frame tensors with
copied context tensors. After the sequence example tensors are converted into
one or many single frame tensors, the images from each frame are decoded.
Args:
input_reader_config: An input_reader_pb2.InputReader.
is_training: Whether we are in training mode.
Returns:
`reduce_to_frame_fn` for the dataset builder
"""
if input_reader_config.input_type != (
input_reader_pb2.InputType.Value('TF_SEQUENCE_EXAMPLE')):
return lambda dataset, dataset_map_fn, batch_size, config: dataset
else:
def reduce_to_frame(dataset, dataset_map_fn, batch_size,
input_reader_config):
"""Returns a function reducing sequence tensors to single frame tensors.
Args:
dataset: A tf dataset containing sequence tensors.
dataset_map_fn: A function that handles whether to
map_with_legacy_function for this dataset
batch_size: used if map_with_legacy_function is true to determine
num_parallel_calls
input_reader_config: used if map_with_legacy_function is true to
determine num_parallel_calls
Returns:
A tf dataset containing single frame tensors.
"""
if is_training:
def get_single_frame(tensor_dict):
"""Returns a random frame from a sequence.
Picks a random frame and returns slices of sequence tensors
corresponding to the random frame. Returns non-sequence tensors
unchanged.
Args:
tensor_dict: A dictionary containing sequence tensors.
Returns:
Tensors for a single random frame within the sequence.
"""
num_frames = tf.cast(
tf.shape(tensor_dict[fields.InputDataFields.source_id])[0],
dtype=tf.int32)
if input_reader_config.frame_index == -1:
frame_index = tf.random.uniform((), minval=0, maxval=num_frames,
dtype=tf.int32)
else:
frame_index = tf.constant(input_reader_config.frame_index,
dtype=tf.int32)
out_tensor_dict = {}
for key in tensor_dict:
if key in fields.SEQUENCE_FIELDS:
# Slice random frame from sequence tensors
out_tensor_dict[key] = tensor_dict[key][frame_index]
else:
# Copy all context tensors.
out_tensor_dict[key] = tensor_dict[key]
return out_tensor_dict
dataset = dataset_map_fn(dataset, get_single_frame, batch_size,
input_reader_config)
else:
dataset = dataset_map_fn(dataset, util_ops.tile_context_tensors,
batch_size, input_reader_config)
dataset = dataset.unbatch()
# Decode frame here as SequenceExample tensors contain encoded images.
dataset = dataset_map_fn(dataset, util_ops.decode_image, batch_size,
input_reader_config)
return dataset
    return reduce_to_frame

# ==== end of object_detection/inputs.py ====
# ==== object_detection/model_lib.py ====
r"""Constructs model, inputs, and training environment."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import functools
import os
import tensorflow.compat.v1 as tf
import tensorflow.compat.v2 as tf2
import tf_slim as slim
from object_detection import eval_util
from object_detection import exporter as exporter_lib
from object_detection import inputs
from object_detection.builders import graph_rewriter_builder
from object_detection.builders import model_builder
from object_detection.builders import optimizer_builder
from object_detection.core import standard_fields as fields
from object_detection.utils import config_util
from object_detection.utils import label_map_util
from object_detection.utils import ops
from object_detection.utils import shape_utils
from object_detection.utils import variables_helper
from object_detection.utils import visualization_utils as vis_utils
# pylint: disable=g-import-not-at-top
try:
from tensorflow.contrib import learn as contrib_learn
except ImportError:
# TF 2.0 doesn't ship with contrib.
pass
# pylint: enable=g-import-not-at-top
# A map of names to methods that help build the model.
MODEL_BUILD_UTIL_MAP = {
'get_configs_from_pipeline_file':
config_util.get_configs_from_pipeline_file,
'create_pipeline_proto_from_configs':
config_util.create_pipeline_proto_from_configs,
'merge_external_params_with_configs':
config_util.merge_external_params_with_configs,
'create_train_input_fn':
inputs.create_train_input_fn,
'create_eval_input_fn':
inputs.create_eval_input_fn,
'create_predict_input_fn':
inputs.create_predict_input_fn,
'detection_model_fn_base': model_builder.build,
}
def _prepare_groundtruth_for_eval(detection_model, class_agnostic,
max_number_of_boxes):
"""Extracts groundtruth data from detection_model and prepares it for eval.
Args:
detection_model: A `DetectionModel` object.
class_agnostic: Whether the detections are class_agnostic.
max_number_of_boxes: Max number of groundtruth boxes.
Returns:
A tuple of:
groundtruth: Dictionary with the following fields:
'groundtruth_boxes': [batch_size, num_boxes, 4] float32 tensor of boxes,
in normalized coordinates.
'groundtruth_classes': [batch_size, num_boxes] int64 tensor of 1-indexed
classes.
'groundtruth_instance_masks': 4D float32 tensor of instance masks (if
provided in groundtruth).
'groundtruth_is_crowd': [batch_size, num_boxes] bool tensor indicating
is_crowd annotations (if provided in groundtruth).
'groundtruth_area': [batch_size, num_boxes] float32 tensor indicating
the area (in the original absolute coordinates) of annotations (if
provided in groundtruth).
'num_groundtruth_boxes': [batch_size] tensor containing the maximum number
of groundtruth boxes per image.
'groundtruth_keypoints': [batch_size, num_boxes, num_keypoints, 2] float32
tensor of keypoints (if provided in groundtruth).
'groundtruth_dp_num_points_list': [batch_size, num_boxes] int32 tensor
with the number of DensePose points for each instance (if provided in
groundtruth).
'groundtruth_dp_part_ids_list': [batch_size, num_boxes,
max_sampled_points] int32 tensor with the part ids for each DensePose
sampled point (if provided in groundtruth).
'groundtruth_dp_surface_coords_list': [batch_size, num_boxes,
max_sampled_points, 4] containing the DensePose surface coordinates for
each sampled point (if provided in groundtruth).
'groundtruth_track_ids_list': [batch_size, num_boxes] int32 tensor
with track ID for each instance (if provided in groundtruth).
'groundtruth_group_of': [batch_size, num_boxes] bool tensor indicating
group_of annotations (if provided in groundtruth).
'groundtruth_labeled_classes': [batch_size, num_classes] int64
tensor of 1-indexed classes.
'groundtruth_verified_neg_classes': [batch_size, num_classes] float32
K-hot representation of 1-indexed classes which were verified as not
present in the image.
'groundtruth_not_exhaustive_classes': [batch_size, num_classes] K-hot
representation of 1-indexed classes which don't have all of their
instances marked exhaustively.
class_agnostic: Boolean indicating whether detections are class agnostic.
"""
input_data_fields = fields.InputDataFields()
groundtruth_boxes = tf.stack(
detection_model.groundtruth_lists(fields.BoxListFields.boxes))
groundtruth_boxes_shape = tf.shape(groundtruth_boxes)
# For class-agnostic models, groundtruth one-hot encodings collapse to all
# ones.
if class_agnostic:
groundtruth_classes_one_hot = tf.ones(
[groundtruth_boxes_shape[0], groundtruth_boxes_shape[1], 1])
else:
groundtruth_classes_one_hot = tf.stack(
detection_model.groundtruth_lists(fields.BoxListFields.classes))
label_id_offset = 1 # Applying label id offset (b/63711816)
groundtruth_classes = (
tf.argmax(groundtruth_classes_one_hot, axis=2) + label_id_offset)
groundtruth = {
input_data_fields.groundtruth_boxes: groundtruth_boxes,
input_data_fields.groundtruth_classes: groundtruth_classes
}
if detection_model.groundtruth_has_field(fields.BoxListFields.masks):
groundtruth[input_data_fields.groundtruth_instance_masks] = tf.stack(
detection_model.groundtruth_lists(fields.BoxListFields.masks))
if detection_model.groundtruth_has_field(fields.BoxListFields.is_crowd):
groundtruth[input_data_fields.groundtruth_is_crowd] = tf.stack(
detection_model.groundtruth_lists(fields.BoxListFields.is_crowd))
if detection_model.groundtruth_has_field(input_data_fields.groundtruth_area):
groundtruth[input_data_fields.groundtruth_area] = tf.stack(
detection_model.groundtruth_lists(input_data_fields.groundtruth_area))
if detection_model.groundtruth_has_field(fields.BoxListFields.keypoints):
groundtruth[input_data_fields.groundtruth_keypoints] = tf.stack(
detection_model.groundtruth_lists(fields.BoxListFields.keypoints))
if detection_model.groundtruth_has_field(
fields.BoxListFields.keypoint_depths):
groundtruth[input_data_fields.groundtruth_keypoint_depths] = tf.stack(
detection_model.groundtruth_lists(fields.BoxListFields.keypoint_depths))
groundtruth[
input_data_fields.groundtruth_keypoint_depth_weights] = tf.stack(
detection_model.groundtruth_lists(
fields.BoxListFields.keypoint_depth_weights))
if detection_model.groundtruth_has_field(
fields.BoxListFields.keypoint_visibilities):
groundtruth[input_data_fields.groundtruth_keypoint_visibilities] = tf.stack(
detection_model.groundtruth_lists(
fields.BoxListFields.keypoint_visibilities))
if detection_model.groundtruth_has_field(fields.BoxListFields.group_of):
groundtruth[input_data_fields.groundtruth_group_of] = tf.stack(
detection_model.groundtruth_lists(fields.BoxListFields.group_of))
label_id_offset_paddings = tf.constant([[0, 0], [1, 0]])
if detection_model.groundtruth_has_field(
input_data_fields.groundtruth_verified_neg_classes):
groundtruth[input_data_fields.groundtruth_verified_neg_classes] = tf.pad(
tf.stack(detection_model.groundtruth_lists(
input_data_fields.groundtruth_verified_neg_classes)),
label_id_offset_paddings)
if detection_model.groundtruth_has_field(
input_data_fields.groundtruth_not_exhaustive_classes):
groundtruth[
input_data_fields.groundtruth_not_exhaustive_classes] = tf.pad(
tf.stack(detection_model.groundtruth_lists(
input_data_fields.groundtruth_not_exhaustive_classes)),
label_id_offset_paddings)
if detection_model.groundtruth_has_field(
fields.BoxListFields.densepose_num_points):
groundtruth[input_data_fields.groundtruth_dp_num_points] = tf.stack(
detection_model.groundtruth_lists(
fields.BoxListFields.densepose_num_points))
if detection_model.groundtruth_has_field(
fields.BoxListFields.densepose_part_ids):
groundtruth[input_data_fields.groundtruth_dp_part_ids] = tf.stack(
detection_model.groundtruth_lists(
fields.BoxListFields.densepose_part_ids))
if detection_model.groundtruth_has_field(
fields.BoxListFields.densepose_surface_coords):
groundtruth[input_data_fields.groundtruth_dp_surface_coords] = tf.stack(
detection_model.groundtruth_lists(
fields.BoxListFields.densepose_surface_coords))
if detection_model.groundtruth_has_field(fields.BoxListFields.track_ids):
groundtruth[input_data_fields.groundtruth_track_ids] = tf.stack(
detection_model.groundtruth_lists(fields.BoxListFields.track_ids))
if detection_model.groundtruth_has_field(
input_data_fields.groundtruth_labeled_classes):
groundtruth[input_data_fields.groundtruth_labeled_classes] = tf.pad(
tf.stack(
detection_model.groundtruth_lists(
input_data_fields.groundtruth_labeled_classes)),
label_id_offset_paddings)
groundtruth[input_data_fields.num_groundtruth_boxes] = (
tf.tile([max_number_of_boxes], multiples=[groundtruth_boxes_shape[0]]))
return groundtruth
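# Example (illustrative sketch, not part of the upstream module): assuming
# `detection_model` has already been fed groundtruth via provide_groundtruth
# below, the eval-ready groundtruth dict could be assembled as:
#   groundtruth = _prepare_groundtruth_for_eval(
#       detection_model, class_agnostic=False, max_number_of_boxes=100)
#   boxes = groundtruth[fields.InputDataFields.groundtruth_boxes]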
def unstack_batch(tensor_dict, unpad_groundtruth_tensors=True):
"""Unstacks all tensors in `tensor_dict` along 0th dimension.
Unstacks tensor from the tensor dict along 0th dimension and returns a
tensor_dict containing values that are lists of unstacked, unpadded tensors.
Tensors in the `tensor_dict` are expected to be of one of the three shapes:
1. [batch_size]
2. [batch_size, height, width, channels]
3. [batch_size, num_boxes, d1, d2, ... dn]
When unpad_groundtruth_tensors is set to true, unstacked tensors of form 3
above are sliced along the `num_boxes` dimension using the value in tensor
fields.InputDataFields.num_groundtruth_boxes.
Note that this function has a static list of input data fields and has to be
kept in sync with the InputDataFields defined in core/standard_fields.py
Args:
tensor_dict: A dictionary of batched groundtruth tensors.
unpad_groundtruth_tensors: Whether to remove padding along `num_boxes`
dimension of the groundtruth tensors.
Returns:
A dictionary where the keys are from fields.InputDataFields and values are
a list of unstacked (optionally unpadded) tensors.
Raises:
ValueError: If unpad_groundtruth_tensors is True and `tensor_dict` does not
contain the `num_groundtruth_boxes` tensor.
"""
unbatched_tensor_dict = {
key: tf.unstack(tensor) for key, tensor in tensor_dict.items()
}
if unpad_groundtruth_tensors:
if (fields.InputDataFields.num_groundtruth_boxes not in
unbatched_tensor_dict):
raise ValueError('`num_groundtruth_boxes` not found in tensor_dict. '
'Keys available: {}'.format(
unbatched_tensor_dict.keys()))
unbatched_unpadded_tensor_dict = {}
unpad_keys = set([
# List of input data fields that are padded along the num_boxes
# dimension. This list has to be kept in sync with InputDataFields in
# standard_fields.py.
fields.InputDataFields.groundtruth_instance_masks,
fields.InputDataFields.groundtruth_instance_mask_weights,
fields.InputDataFields.groundtruth_classes,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_keypoints,
fields.InputDataFields.groundtruth_keypoint_depths,
fields.InputDataFields.groundtruth_keypoint_depth_weights,
fields.InputDataFields.groundtruth_keypoint_visibilities,
fields.InputDataFields.groundtruth_dp_num_points,
fields.InputDataFields.groundtruth_dp_part_ids,
fields.InputDataFields.groundtruth_dp_surface_coords,
fields.InputDataFields.groundtruth_track_ids,
fields.InputDataFields.groundtruth_group_of,
fields.InputDataFields.groundtruth_difficult,
fields.InputDataFields.groundtruth_is_crowd,
fields.InputDataFields.groundtruth_area,
fields.InputDataFields.groundtruth_weights
]).intersection(set(unbatched_tensor_dict.keys()))
for key in unpad_keys:
unpadded_tensor_list = []
for num_gt, padded_tensor in zip(
unbatched_tensor_dict[fields.InputDataFields.num_groundtruth_boxes],
unbatched_tensor_dict[key]):
tensor_shape = shape_utils.combined_static_and_dynamic_shape(
padded_tensor)
slice_begin = tf.zeros([len(tensor_shape)], dtype=tf.int32)
slice_size = tf.stack(
[num_gt] + [-1 if dim is None else dim for dim in tensor_shape[1:]])
unpadded_tensor = tf.slice(padded_tensor, slice_begin, slice_size)
unpadded_tensor_list.append(unpadded_tensor)
unbatched_unpadded_tensor_dict[key] = unpadded_tensor_list
unbatched_tensor_dict.update(unbatched_unpadded_tensor_dict)
return unbatched_tensor_dict
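# Example (hedged sketch with hypothetical tensors): unstacking a batched
# label dict of batch size 2, where each image has a different number of
# valid boxes.
#   batched = {
#       fields.InputDataFields.num_groundtruth_boxes: tf.constant([1, 2]),
#       fields.InputDataFields.groundtruth_boxes:
#           tf.zeros([2, 2, 4], dtype=tf.float32),  # padded to 2 boxes
#   }
#   unbatched = unstack_batch(batched, unpad_groundtruth_tensors=True)
#   # unbatched groundtruth_boxes is a list of a [1, 4] and a [2, 4] tensor.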
def provide_groundtruth(model, labels):
"""Provides the labels to a model as groundtruth.
This helper function extracts the corresponding boxes, classes,
keypoints, weights, masks, etc. from the labels, and provides it
as groundtruth to the models.
Args:
model: The detection model to provide groundtruth to.
labels: The labels for the training or evaluation inputs.
"""
gt_boxes_list = labels[fields.InputDataFields.groundtruth_boxes]
gt_classes_list = labels[fields.InputDataFields.groundtruth_classes]
gt_masks_list = None
if fields.InputDataFields.groundtruth_instance_masks in labels:
gt_masks_list = labels[
fields.InputDataFields.groundtruth_instance_masks]
gt_mask_weights_list = None
if fields.InputDataFields.groundtruth_instance_mask_weights in labels:
gt_mask_weights_list = labels[
fields.InputDataFields.groundtruth_instance_mask_weights]
gt_keypoints_list = None
if fields.InputDataFields.groundtruth_keypoints in labels:
gt_keypoints_list = labels[fields.InputDataFields.groundtruth_keypoints]
gt_keypoint_depths_list = None
gt_keypoint_depth_weights_list = None
if fields.InputDataFields.groundtruth_keypoint_depths in labels:
gt_keypoint_depths_list = (
labels[fields.InputDataFields.groundtruth_keypoint_depths])
gt_keypoint_depth_weights_list = (
labels[fields.InputDataFields.groundtruth_keypoint_depth_weights])
gt_keypoint_visibilities_list = None
if fields.InputDataFields.groundtruth_keypoint_visibilities in labels:
gt_keypoint_visibilities_list = labels[
fields.InputDataFields.groundtruth_keypoint_visibilities]
gt_dp_num_points_list = None
if fields.InputDataFields.groundtruth_dp_num_points in labels:
gt_dp_num_points_list = labels[
fields.InputDataFields.groundtruth_dp_num_points]
gt_dp_part_ids_list = None
if fields.InputDataFields.groundtruth_dp_part_ids in labels:
gt_dp_part_ids_list = labels[
fields.InputDataFields.groundtruth_dp_part_ids]
gt_dp_surface_coords_list = None
if fields.InputDataFields.groundtruth_dp_surface_coords in labels:
gt_dp_surface_coords_list = labels[
fields.InputDataFields.groundtruth_dp_surface_coords]
gt_track_ids_list = None
if fields.InputDataFields.groundtruth_track_ids in labels:
gt_track_ids_list = labels[
fields.InputDataFields.groundtruth_track_ids]
gt_weights_list = None
if fields.InputDataFields.groundtruth_weights in labels:
gt_weights_list = labels[fields.InputDataFields.groundtruth_weights]
gt_confidences_list = None
if fields.InputDataFields.groundtruth_confidences in labels:
gt_confidences_list = labels[
fields.InputDataFields.groundtruth_confidences]
gt_is_crowd_list = None
if fields.InputDataFields.groundtruth_is_crowd in labels:
gt_is_crowd_list = labels[fields.InputDataFields.groundtruth_is_crowd]
gt_group_of_list = None
if fields.InputDataFields.groundtruth_group_of in labels:
gt_group_of_list = labels[fields.InputDataFields.groundtruth_group_of]
gt_area_list = None
if fields.InputDataFields.groundtruth_area in labels:
gt_area_list = labels[fields.InputDataFields.groundtruth_area]
gt_labeled_classes = None
if fields.InputDataFields.groundtruth_labeled_classes in labels:
gt_labeled_classes = labels[
fields.InputDataFields.groundtruth_labeled_classes]
gt_verified_neg_classes = None
if fields.InputDataFields.groundtruth_verified_neg_classes in labels:
gt_verified_neg_classes = labels[
fields.InputDataFields.groundtruth_verified_neg_classes]
gt_not_exhaustive_classes = None
if fields.InputDataFields.groundtruth_not_exhaustive_classes in labels:
gt_not_exhaustive_classes = labels[
fields.InputDataFields.groundtruth_not_exhaustive_classes]
model.provide_groundtruth(
groundtruth_boxes_list=gt_boxes_list,
groundtruth_classes_list=gt_classes_list,
groundtruth_confidences_list=gt_confidences_list,
groundtruth_labeled_classes=gt_labeled_classes,
groundtruth_masks_list=gt_masks_list,
groundtruth_mask_weights_list=gt_mask_weights_list,
groundtruth_keypoints_list=gt_keypoints_list,
groundtruth_keypoint_visibilities_list=gt_keypoint_visibilities_list,
groundtruth_dp_num_points_list=gt_dp_num_points_list,
groundtruth_dp_part_ids_list=gt_dp_part_ids_list,
groundtruth_dp_surface_coords_list=gt_dp_surface_coords_list,
groundtruth_weights_list=gt_weights_list,
groundtruth_is_crowd_list=gt_is_crowd_list,
groundtruth_group_of_list=gt_group_of_list,
groundtruth_area_list=gt_area_list,
groundtruth_track_ids_list=gt_track_ids_list,
groundtruth_verified_neg_classes=gt_verified_neg_classes,
groundtruth_not_exhaustive_classes=gt_not_exhaustive_classes,
groundtruth_keypoint_depths_list=gt_keypoint_depths_list,
groundtruth_keypoint_depth_weights_list=gt_keypoint_depth_weights_list)
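# Example (sketch under stated assumptions): a minimal labels dict, in the
# form produced by `inputs.train_input` after `unstack_batch`, handed to a
# model. `detection_model` is a placeholder for any built DetectionModel.
#   labels = {
#       fields.InputDataFields.groundtruth_boxes: [tf.zeros([3, 4])],
#       fields.InputDataFields.groundtruth_classes: [tf.one_hot([0, 1, 2], 3)],
#   }
#   provide_groundtruth(detection_model, labels)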
def create_model_fn(detection_model_fn, configs, hparams=None, use_tpu=False,
postprocess_on_cpu=False):
"""Creates a model function for `Estimator`.
Args:
detection_model_fn: Function that returns a `DetectionModel` instance.
configs: Dictionary of pipeline config objects.
hparams: `HParams` object.
use_tpu: Boolean indicating whether model should be constructed for
use on TPU.
postprocess_on_cpu: When use_tpu and postprocess_on_cpu is true, postprocess
is scheduled on the host cpu.
Returns:
`model_fn` for `Estimator`.
"""
train_config = configs['train_config']
eval_input_config = configs['eval_input_config']
eval_config = configs['eval_config']
def model_fn(features, labels, mode, params=None):
"""Constructs the object detection model.
Args:
features: Dictionary of feature tensors, returned from `input_fn`.
labels: Dictionary of groundtruth tensors if mode is TRAIN or EVAL,
otherwise None.
mode: Mode key from tf.estimator.ModeKeys.
params: Parameter dictionary passed from the estimator.
Returns:
An `EstimatorSpec` that encapsulates the model and its serving
configurations.
"""
params = params or {}
total_loss, train_op, detections, export_outputs = None, None, None, None
is_training = mode == tf.estimator.ModeKeys.TRAIN
# Make sure to set the Keras learning phase. True during training,
# False for inference.
tf.keras.backend.set_learning_phase(is_training)
# Set policy for mixed-precision training with Keras-based models.
if use_tpu and train_config.use_bfloat16:
from tensorflow.python.keras.engine import base_layer_utils # pylint: disable=g-import-not-at-top
# Enable v2 behavior, as `mixed_bfloat16` is only supported in TF 2.0.
base_layer_utils.enable_v2_dtype_behavior()
tf2.keras.mixed_precision.set_global_policy('mixed_bfloat16')
detection_model = detection_model_fn(
is_training=is_training, add_summaries=(not use_tpu))
scaffold_fn = None
if mode == tf.estimator.ModeKeys.TRAIN:
labels = unstack_batch(
labels,
unpad_groundtruth_tensors=train_config.unpad_groundtruth_tensors)
elif mode == tf.estimator.ModeKeys.EVAL:
# When evaluating on train data, it is necessary to check whether
# groundtruth must be unpadded.
boxes_shape = (
labels[fields.InputDataFields.groundtruth_boxes].get_shape()
.as_list())
unpad_groundtruth_tensors = boxes_shape[1] is not None and not use_tpu
labels = unstack_batch(
labels, unpad_groundtruth_tensors=unpad_groundtruth_tensors)
if mode in (tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL):
provide_groundtruth(detection_model, labels)
preprocessed_images = features[fields.InputDataFields.image]
side_inputs = detection_model.get_side_inputs(features)
if use_tpu and train_config.use_bfloat16:
with tf.tpu.bfloat16_scope():
prediction_dict = detection_model.predict(
preprocessed_images,
features[fields.InputDataFields.true_image_shape], **side_inputs)
prediction_dict = ops.bfloat16_to_float32_nested(prediction_dict)
else:
prediction_dict = detection_model.predict(
preprocessed_images,
features[fields.InputDataFields.true_image_shape], **side_inputs)
def postprocess_wrapper(args):
return detection_model.postprocess(args[0], args[1])
if mode in (tf.estimator.ModeKeys.EVAL, tf.estimator.ModeKeys.PREDICT):
if use_tpu and postprocess_on_cpu:
detections = tf.tpu.outside_compilation(
postprocess_wrapper,
(prediction_dict,
features[fields.InputDataFields.true_image_shape]))
else:
detections = postprocess_wrapper((
prediction_dict,
features[fields.InputDataFields.true_image_shape]))
if mode == tf.estimator.ModeKeys.TRAIN:
load_pretrained = hparams.load_pretrained if hparams else False
if train_config.fine_tune_checkpoint and load_pretrained:
if not train_config.fine_tune_checkpoint_type:
# train_config.from_detection_checkpoint field is deprecated. For
# backward compatibility, set train_config.fine_tune_checkpoint_type
# based on train_config.from_detection_checkpoint.
if train_config.from_detection_checkpoint:
train_config.fine_tune_checkpoint_type = 'detection'
else:
train_config.fine_tune_checkpoint_type = 'classification'
asg_map = detection_model.restore_map(
fine_tune_checkpoint_type=train_config.fine_tune_checkpoint_type,
load_all_detection_checkpoint_vars=(
train_config.load_all_detection_checkpoint_vars))
available_var_map = (
variables_helper.get_variables_available_in_checkpoint(
asg_map,
train_config.fine_tune_checkpoint,
include_global_step=False))
if use_tpu:
def tpu_scaffold():
tf.train.init_from_checkpoint(train_config.fine_tune_checkpoint,
available_var_map)
return tf.train.Scaffold()
scaffold_fn = tpu_scaffold
else:
tf.train.init_from_checkpoint(train_config.fine_tune_checkpoint,
available_var_map)
if mode in (tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL):
if (mode == tf.estimator.ModeKeys.EVAL and
eval_config.use_dummy_loss_in_eval):
total_loss = tf.constant(1.0)
losses_dict = {'Loss/total_loss': total_loss}
else:
losses_dict = detection_model.loss(
prediction_dict, features[fields.InputDataFields.true_image_shape])
losses = list(losses_dict.values())
if train_config.add_regularization_loss:
regularization_losses = detection_model.regularization_losses()
if use_tpu and train_config.use_bfloat16:
regularization_losses = ops.bfloat16_to_float32_nested(
regularization_losses)
if regularization_losses:
regularization_loss = tf.add_n(
regularization_losses, name='regularization_loss')
losses.append(regularization_loss)
losses_dict['Loss/regularization_loss'] = regularization_loss
total_loss = tf.add_n(losses, name='total_loss')
losses_dict['Loss/total_loss'] = total_loss
if 'graph_rewriter_config' in configs:
graph_rewriter_fn = graph_rewriter_builder.build(
configs['graph_rewriter_config'], is_training=is_training)
graph_rewriter_fn()
# TODO(rathodv): Stop creating optimizer summary vars in EVAL mode once we
# can write learning rate summaries on TPU without host calls.
global_step = tf.train.get_or_create_global_step()
training_optimizer, optimizer_summary_vars = optimizer_builder.build(
train_config.optimizer)
if mode == tf.estimator.ModeKeys.TRAIN:
if use_tpu:
training_optimizer = tf.tpu.CrossShardOptimizer(training_optimizer)
# Optionally freeze some layers by setting their gradients to be zero.
trainable_variables = None
include_variables = (
train_config.update_trainable_variables
if train_config.update_trainable_variables else None)
exclude_variables = (
train_config.freeze_variables
if train_config.freeze_variables else None)
trainable_variables = slim.filter_variables(
tf.trainable_variables(),
include_patterns=include_variables,
exclude_patterns=exclude_variables)
clip_gradients_value = None
if train_config.gradient_clipping_by_norm > 0:
clip_gradients_value = train_config.gradient_clipping_by_norm
if not use_tpu:
for var in optimizer_summary_vars:
tf.summary.scalar(var.op.name, var)
summaries = [] if use_tpu else None
if train_config.summarize_gradients:
summaries = ['gradients', 'gradient_norm', 'global_gradient_norm']
train_op = slim.optimizers.optimize_loss(
loss=total_loss,
global_step=global_step,
learning_rate=None,
clip_gradients=clip_gradients_value,
optimizer=training_optimizer,
update_ops=detection_model.updates(),
variables=trainable_variables,
summaries=summaries,
name='') # Preventing scope prefix on all variables.
if mode == tf.estimator.ModeKeys.PREDICT:
exported_output = exporter_lib.add_output_tensor_nodes(detections)
export_outputs = {
tf.saved_model.signature_constants.PREDICT_METHOD_NAME:
tf.estimator.export.PredictOutput(exported_output)
}
eval_metric_ops = None
scaffold = None
if mode == tf.estimator.ModeKeys.EVAL:
class_agnostic = (
fields.DetectionResultFields.detection_classes not in detections)
groundtruth = _prepare_groundtruth_for_eval(
detection_model, class_agnostic,
eval_input_config.max_number_of_boxes)
use_original_images = fields.InputDataFields.original_image in features
if use_original_images:
eval_images = features[fields.InputDataFields.original_image]
true_image_shapes = tf.slice(
features[fields.InputDataFields.true_image_shape], [0, 0], [-1, 3])
original_image_spatial_shapes = features[fields.InputDataFields
.original_image_spatial_shape]
else:
eval_images = features[fields.InputDataFields.image]
true_image_shapes = None
original_image_spatial_shapes = None
eval_dict = eval_util.result_dict_for_batched_example(
eval_images,
features[inputs.HASH_KEY],
detections,
groundtruth,
class_agnostic=class_agnostic,
scale_to_absolute=True,
original_image_spatial_shapes=original_image_spatial_shapes,
true_image_shapes=true_image_shapes)
if fields.InputDataFields.image_additional_channels in features:
eval_dict[fields.InputDataFields.image_additional_channels] = features[
fields.InputDataFields.image_additional_channels]
if class_agnostic:
category_index = label_map_util.create_class_agnostic_category_index()
else:
category_index = label_map_util.create_category_index_from_labelmap(
eval_input_config.label_map_path)
vis_metric_ops = None
if not use_tpu and use_original_images:
keypoint_edges = [
(kp.start, kp.end) for kp in eval_config.keypoint_edge]
eval_metric_op_vis = vis_utils.VisualizeSingleFrameDetections(
category_index,
max_examples_to_draw=eval_config.num_visualizations,
max_boxes_to_draw=eval_config.max_num_boxes_to_visualize,
min_score_thresh=eval_config.min_score_threshold,
use_normalized_coordinates=False,
keypoint_edges=keypoint_edges or None)
vis_metric_ops = eval_metric_op_vis.get_estimator_eval_metric_ops(
eval_dict)
# Eval metrics on a single example.
eval_metric_ops = eval_util.get_eval_metric_ops_for_evaluators(
eval_config, list(category_index.values()), eval_dict)
for loss_key, loss_tensor in losses_dict.items():
eval_metric_ops[loss_key] = tf.metrics.mean(loss_tensor)
for var in optimizer_summary_vars:
eval_metric_ops[var.op.name] = (var, tf.no_op())
if vis_metric_ops is not None:
eval_metric_ops.update(vis_metric_ops)
eval_metric_ops = {str(k): v for k, v in eval_metric_ops.items()}
if eval_config.use_moving_averages:
variable_averages = tf.train.ExponentialMovingAverage(0.0)
variables_to_restore = variable_averages.variables_to_restore()
keep_checkpoint_every_n_hours = (
train_config.keep_checkpoint_every_n_hours)
saver = tf.train.Saver(
variables_to_restore,
keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours)
scaffold = tf.train.Scaffold(saver=saver)
# EVAL executes on CPU, so use regular non-TPU EstimatorSpec.
if use_tpu and mode != tf.estimator.ModeKeys.EVAL:
return tf.estimator.tpu.TPUEstimatorSpec(
mode=mode,
scaffold_fn=scaffold_fn,
predictions=detections,
loss=total_loss,
train_op=train_op,
eval_metrics=eval_metric_ops,
export_outputs=export_outputs)
else:
if scaffold is None:
keep_checkpoint_every_n_hours = (
train_config.keep_checkpoint_every_n_hours)
saver = tf.train.Saver(
sharded=True,
keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours,
save_relative_paths=True)
tf.add_to_collection(tf.GraphKeys.SAVERS, saver)
scaffold = tf.train.Scaffold(saver=saver)
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=detections,
loss=total_loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops,
export_outputs=export_outputs,
scaffold=scaffold)
return model_fn
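# Example (hypothetical wiring, mirroring how this module is used elsewhere;
# `pipeline_config_path` is assumed to point at a valid pipeline.config):
#   configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
#   model_fn = create_model_fn(
#       functools.partial(model_builder.build, model_config=configs['model']),
#       configs)
#   estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='/tmp/od')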
def create_estimator_and_inputs(run_config,
hparams=None,
pipeline_config_path=None,
config_override=None,
train_steps=None,
sample_1_of_n_eval_examples=1,
sample_1_of_n_eval_on_train_examples=1,
model_fn_creator=create_model_fn,
use_tpu_estimator=False,
use_tpu=False,
num_shards=1,
params=None,
override_eval_num_epochs=True,
save_final_config=False,
postprocess_on_cpu=False,
export_to_tpu=None,
**kwargs):
"""Creates `Estimator`, input functions, and steps.
Args:
run_config: A `RunConfig`.
hparams: (optional) A `HParams`.
pipeline_config_path: A path to a pipeline config file.
config_override: A pipeline_pb2.TrainEvalPipelineConfig text proto to
override the config from `pipeline_config_path`.
train_steps: Number of training steps. If None, the number of training steps
is set from the `TrainConfig` proto.
sample_1_of_n_eval_examples: Integer representing how often an eval example
should be sampled. If 1, will sample all examples.
sample_1_of_n_eval_on_train_examples: Similar to
`sample_1_of_n_eval_examples`, except controls the sampling of training
data for evaluation.
model_fn_creator: A function that creates a `model_fn` for `Estimator`.
Follows the signature:
* Args:
* `detection_model_fn`: Function that returns `DetectionModel` instance.
* `configs`: Dictionary of pipeline config objects.
* `hparams`: `HParams` object.
* Returns:
`model_fn` for `Estimator`.
use_tpu_estimator: Whether a `TPUEstimator` should be returned. If False,
an `Estimator` will be returned.
use_tpu: Boolean, whether training and evaluation should run on TPU. Only
used if `use_tpu_estimator` is True.
num_shards: Number of shards (TPU cores). Only used if `use_tpu_estimator`
is True.
params: Parameter dictionary passed from the estimator. Only used if
`use_tpu_estimator` is True.
override_eval_num_epochs: Whether to overwrite the number of epochs to 1 for
eval_input.
save_final_config: Whether to save final config (obtained after applying
overrides) to `estimator.model_dir`.
postprocess_on_cpu: When use_tpu and postprocess_on_cpu are true,
postprocess is scheduled on the host cpu.
export_to_tpu: When use_tpu and export_to_tpu are true,
`export_savedmodel()` exports a metagraph for serving on TPU besides the
one on CPU.
**kwargs: Additional keyword arguments for configuration override.
Returns:
A dictionary with the following fields:
'estimator': An `Estimator` or `TPUEstimator`.
'train_input_fn': A training input function.
'eval_input_fns': A list of all evaluation input functions.
'eval_input_names': A list of names for each evaluation input.
'eval_on_train_input_fn': An evaluation-on-train input function.
'predict_input_fn': A prediction input function.
'train_steps': Number of training steps. Either directly from input or from
configuration.
"""
get_configs_from_pipeline_file = MODEL_BUILD_UTIL_MAP[
'get_configs_from_pipeline_file']
merge_external_params_with_configs = MODEL_BUILD_UTIL_MAP[
'merge_external_params_with_configs']
create_pipeline_proto_from_configs = MODEL_BUILD_UTIL_MAP[
'create_pipeline_proto_from_configs']
create_train_input_fn = MODEL_BUILD_UTIL_MAP['create_train_input_fn']
create_eval_input_fn = MODEL_BUILD_UTIL_MAP['create_eval_input_fn']
create_predict_input_fn = MODEL_BUILD_UTIL_MAP['create_predict_input_fn']
detection_model_fn_base = MODEL_BUILD_UTIL_MAP['detection_model_fn_base']
configs = get_configs_from_pipeline_file(
pipeline_config_path, config_override=config_override)
kwargs.update({
'train_steps': train_steps,
'use_bfloat16': configs['train_config'].use_bfloat16 and use_tpu
})
if sample_1_of_n_eval_examples >= 1:
kwargs.update({
'sample_1_of_n_eval_examples': sample_1_of_n_eval_examples
})
if override_eval_num_epochs:
kwargs.update({'eval_num_epochs': 1})
tf.logging.warning(
'Forced number of epochs for all eval validations to be 1.')
configs = merge_external_params_with_configs(
configs, hparams, kwargs_dict=kwargs)
model_config = configs['model']
train_config = configs['train_config']
train_input_config = configs['train_input_config']
eval_config = configs['eval_config']
eval_input_configs = configs['eval_input_configs']
eval_on_train_input_config = copy.deepcopy(train_input_config)
eval_on_train_input_config.sample_1_of_n_examples = (
sample_1_of_n_eval_on_train_examples)
if override_eval_num_epochs and eval_on_train_input_config.num_epochs != 1:
tf.logging.warning('Expected number of evaluation epochs is 1, but '
'instead encountered `eval_on_train_input_config'
'.num_epochs` = '
'{}. Overwriting `num_epochs` to 1.'.format(
eval_on_train_input_config.num_epochs))
eval_on_train_input_config.num_epochs = 1
# update train_steps from config but only when non-zero value is provided
if train_steps is None and train_config.num_steps != 0:
train_steps = train_config.num_steps
detection_model_fn = functools.partial(
detection_model_fn_base, model_config=model_config)
# Create the input functions for TRAIN/EVAL/PREDICT.
train_input_fn = create_train_input_fn(
train_config=train_config,
train_input_config=train_input_config,
model_config=model_config)
eval_input_fns = []
for eval_input_config in eval_input_configs:
eval_input_fns.append(
create_eval_input_fn(
eval_config=eval_config,
eval_input_config=eval_input_config,
model_config=model_config))
eval_input_names = [
eval_input_config.name for eval_input_config in eval_input_configs
]
eval_on_train_input_fn = create_eval_input_fn(
eval_config=eval_config,
eval_input_config=eval_on_train_input_config,
model_config=model_config)
predict_input_fn = create_predict_input_fn(
model_config=model_config, predict_input_config=eval_input_configs[0])
# Read export_to_tpu from hparams if not passed.
if export_to_tpu is None and hparams is not None:
export_to_tpu = hparams.get('export_to_tpu', False)
tf.logging.info('create_estimator_and_inputs: use_tpu %s, export_to_tpu %s',
use_tpu, export_to_tpu)
model_fn = model_fn_creator(detection_model_fn, configs, hparams, use_tpu,
postprocess_on_cpu)
if use_tpu_estimator:
estimator = tf.estimator.tpu.TPUEstimator(
model_fn=model_fn,
train_batch_size=train_config.batch_size,
# For each core, only batch size 1 is supported for eval.
eval_batch_size=num_shards * 1 if use_tpu else 1,
use_tpu=use_tpu,
config=run_config,
export_to_tpu=export_to_tpu,
eval_on_tpu=False, # Eval runs on CPU, so disable eval on TPU
params=params if params else {})
else:
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
# Write the as-run pipeline config to disk.
if run_config.is_chief and save_final_config:
pipeline_config_final = create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_config_final, estimator.model_dir)
return dict(
estimator=estimator,
train_input_fn=train_input_fn,
eval_input_fns=eval_input_fns,
eval_input_names=eval_input_names,
eval_on_train_input_fn=eval_on_train_input_fn,
predict_input_fn=predict_input_fn,
train_steps=train_steps)
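# Example (sketch; argument values are placeholders, not recommendations):
#   run_config = tf.estimator.RunConfig(model_dir='/tmp/od_model')
#   train_and_eval_dict = create_estimator_and_inputs(
#       run_config,
#       pipeline_config_path='/tmp/pipeline.config',
#       train_steps=None,  # falls back to TrainConfig.num_steps
#       save_final_config=True)
#   estimator = train_and_eval_dict['estimator']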
def create_train_and_eval_specs(train_input_fn,
eval_input_fns,
eval_on_train_input_fn,
predict_input_fn,
train_steps,
eval_on_train_data=False,
final_exporter_name='Servo',
eval_spec_names=None):
"""Creates a `TrainSpec` and `EvalSpec`s.
Args:
train_input_fn: Function that produces features and labels on train data.
eval_input_fns: A list of functions that produce features and labels on eval
data.
eval_on_train_input_fn: Function that produces features and labels for
evaluation on train data.
predict_input_fn: Function that produces features for inference.
train_steps: Number of training steps.
eval_on_train_data: Whether to evaluate model on training data. Default is
False.
final_exporter_name: String name given to `FinalExporter`.
eval_spec_names: A list of string names for each `EvalSpec`.
Returns:
Tuple of `TrainSpec` and list of `EvalSpecs`. If `eval_on_train_data` is
True, the last `EvalSpec` in the list will correspond to training data. The
remaining `EvalSpec`s correspond to the evaluation datasets.
"""
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn, max_steps=train_steps)
if eval_spec_names is None:
eval_spec_names = [str(i) for i in range(len(eval_input_fns))]
eval_specs = []
for index, (eval_spec_name, eval_input_fn) in enumerate(
zip(eval_spec_names, eval_input_fns)):
# Uses final_exporter_name as exporter_name for the first eval spec for
# backward compatibility.
if index == 0:
exporter_name = final_exporter_name
else:
exporter_name = '{}_{}'.format(final_exporter_name, eval_spec_name)
exporter = tf.estimator.FinalExporter(
name=exporter_name, serving_input_receiver_fn=predict_input_fn)
eval_specs.append(
tf.estimator.EvalSpec(
name=eval_spec_name,
input_fn=eval_input_fn,
steps=None,
exporters=exporter))
if eval_on_train_data:
eval_specs.append(
tf.estimator.EvalSpec(
name='eval_on_train', input_fn=eval_on_train_input_fn, steps=None))
return train_spec, eval_specs
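# Example (sketch continuing the hypothetical `train_and_eval_dict` above):
# driving a run with tf.estimator.train_and_evaluate, the replacement for
# the deprecated Experiment class.
#   train_spec, eval_specs = create_train_and_eval_specs(
#       train_and_eval_dict['train_input_fn'],
#       train_and_eval_dict['eval_input_fns'],
#       train_and_eval_dict['eval_on_train_input_fn'],
#       train_and_eval_dict['predict_input_fn'],
#       train_and_eval_dict['train_steps'])
#   tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])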
def _evaluate_checkpoint(estimator,
input_fn,
checkpoint_path,
name,
max_retries=0):
"""Evaluates a checkpoint.
Args:
estimator: Estimator object to use for evaluation.
input_fn: Input function to use for evaluation.
checkpoint_path: Path of the checkpoint to evaluate.
name: Namescope for eval summary.
max_retries: Maximum number of times to retry the evaluation on encountering
a tf.errors.InvalidArgumentError. If negative, will always retry the
evaluation.
Returns:
Estimator evaluation results.
"""
always_retry = max_retries < 0
retries = 0
while always_retry or retries <= max_retries:
try:
return estimator.evaluate(
input_fn=input_fn,
steps=None,
checkpoint_path=checkpoint_path,
name=name)
except tf.errors.InvalidArgumentError as e:
if always_retry or retries < max_retries:
tf.logging.info('Retrying checkpoint evaluation after exception: %s', e)
retries += 1
else:
raise
def continuous_eval_generator(estimator,
model_dir,
input_fn,
train_steps,
name,
max_retries=0):
"""Perform continuous evaluation on checkpoints written to a model directory.
Args:
estimator: Estimator object to use for evaluation.
model_dir: Model directory to read checkpoints for continuous evaluation.
input_fn: Input function to use for evaluation.
train_steps: Number of training steps. This is used to infer the last
checkpoint and stop evaluation loop.
name: Namescope for eval summary.
max_retries: Maximum number of times to retry the evaluation on encountering
a tf.errors.InvalidArgumentError. If negative, will always retry the
evaluation.
Yields:
Pair of current step and eval_results.
"""
def terminate_eval():
tf.logging.info('Terminating eval after 180 seconds of no checkpoints')
return True
for ckpt in tf.train.checkpoints_iterator(
model_dir, min_interval_secs=180, timeout=None,
timeout_fn=terminate_eval):
tf.logging.info('Starting Evaluation.')
try:
eval_results = _evaluate_checkpoint(
estimator=estimator,
input_fn=input_fn,
checkpoint_path=ckpt,
name=name,
max_retries=max_retries)
tf.logging.info('Eval results: %s' % eval_results)
# Terminate eval job when final checkpoint is reached
current_step = int(os.path.basename(ckpt).split('-')[1])
yield (current_step, eval_results)
if current_step >= train_steps:
tf.logging.info(
'Evaluation finished after training step %d' % current_step)
break
except tf.errors.NotFoundError:
tf.logging.info(
'Checkpoint %s no longer exists, skipping checkpoint' % ckpt)
def continuous_eval(estimator,
model_dir,
input_fn,
train_steps,
name,
max_retries=0):
"""Performs continuous evaluation on checkpoints written to a model directory.
Args:
estimator: Estimator object to use for evaluation.
model_dir: Model directory to read checkpoints for continuous evaluation.
input_fn: Input function to use for evaluation.
train_steps: Number of training steps. This is used to infer the last
checkpoint and stop evaluation loop.
name: Namescope for eval summary.
max_retries: Maximum number of times to retry the evaluation on encountering
a tf.errors.InvalidArgumentError. If negative, will always retry the
evaluation.
"""
for current_step, eval_results in continuous_eval_generator(
estimator, model_dir, input_fn, train_steps, name, max_retries):
tf.logging.info('Step %s, Eval results: %s', current_step, eval_results)
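# Example (sketch): a dedicated evaluator job watching a training directory.
# `estimator` and `eval_input_fn` are assumed to come from
# create_estimator_and_inputs above.
#   continuous_eval(estimator, model_dir='/tmp/od_model',
#                   input_fn=eval_input_fn, train_steps=200000, name='val')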
def populate_experiment(run_config,
hparams,
pipeline_config_path,
train_steps=None,
eval_steps=None,
model_fn_creator=create_model_fn,
**kwargs):
"""Populates an `Experiment` object.
EXPERIMENT CLASS IS DEPRECATED. Please switch to
tf.estimator.train_and_evaluate. As an example, see model_main.py.
Args:
run_config: A `RunConfig`.
hparams: A `HParams`.
pipeline_config_path: A path to a pipeline config file.
train_steps: Number of training steps. If None, the number of training steps
is set from the `TrainConfig` proto.
eval_steps: Number of evaluation steps per evaluation cycle. If None, the
number of evaluation steps is set from the `EvalConfig` proto.
model_fn_creator: A function that creates a `model_fn` for `Estimator`.
Follows the signature:
* Args:
* `detection_model_fn`: Function that returns `DetectionModel` instance.
* `configs`: Dictionary of pipeline config objects.
* `hparams`: `HParams` object.
* Returns:
`model_fn` for `Estimator`.
**kwargs: Additional keyword arguments for configuration override.
Returns:
An `Experiment` that defines all aspects of training, evaluation, and
export.
"""
tf.logging.warning('Experiment is being deprecated. Please use '
'tf.estimator.train_and_evaluate(). See model_main.py for '
'an example.')
train_and_eval_dict = create_estimator_and_inputs(
run_config,
hparams,
pipeline_config_path,
train_steps=train_steps,
eval_steps=eval_steps,
model_fn_creator=model_fn_creator,
save_final_config=True,
**kwargs)
estimator = train_and_eval_dict['estimator']
train_input_fn = train_and_eval_dict['train_input_fn']
eval_input_fns = train_and_eval_dict['eval_input_fns']
predict_input_fn = train_and_eval_dict['predict_input_fn']
train_steps = train_and_eval_dict['train_steps']
export_strategies = [
contrib_learn.utils.saved_model_export_utils.make_export_strategy(
serving_input_fn=predict_input_fn)
]
return contrib_learn.Experiment(
estimator=estimator,
train_input_fn=train_input_fn,
eval_input_fn=eval_input_fns[0],
train_steps=train_steps,
eval_steps=None,
export_strategies=export_strategies,
eval_delay_secs=120,
)
# ==== End of file: object_detection/model_lib.py. ====
r"""Constructs model, inputs, and training environment."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import os
import pprint
import time
import numpy as np
import tensorflow.compat.v1 as tf
from object_detection import eval_util
from object_detection import inputs
from object_detection import model_lib
from object_detection.builders import optimizer_builder
from object_detection.core import standard_fields as fields
from object_detection.protos import train_pb2
from object_detection.utils import config_util
from object_detection.utils import label_map_util
from object_detection.utils import ops
from object_detection.utils import variables_helper
from object_detection.utils import visualization_utils as vutils
MODEL_BUILD_UTIL_MAP = model_lib.MODEL_BUILD_UTIL_MAP
NUM_STEPS_PER_ITERATION = 100
RESTORE_MAP_ERROR_TEMPLATE = (
'Since we are restoring a v2 style checkpoint'
' restore_map was expected to return a (str -> Model) mapping,'
' but we received a ({} -> {}) mapping instead.'
)
def _compute_losses_and_predictions_dicts(
model, features, labels,
add_regularization_loss=True):
"""Computes the losses dict and predictions dict for a model on inputs.
Args:
model: a DetectionModel (based on Keras).
features: Dictionary of feature tensors from the input dataset.
Should be in the format output by `inputs.train_input` and
`inputs.eval_input`.
features[fields.InputDataFields.image] is a [batch_size, H, W, C]
float32 tensor with preprocessed images.
features[HASH_KEY] is a [batch_size] int32 tensor representing unique
identifiers for the images.
features[fields.InputDataFields.true_image_shape] is a [batch_size, 3]
int32 tensor representing the true image shapes, as preprocessed
images could be padded.
features[fields.InputDataFields.original_image] (optional) is a
[batch_size, H, W, C] float32 tensor with original images.
labels: A dictionary of groundtruth tensors post-unstacking. The original
labels are of the form returned by `inputs.train_input` and
`inputs.eval_input`. The shapes may have been modified by unstacking with
`model_lib.unstack_batch`. However, the dictionary includes the following
fields.
labels[fields.InputDataFields.num_groundtruth_boxes] is a
int32 tensor indicating the number of valid groundtruth boxes
per image.
labels[fields.InputDataFields.groundtruth_boxes] is a float32 tensor
containing the corners of the groundtruth boxes.
labels[fields.InputDataFields.groundtruth_classes] is a float32
one-hot tensor of classes.
labels[fields.InputDataFields.groundtruth_weights] is a float32 tensor
containing groundtruth weights for the boxes.
-- Optional --
labels[fields.InputDataFields.groundtruth_instance_masks] is a
float32 tensor containing only binary values, which represent
instance masks for objects.
labels[fields.InputDataFields.groundtruth_instance_mask_weights] is a
float32 tensor containing weights for the instance masks.
labels[fields.InputDataFields.groundtruth_keypoints] is a
float32 tensor containing keypoints for each box.
labels[fields.InputDataFields.groundtruth_dp_num_points] is an int32
tensor with the number of sampled DensePose points per object.
labels[fields.InputDataFields.groundtruth_dp_part_ids] is an int32
tensor with the DensePose part ids (0-indexed) per object.
labels[fields.InputDataFields.groundtruth_dp_surface_coords] is a
float32 tensor with the DensePose surface coordinates.
labels[fields.InputDataFields.groundtruth_group_of] is a tf.bool tensor
containing group_of annotations.
labels[fields.InputDataFields.groundtruth_labeled_classes] is a float32
k-hot tensor of classes.
labels[fields.InputDataFields.groundtruth_track_ids] is a int32
tensor of track IDs.
labels[fields.InputDataFields.groundtruth_keypoint_depths] is a
float32 tensor containing keypoint depths information.
labels[fields.InputDataFields.groundtruth_keypoint_depth_weights] is a
float32 tensor containing the weights of the keypoint depth feature.
add_regularization_loss: Whether or not to include the model's
regularization loss in the losses dictionary.
Returns:
A tuple containing the losses dictionary (with the total loss under
the key 'Loss/total_loss'), and the predictions dictionary produced by
`model.predict`.
"""
model_lib.provide_groundtruth(model, labels)
preprocessed_images = features[fields.InputDataFields.image]
prediction_dict = model.predict(
preprocessed_images,
features[fields.InputDataFields.true_image_shape],
**model.get_side_inputs(features))
prediction_dict = ops.bfloat16_to_float32_nested(prediction_dict)
losses_dict = model.loss(
prediction_dict, features[fields.InputDataFields.true_image_shape])
losses = list(losses_dict.values())
if add_regularization_loss:
# TODO(kaftan): As we figure out mixed precision & bfloat 16, we may
## need to convert these regularization losses from bfloat16 to float32
## as well.
regularization_losses = model.regularization_losses()
if regularization_losses:
regularization_losses = ops.bfloat16_to_float32_nested(
regularization_losses)
regularization_loss = tf.add_n(
regularization_losses, name='regularization_loss')
losses.append(regularization_loss)
losses_dict['Loss/regularization_loss'] = regularization_loss
total_loss = tf.add_n(losses, name='total_loss')
losses_dict['Loss/total_loss'] = total_loss
return losses_dict, prediction_dict
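# Example (sketch; `detection_model`, `features`, and `labels` are
# hypothetical objects in the formats documented above): one forward pass
# with losses.
#   losses_dict, prediction_dict = _compute_losses_and_predictions_dicts(
#       detection_model, features, labels, add_regularization_loss=True)
#   total = losses_dict['Loss/total_loss']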
def _ensure_model_is_built(model, input_dataset, unpad_groundtruth_tensors):
"""Ensures that model variables are all built, by running on a dummy input.
Args:
model: A DetectionModel to be built.
input_dataset: The tf.data Dataset the model is being trained on. Needed to
get the shapes for the dummy loss computation.
unpad_groundtruth_tensors: A parameter passed to unstack_batch.
"""
features, labels = next(iter(input_dataset))
@tf.function
def _dummy_computation_fn(features, labels):
model._is_training = False # pylint: disable=protected-access
tf.keras.backend.set_learning_phase(False)
labels = model_lib.unstack_batch(
labels, unpad_groundtruth_tensors=unpad_groundtruth_tensors)
return _compute_losses_and_predictions_dicts(model, features, labels)
strategy = tf.compat.v2.distribute.get_strategy()
if hasattr(tf.distribute.Strategy, 'run'):
strategy.run(
_dummy_computation_fn, args=(
features,
labels,
))
else:
strategy.experimental_run_v2(
_dummy_computation_fn, args=(
features,
labels,
))
def normalize_dict(values_dict, num_replicas):
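  """Divides each value in `values_dict` by `num_replicas`."""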
num_replicas = tf.constant(num_replicas, dtype=tf.float32)
return {key: tf.math.divide(loss, num_replicas) for key, loss
in values_dict.items()}
def reduce_dict(strategy, reduction_dict, reduction_op):
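  """Reduces a dict of per-replica values to single values via `reduction_op`."""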
# TODO(anjalisridhar): explore if it is safe to remove the num_replicas
# scaling of the loss and switch this to a ReduceOp.Mean
return {
name: strategy.reduce(reduction_op, loss, axis=None)
for name, loss in reduction_dict.items()
}
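# Example (sketch): how the two helpers above combine in a distributed step.
# Per-replica losses are first divided by the replica count, then summed
# across replicas, so the reduced total matches a single-worker loss.
# `step_fn` is a hypothetical tf.function assumed to return
# normalize_dict(losses, strategy.num_replicas_in_sync).
#   strategy = tf.compat.v2.distribute.get_strategy()
#   per_replica = strategy.run(step_fn, args=(features, labels))
#   losses = reduce_dict(strategy, per_replica, tf.distribute.ReduceOp.SUM)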
# TODO(kaftan): Explore removing learning_rate from this method & returning
## The full losses dict instead of just total_loss, then doing all summaries
## saving in a utility method called by the outer training loop.
# TODO(kaftan): Explore adding gradient summaries
def eager_train_step(detection_model,
features,
labels,
unpad_groundtruth_tensors,
optimizer,
add_regularization_loss=True,
clip_gradients_value=None,
num_replicas=1.0):
"""Process a single training batch.
This method computes the loss for the model on a single training batch,
while tracking the gradients with a gradient tape. It then updates the
model variables with the optimizer, clipping the gradients if
clip_gradients_value is present.
This method can run eagerly or inside a tf.function.
Args:
detection_model: A DetectionModel (based on Keras) to train.
features: Dictionary of feature tensors from the input dataset.
Should be in the format output by `inputs.train_input.
features[fields.InputDataFields.image] is a [batch_size, H, W, C]
float32 tensor with preprocessed images.
features[HASH_KEY] is a [batch_size] int32 tensor representing unique
identifiers for the images.
features[fields.InputDataFields.true_image_shape] is a [batch_size, 3]
int32 tensor representing the true image shapes, as preprocessed
images could be padded.
features[fields.InputDataFields.original_image] (optional, not used
during training) is a
[batch_size, H, W, C] float32 tensor with original images.
labels: A dictionary of groundtruth tensors. This method unstacks
these labels using model_lib.unstack_batch. The stacked labels are of
the form returned by `inputs.train_input` and `inputs.eval_input`.
labels[fields.InputDataFields.num_groundtruth_boxes] is a [batch_size]
int32 tensor indicating the number of valid groundtruth boxes
per image.
labels[fields.InputDataFields.groundtruth_boxes] is a
[batch_size, num_boxes, 4] float32 tensor containing the corners of
the groundtruth boxes.
labels[fields.InputDataFields.groundtruth_classes] is a
[batch_size, num_boxes, num_classes] float32 one-hot tensor of
classes. num_classes includes the background class.
labels[fields.InputDataFields.groundtruth_weights] is a
[batch_size, num_boxes] float32 tensor containing groundtruth weights
for the boxes.
-- Optional --
labels[fields.InputDataFields.groundtruth_instance_masks] is a
[batch_size, num_boxes, H, W] float32 tensor containing only binary
values, which represent instance masks for objects.
labels[fields.InputDataFields.groundtruth_instance_mask_weights] is a
[batch_size, num_boxes] float32 tensor containing weights for the
instance masks.
labels[fields.InputDataFields.groundtruth_keypoints] is a
[batch_size, num_boxes, num_keypoints, 2] float32 tensor containing
keypoints for each box.
labels[fields.InputDataFields.groundtruth_dp_num_points] is a
[batch_size, num_boxes] int32 tensor with the number of DensePose
sampled points per instance.
labels[fields.InputDataFields.groundtruth_dp_part_ids] is a
[batch_size, num_boxes, max_sampled_points] int32 tensor with the
part ids (0-indexed) for each instance.
labels[fields.InputDataFields.groundtruth_dp_surface_coords] is a
[batch_size, num_boxes, max_sampled_points, 4] float32 tensor with the
surface coordinates for each point. Each surface coordinate is of the
form (y, x, v, u) where (y, x) are normalized image locations and
(v, u) are part-relative normalized surface coordinates.
labels[fields.InputDataFields.groundtruth_labeled_classes] is a float32
k-hot tensor of classes.
labels[fields.InputDataFields.groundtruth_track_ids] is a int32
tensor of track IDs.
labels[fields.InputDataFields.groundtruth_keypoint_depths] is a
float32 tensor containing keypoint depths information.
labels[fields.InputDataFields.groundtruth_keypoint_depth_weights] is a
float32 tensor containing the weights of the keypoint depth feature.
unpad_groundtruth_tensors: A parameter passed to unstack_batch.
optimizer: The training optimizer that will update the variables.
add_regularization_loss: Whether or not to include the model's
regularization loss in the losses dictionary.
clip_gradients_value: If this is present, clip the gradients global norm
at this value using `tf.clip_by_global_norm`.
num_replicas: The number of replicas in the current distribution strategy.
This is used to scale the total loss so that training in a distribution
strategy works correctly.
Returns:
A dictionary of the per-replica-normalized losses observed at this
training step, including the total loss under the key 'Loss/total_loss'.
"""
# """Execute a single training step in the TF v2 style loop."""
is_training = True
detection_model._is_training = is_training # pylint: disable=protected-access
tf.keras.backend.set_learning_phase(is_training)
labels = model_lib.unstack_batch(
labels, unpad_groundtruth_tensors=unpad_groundtruth_tensors)
with tf.GradientTape() as tape:
losses_dict, _ = _compute_losses_and_predictions_dicts(
detection_model, features, labels, add_regularization_loss)
losses_dict = normalize_dict(losses_dict, num_replicas)
trainable_variables = detection_model.trainable_variables
total_loss = losses_dict['Loss/total_loss']
gradients = tape.gradient(total_loss, trainable_variables)
if clip_gradients_value:
gradients, _ = tf.clip_by_global_norm(gradients, clip_gradients_value)
optimizer.apply_gradients(zip(gradients, trainable_variables))
return losses_dict
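# Example (sketch): a single eager step outside any distribution strategy.
# `detection_model`, `features`, and `labels` are placeholders for objects
# in the formats documented above; the optimizer choice is illustrative
# (training pipelines normally build it via optimizer_builder).
#   optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
#   losses_dict = eager_train_step(
#       detection_model, features, labels,
#       unpad_groundtruth_tensors=True, optimizer=optimizer,
#       clip_gradients_value=10.0, num_replicas=1.0)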
def validate_tf_v2_checkpoint_restore_map(checkpoint_restore_map):
"""Ensure that given dict is a valid TF v2 style restore map.
Args:
checkpoint_restore_map: A nested dict mapping strings to
tf.keras.Model objects.
Raises:
TypeError: If the keys in checkpoint_restore_map are not strings or if
the values are not tf.Module or tf.train.Checkpoint objects.
"""
for key, value in checkpoint_restore_map.items():
if not (isinstance(key, str) and
(isinstance(value, tf.Module)
or isinstance(value, tf.train.Checkpoint))):
if isinstance(key, str) and isinstance(value, dict):
validate_tf_v2_checkpoint_restore_map(value)
else:
raise TypeError(
RESTORE_MAP_ERROR_TEMPLATE.format(key.__class__.__name__,
value.__class__.__name__))
def is_object_based_checkpoint(checkpoint_path):
"""Returns true if `checkpoint_path` points to an object-based checkpoint."""
var_names = [var[0] for var in tf.train.list_variables(checkpoint_path)]
return '_CHECKPOINTABLE_OBJECT_GRAPH' in var_names
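# Example (sketch; the path is a placeholder): guard a restore on the
# checkpoint format before attempting a V2-style load.
#   if not is_object_based_checkpoint('/tmp/model_dir/ckpt-42'):
#     raise IOError('Expected a TF2 object-based checkpoint.')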
def load_fine_tune_checkpoint(model, checkpoint_path, checkpoint_type,
checkpoint_version, run_model_on_dummy_input,
input_dataset, unpad_groundtruth_tensors):
"""Load a fine tuning classification or detection checkpoint.
To make sure the model variables are all built, this method first executes
the model by computing a dummy loss. (Models might not have built their
variables before their first execution)
It then loads an object-based classification or detection checkpoint.
This method updates the model in-place and does not return a value.
Args:
model: A DetectionModel (based on Keras) to load a fine-tuning
checkpoint for.
checkpoint_path: Directory with checkpoints file or path to checkpoint.
checkpoint_type: Whether to restore from a full detection
checkpoint (with compatible variable names) or to restore from a
classification checkpoint for initialization prior to training.
Valid values: `detection`, `classification`.
checkpoint_version: train_pb2.CheckpointVersion.V1 or V2 enum indicating
whether to load checkpoints in V1 style or V2 style. In this binary
we only support V2 style (object-based) checkpoints.
run_model_on_dummy_input: Whether to run the model on a dummy input in order
to ensure that all model variables have been built successfully before
loading the fine_tune_checkpoint.
input_dataset: The tf.data Dataset the model is being trained on. Needed
to get the shapes for the dummy loss computation.
unpad_groundtruth_tensors: A parameter passed to unstack_batch.
Raises:
IOError: if `checkpoint_path` does not point at a valid object-based
checkpoint
ValueError: if `checkpoint_version` is not train_pb2.CheckpointVersion.V2
"""
if not is_object_based_checkpoint(checkpoint_path):
raise IOError('Checkpoint is expected to be an object-based checkpoint.')
if checkpoint_version == train_pb2.CheckpointVersion.V1:
raise ValueError('Checkpoint version should be V2')
if run_model_on_dummy_input:
_ensure_model_is_built(model, input_dataset, unpad_groundtruth_tensors)
restore_from_objects_dict = model.restore_from_objects(
fine_tune_checkpoint_type=checkpoint_type)
validate_tf_v2_checkpoint_restore_map(restore_from_objects_dict)
ckpt = tf.train.Checkpoint(**restore_from_objects_dict)
ckpt.restore(
checkpoint_path).expect_partial().assert_existing_objects_matched()
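# Example (sketch): warm-starting from a classification checkpoint before
# the training loop. `detection_model` and `train_input` (a tf.data.Dataset
# in the train_input format) are hypothetical placeholders.
#   load_fine_tune_checkpoint(
#       detection_model, '/tmp/backbone/ckpt-0',
#       checkpoint_type='classification',
#       checkpoint_version=train_pb2.CheckpointVersion.V2,
#       run_model_on_dummy_input=True,
#       input_dataset=train_input,
#       unpad_groundtruth_tensors=True)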
def get_filepath(strategy, filepath):
"""Get appropriate filepath for worker.
Args:
strategy: A tf.distribute.Strategy object.
filepath: A path to where the Checkpoint object is stored.
Returns:
A temporary filepath for non-chief workers to use or the original filepath
for the chief.
"""
if strategy.extended.should_checkpoint:
return filepath
else:
# TODO(vighneshb) Replace with the public API when TF exposes it.
task_id = strategy.extended._task_id # pylint:disable=protected-access
return os.path.join(filepath, 'temp_worker_{:03d}'.format(task_id))
def clean_temporary_directories(strategy, filepath):
"""Temporary directory clean up for MultiWorker Mirrored Strategy.
This is needed for all non-chief workers.
Args:
strategy: A tf.distribute.Strategy object.
filepath: The filepath for the temporary directory.
"""
if not strategy.extended.should_checkpoint:
if tf.io.gfile.exists(filepath) and tf.io.gfile.isdir(filepath):
tf.io.gfile.rmtree(filepath)
def train_loop(
pipeline_config_path,
model_dir,
config_override=None,
train_steps=None,
use_tpu=False,
save_final_config=False,
checkpoint_every_n=1000,
checkpoint_max_to_keep=7,
record_summaries=True,
performance_summary_exporter=None,
num_steps_per_iteration=NUM_STEPS_PER_ITERATION,
**kwargs):
"""Trains a model using eager + functions.
This method:
1. Processes the pipeline configs
2. (Optionally) saves the as-run config
3. Builds the model & optimizer
4. Gets the training input data
5. Loads a fine-tuning detection or classification checkpoint if requested
6. Loops over the train data, executing distributed training steps inside
tf.functions.
7. Checkpoints the model every `checkpoint_every_n` training steps.
8. Logs the training metrics as TensorBoard summaries.
Args:
pipeline_config_path: A path to a pipeline config file.
model_dir:
The directory to save checkpoints and summaries to.
config_override: A pipeline_pb2.TrainEvalPipelineConfig text proto to
override the config from `pipeline_config_path`.
train_steps: Number of training steps. If None, the number of training steps
is set from the `TrainConfig` proto.
use_tpu: Boolean, whether training and evaluation should run on TPU.
save_final_config: Whether to save final config (obtained after applying
overrides) to `model_dir`.
checkpoint_every_n:
Checkpoint every n training steps.
checkpoint_max_to_keep:
int, the number of most recent checkpoints to keep in the model directory.
record_summaries: Boolean, whether or not to record summaries defined by
the model or the training pipeline. This does not impact the summaries
of the loss values which are always recorded. Examples of summaries
that are controlled by this flag include:
- Image summaries of training images.
      - Intermediate tensors which may be logged by meta architectures.
performance_summary_exporter: function for exporting performance metrics.
num_steps_per_iteration: int, The number of training steps to perform
in each iteration.
**kwargs: Additional keyword arguments for configuration override.
"""
## Parse the configs
get_configs_from_pipeline_file = MODEL_BUILD_UTIL_MAP[
'get_configs_from_pipeline_file']
merge_external_params_with_configs = MODEL_BUILD_UTIL_MAP[
'merge_external_params_with_configs']
create_pipeline_proto_from_configs = MODEL_BUILD_UTIL_MAP[
'create_pipeline_proto_from_configs']
steps_per_sec_list = []
configs = get_configs_from_pipeline_file(
pipeline_config_path, config_override=config_override)
kwargs.update({
'train_steps': train_steps,
'use_bfloat16': configs['train_config'].use_bfloat16 and use_tpu
})
configs = merge_external_params_with_configs(
configs, None, kwargs_dict=kwargs)
model_config = configs['model']
train_config = configs['train_config']
train_input_config = configs['train_input_config']
unpad_groundtruth_tensors = train_config.unpad_groundtruth_tensors
add_regularization_loss = train_config.add_regularization_loss
clip_gradients_value = None
if train_config.gradient_clipping_by_norm > 0:
clip_gradients_value = train_config.gradient_clipping_by_norm
  # Update train_steps from the config, but only when a non-zero value is
  # provided.
if train_steps is None and train_config.num_steps != 0:
train_steps = train_config.num_steps
if kwargs['use_bfloat16']:
tf.compat.v2.keras.mixed_precision.set_global_policy('mixed_bfloat16')
if train_config.load_all_detection_checkpoint_vars:
raise ValueError('train_pb2.load_all_detection_checkpoint_vars '
'unsupported in TF2')
config_util.update_fine_tune_checkpoint_type(train_config)
fine_tune_checkpoint_type = train_config.fine_tune_checkpoint_type
fine_tune_checkpoint_version = train_config.fine_tune_checkpoint_version
# Write the as-run pipeline config to disk.
if save_final_config:
tf.logging.info('Saving pipeline config file to directory {}'.format(
model_dir))
pipeline_config_final = create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_config_final, model_dir)
# Build the model, optimizer, and training input
strategy = tf.compat.v2.distribute.get_strategy()
with strategy.scope():
detection_model = MODEL_BUILD_UTIL_MAP['detection_model_fn_base'](
model_config=model_config, is_training=True,
add_summaries=record_summaries)
def train_dataset_fn(input_context):
"""Callable to create train input."""
# Create the inputs.
train_input = inputs.train_input(
train_config=train_config,
train_input_config=train_input_config,
model_config=model_config,
model=detection_model,
input_context=input_context)
train_input = train_input.repeat()
return train_input
train_input = strategy.experimental_distribute_datasets_from_function(
train_dataset_fn)
global_step = tf.Variable(
0, trainable=False, dtype=tf.compat.v2.dtypes.int64, name='global_step',
aggregation=tf.compat.v2.VariableAggregation.ONLY_FIRST_REPLICA)
optimizer, (learning_rate,) = optimizer_builder.build(
train_config.optimizer, global_step=global_step)
# We run the detection_model on dummy inputs in order to ensure that the
# model and all its variables have been properly constructed. Specifically,
# this is currently necessary prior to (potentially) creating shadow copies
# of the model variables for the EMA optimizer.
if train_config.optimizer.use_moving_average:
_ensure_model_is_built(detection_model, train_input,
unpad_groundtruth_tensors)
optimizer.shadow_copy(detection_model)
if callable(learning_rate):
learning_rate_fn = learning_rate
else:
learning_rate_fn = lambda: learning_rate
## Train the model
# Get the appropriate filepath (temporary or not) based on whether the worker
# is the chief.
summary_writer_filepath = get_filepath(strategy,
os.path.join(model_dir, 'train'))
summary_writer = tf.compat.v2.summary.create_file_writer(
summary_writer_filepath)
with summary_writer.as_default():
with strategy.scope():
with tf.compat.v2.summary.record_if(
lambda: global_step % num_steps_per_iteration == 0):
# Load a fine-tuning checkpoint.
if train_config.fine_tune_checkpoint:
variables_helper.ensure_checkpoint_supported(
train_config.fine_tune_checkpoint, fine_tune_checkpoint_type,
model_dir)
load_fine_tune_checkpoint(
detection_model, train_config.fine_tune_checkpoint,
fine_tune_checkpoint_type, fine_tune_checkpoint_version,
train_config.run_fine_tune_checkpoint_dummy_computation,
train_input, unpad_groundtruth_tensors)
ckpt = tf.compat.v2.train.Checkpoint(
step=global_step, model=detection_model, optimizer=optimizer)
manager_dir = get_filepath(strategy, model_dir)
if not strategy.extended.should_checkpoint:
checkpoint_max_to_keep = 1
manager = tf.compat.v2.train.CheckpointManager(
ckpt, manager_dir, max_to_keep=checkpoint_max_to_keep)
# We use the following instead of manager.latest_checkpoint because
# manager_dir does not point to the model directory when we are running
# in a worker.
latest_checkpoint = tf.train.latest_checkpoint(model_dir)
ckpt.restore(latest_checkpoint)
def train_step_fn(features, labels):
"""Single train step."""
if record_summaries:
tf.compat.v2.summary.image(
name='train_input_images',
step=global_step,
data=features[fields.InputDataFields.image],
max_outputs=3)
losses_dict = eager_train_step(
detection_model,
features,
labels,
unpad_groundtruth_tensors,
optimizer,
add_regularization_loss=add_regularization_loss,
clip_gradients_value=clip_gradients_value,
num_replicas=strategy.num_replicas_in_sync)
global_step.assign_add(1)
return losses_dict
def _sample_and_train(strategy, train_step_fn, data_iterator):
features, labels = data_iterator.next()
if hasattr(tf.distribute.Strategy, 'run'):
per_replica_losses_dict = strategy.run(
train_step_fn, args=(features, labels))
else:
per_replica_losses_dict = (
strategy.experimental_run_v2(
train_step_fn, args=(features, labels)))
return reduce_dict(
strategy, per_replica_losses_dict, tf.distribute.ReduceOp.SUM)
@tf.function
def _dist_train_step(data_iterator):
"""A distributed train step."""
if num_steps_per_iteration > 1:
for _ in tf.range(num_steps_per_iteration - 1):
# Following suggestion on yaqs/5402607292645376
with tf.name_scope(''):
_sample_and_train(strategy, train_step_fn, data_iterator)
return _sample_and_train(strategy, train_step_fn, data_iterator)
train_input_iter = iter(train_input)
if int(global_step.value()) == 0:
manager.save()
checkpointed_step = int(global_step.value())
logged_step = global_step.value()
last_step_time = time.time()
for _ in range(global_step.value(), train_steps,
num_steps_per_iteration):
losses_dict = _dist_train_step(train_input_iter)
time_taken = time.time() - last_step_time
last_step_time = time.time()
steps_per_sec = num_steps_per_iteration * 1.0 / time_taken
tf.compat.v2.summary.scalar(
'steps_per_sec', steps_per_sec, step=global_step)
steps_per_sec_list.append(steps_per_sec)
logged_dict = losses_dict.copy()
logged_dict['learning_rate'] = learning_rate_fn()
for key, val in logged_dict.items():
tf.compat.v2.summary.scalar(key, val, step=global_step)
if global_step.value() - logged_step >= 100:
logged_dict_np = {name: value.numpy() for name, value in
logged_dict.items()}
tf.logging.info(
'Step {} per-step time {:.3f}s'.format(
global_step.value(), time_taken / num_steps_per_iteration))
tf.logging.info(pprint.pformat(logged_dict_np, width=40))
logged_step = global_step.value()
if ((int(global_step.value()) - checkpointed_step) >=
checkpoint_every_n):
manager.save()
checkpointed_step = int(global_step.value())
# Remove the checkpoint directories of the non-chief workers that
# MultiWorkerMirroredStrategy forces us to save during sync distributed
# training.
clean_temporary_directories(strategy, manager_dir)
clean_temporary_directories(strategy, summary_writer_filepath)
# TODO(pkanwar): add accuracy metrics.
if performance_summary_exporter is not None:
metrics = {
'steps_per_sec': np.mean(steps_per_sec_list),
'steps_per_sec_p50': np.median(steps_per_sec_list),
'steps_per_sec_max': max(steps_per_sec_list),
'last_batch_loss': float(losses_dict['Loss/total_loss'])
}
mixed_precision = 'bf16' if kwargs['use_bfloat16'] else 'fp32'
performance_summary_exporter(metrics, mixed_precision)
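# Hedged sketch of how this loop is typically driven (hypothetical paths;
# mirrors the strategy-scope usage in model_main_tf2.py):
#
#   strategy = tf.compat.v2.distribute.MirroredStrategy()
#   with strategy.scope():
#     train_loop(
#         pipeline_config_path='/tmp/pipeline.config',
#         model_dir='/tmp/model_outputs',
#         train_steps=10000,
#         checkpoint_every_n=1000)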
def prepare_eval_dict(detections, groundtruth, features):
"""Prepares eval dictionary containing detections and groundtruth.
Takes in `detections` from the model, `groundtruth` and `features` returned
from the eval tf.data.dataset and creates a dictionary of tensors suitable
for detection eval modules.
Args:
detections: A dictionary of tensors returned by `model.postprocess`.
groundtruth: `inputs.eval_input` returns an eval dataset of (features,
labels) tuple. `groundtruth` must be set to `labels`.
Please note that:
* fields.InputDataFields.groundtruth_classes must be 0-indexed and
in its 1-hot representation.
      * fields.InputDataFields.groundtruth_verified_neg_classes must be
        0-indexed and in its multi-hot representation.
      * fields.InputDataFields.groundtruth_not_exhaustive_classes must be
        0-indexed and in its multi-hot representation.
      * fields.InputDataFields.groundtruth_labeled_classes must be
        0-indexed and in its multi-hot representation.
features: `inputs.eval_input` returns an eval dataset of (features, labels)
tuple. This argument must be set to a dictionary containing the following
keys and their corresponding values from `features` --
* fields.InputDataFields.image
* fields.InputDataFields.original_image
* fields.InputDataFields.original_image_spatial_shape
* fields.InputDataFields.true_image_shape
* inputs.HASH_KEY
Returns:
eval_dict: A dictionary of tensors to pass to eval module.
class_agnostic: Whether to evaluate detection in class agnostic mode.
"""
groundtruth_boxes = groundtruth[fields.InputDataFields.groundtruth_boxes]
groundtruth_boxes_shape = tf.shape(groundtruth_boxes)
# For class-agnostic models, groundtruth one-hot encodings collapse to all
# ones.
class_agnostic = (
fields.DetectionResultFields.detection_classes not in detections)
if class_agnostic:
groundtruth_classes_one_hot = tf.ones(
[groundtruth_boxes_shape[0], groundtruth_boxes_shape[1], 1])
else:
groundtruth_classes_one_hot = groundtruth[
fields.InputDataFields.groundtruth_classes]
label_id_offset = 1 # Applying label id offset (b/63711816)
groundtruth_classes = (
tf.argmax(groundtruth_classes_one_hot, axis=2) + label_id_offset)
groundtruth[fields.InputDataFields.groundtruth_classes] = groundtruth_classes
label_id_offset_paddings = tf.constant([[0, 0], [1, 0]])
if fields.InputDataFields.groundtruth_verified_neg_classes in groundtruth:
groundtruth[
fields.InputDataFields.groundtruth_verified_neg_classes] = tf.pad(
groundtruth[
fields.InputDataFields.groundtruth_verified_neg_classes],
label_id_offset_paddings)
if fields.InputDataFields.groundtruth_not_exhaustive_classes in groundtruth:
groundtruth[
fields.InputDataFields.groundtruth_not_exhaustive_classes] = tf.pad(
groundtruth[
fields.InputDataFields.groundtruth_not_exhaustive_classes],
label_id_offset_paddings)
if fields.InputDataFields.groundtruth_labeled_classes in groundtruth:
groundtruth[fields.InputDataFields.groundtruth_labeled_classes] = tf.pad(
groundtruth[fields.InputDataFields.groundtruth_labeled_classes],
label_id_offset_paddings)
use_original_images = fields.InputDataFields.original_image in features
if use_original_images:
eval_images = features[fields.InputDataFields.original_image]
true_image_shapes = features[fields.InputDataFields.true_image_shape][:, :3]
original_image_spatial_shapes = features[
fields.InputDataFields.original_image_spatial_shape]
else:
eval_images = features[fields.InputDataFields.image]
true_image_shapes = None
original_image_spatial_shapes = None
eval_dict = eval_util.result_dict_for_batched_example(
eval_images,
features[inputs.HASH_KEY],
detections,
groundtruth,
class_agnostic=class_agnostic,
scale_to_absolute=True,
original_image_spatial_shapes=original_image_spatial_shapes,
true_image_shapes=true_image_shapes)
return eval_dict, class_agnostic
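# Worked example of the label id offset handling above: one-hot groundtruth
# classes of shape [batch, num_boxes, num_classes] become 1-indexed class
# ids via argmax + 1, so a one-hot row [0, 1, 0] maps to class id 2; the
# multi-hot fields are padded with a leading zero column so their indexing
# stays consistent with the shifted ids.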
def concat_replica_results(tensor_dict):
  """Concatenates each per-replica list of tensors along the batch axis."""
  new_tensor_dict = {}
  for key, values in tensor_dict.items():
    new_tensor_dict[key] = tf.concat(values, axis=0)
  return new_tensor_dict
def eager_eval_loop(
detection_model,
configs,
eval_dataset,
use_tpu=False,
postprocess_on_cpu=False,
global_step=None,
):
"""Evaluate the model eagerly on the evaluation dataset.
This method will compute the evaluation metrics specified in the configs on
the entire evaluation dataset, then return the metrics. It will also log
the metrics to TensorBoard.
Args:
detection_model: A DetectionModel (based on Keras) to evaluate.
configs: Object detection configs that specify the evaluators that should
be used, as well as whether regularization loss should be included and
if bfloat16 should be used on TPUs.
eval_dataset: Dataset containing evaluation data.
use_tpu: Whether a TPU is being used to execute the model for evaluation.
postprocess_on_cpu: Whether model postprocessing should happen on
the CPU when using a TPU to execute the model.
global_step: A variable containing the training step this model was trained
to. Used for logging purposes.
Returns:
A dict of evaluation metrics representing the results of this evaluation.
"""
del postprocess_on_cpu
train_config = configs['train_config']
eval_input_config = configs['eval_input_config']
eval_config = configs['eval_config']
add_regularization_loss = train_config.add_regularization_loss
is_training = False
detection_model._is_training = is_training # pylint: disable=protected-access
tf.keras.backend.set_learning_phase(is_training)
evaluator_options = eval_util.evaluator_options_from_eval_config(
eval_config)
batch_size = eval_config.batch_size
class_agnostic_category_index = (
label_map_util.create_class_agnostic_category_index())
class_agnostic_evaluators = eval_util.get_evaluators(
eval_config,
list(class_agnostic_category_index.values()),
evaluator_options)
class_aware_evaluators = None
if eval_input_config.label_map_path:
class_aware_category_index = (
label_map_util.create_category_index_from_labelmap(
eval_input_config.label_map_path))
class_aware_evaluators = eval_util.get_evaluators(
eval_config,
list(class_aware_category_index.values()),
evaluator_options)
evaluators = None
loss_metrics = {}
@tf.function
def compute_eval_dict(features, labels):
"""Compute the evaluation result on an image."""
    # For evaluating on train data, it is necessary to check whether
    # groundtruth must be unpadded.
boxes_shape = (
labels[fields.InputDataFields.groundtruth_boxes].get_shape().as_list())
unpad_groundtruth_tensors = (boxes_shape[1] is not None
and not use_tpu
and batch_size == 1)
groundtruth_dict = labels
labels = model_lib.unstack_batch(
labels, unpad_groundtruth_tensors=unpad_groundtruth_tensors)
losses_dict, prediction_dict = _compute_losses_and_predictions_dicts(
detection_model, features, labels, add_regularization_loss)
prediction_dict = detection_model.postprocess(
prediction_dict, features[fields.InputDataFields.true_image_shape])
eval_features = {
fields.InputDataFields.image:
features[fields.InputDataFields.image],
fields.InputDataFields.original_image:
features[fields.InputDataFields.original_image],
fields.InputDataFields.original_image_spatial_shape:
features[fields.InputDataFields.original_image_spatial_shape],
fields.InputDataFields.true_image_shape:
features[fields.InputDataFields.true_image_shape],
inputs.HASH_KEY: features[inputs.HASH_KEY],
}
return losses_dict, prediction_dict, groundtruth_dict, eval_features
agnostic_categories = label_map_util.create_class_agnostic_category_index()
per_class_categories = label_map_util.create_category_index_from_labelmap(
eval_input_config.label_map_path)
keypoint_edges = [
(kp.start, kp.end) for kp in eval_config.keypoint_edge]
strategy = tf.compat.v2.distribute.get_strategy()
for i, (features, labels) in enumerate(eval_dataset):
try:
(losses_dict, prediction_dict, groundtruth_dict,
eval_features) = strategy.run(
compute_eval_dict, args=(features, labels))
except Exception as exc: # pylint:disable=broad-except
tf.logging.info('Encountered %s exception.', exc)
tf.logging.info('A replica probably exhausted all examples. Skipping '
'pending examples on other replicas.')
break
(local_prediction_dict, local_groundtruth_dict,
local_eval_features) = tf.nest.map_structure(
strategy.experimental_local_results,
[prediction_dict, groundtruth_dict, eval_features])
local_prediction_dict = concat_replica_results(local_prediction_dict)
local_groundtruth_dict = concat_replica_results(local_groundtruth_dict)
local_eval_features = concat_replica_results(local_eval_features)
eval_dict, class_agnostic = prepare_eval_dict(local_prediction_dict,
local_groundtruth_dict,
local_eval_features)
for loss_key, loss_tensor in iter(losses_dict.items()):
losses_dict[loss_key] = strategy.reduce(tf.distribute.ReduceOp.MEAN,
loss_tensor, None)
if class_agnostic:
category_index = agnostic_categories
else:
category_index = per_class_categories
if i % 100 == 0:
tf.logging.info('Finished eval step %d', i)
use_original_images = fields.InputDataFields.original_image in features
if (use_original_images and i < eval_config.num_visualizations):
sbys_image_list = vutils.draw_side_by_side_evaluation_image(
eval_dict,
category_index=category_index,
max_boxes_to_draw=eval_config.max_num_boxes_to_visualize,
min_score_thresh=eval_config.min_score_threshold,
use_normalized_coordinates=False,
keypoint_edges=keypoint_edges or None)
for j, sbys_image in enumerate(sbys_image_list):
tf.compat.v2.summary.image(
name='eval_side_by_side_{}_{}'.format(i, j),
step=global_step,
data=sbys_image,
max_outputs=eval_config.num_visualizations)
if eval_util.has_densepose(eval_dict):
dp_image_list = vutils.draw_densepose_visualizations(
eval_dict)
for j, dp_image in enumerate(dp_image_list):
tf.compat.v2.summary.image(
name='densepose_detections_{}_{}'.format(i, j),
step=global_step,
data=dp_image,
max_outputs=eval_config.num_visualizations)
if evaluators is None:
if class_agnostic:
evaluators = class_agnostic_evaluators
else:
evaluators = class_aware_evaluators
for evaluator in evaluators:
evaluator.add_eval_dict(eval_dict)
for loss_key, loss_tensor in iter(losses_dict.items()):
if loss_key not in loss_metrics:
loss_metrics[loss_key] = []
loss_metrics[loss_key].append(loss_tensor)
eval_metrics = {}
for evaluator in evaluators:
eval_metrics.update(evaluator.evaluate())
for loss_key in loss_metrics:
eval_metrics[loss_key] = tf.reduce_mean(loss_metrics[loss_key])
eval_metrics = {str(k): v for k, v in eval_metrics.items()}
tf.logging.info('Eval metrics at step %d', global_step.numpy())
for k in eval_metrics:
tf.compat.v2.summary.scalar(k, eval_metrics[k], step=global_step)
tf.logging.info('\t+ %s: %f', k, eval_metrics[k])
return eval_metrics
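# Hedged usage sketch (assumes `configs` from get_configs_from_pipeline_file,
# a built `detection_model`, and an `eval_input` dataset of
# (features, labels) batches):
#
#   metrics = eager_eval_loop(
#       detection_model, configs, eval_input,
#       global_step=tf.Variable(0, dtype=tf.int64))
#   # `metrics` maps metric names (losses, evaluator outputs) to scalars.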
def eval_continuously(
pipeline_config_path,
config_override=None,
train_steps=None,
sample_1_of_n_eval_examples=1,
sample_1_of_n_eval_on_train_examples=1,
use_tpu=False,
override_eval_num_epochs=True,
postprocess_on_cpu=False,
model_dir=None,
checkpoint_dir=None,
wait_interval=180,
timeout=3600,
eval_index=0,
save_final_config=False,
**kwargs):
"""Run continuous evaluation of a detection model eagerly.
  This method builds the model, and continuously restores it from the most
  recent training checkpoint in the checkpoint directory and evaluates it
  on the evaluation data.
Args:
pipeline_config_path: A path to a pipeline config file.
config_override: A pipeline_pb2.TrainEvalPipelineConfig text proto to
override the config from `pipeline_config_path`.
train_steps: Number of training steps. If None, the number of training steps
is set from the `TrainConfig` proto.
sample_1_of_n_eval_examples: Integer representing how often an eval example
should be sampled. If 1, will sample all examples.
sample_1_of_n_eval_on_train_examples: Similar to
`sample_1_of_n_eval_examples`, except controls the sampling of training
data for evaluation.
use_tpu: Boolean, whether training and evaluation should run on TPU.
override_eval_num_epochs: Whether to overwrite the number of epochs to 1 for
eval_input.
postprocess_on_cpu: When use_tpu and postprocess_on_cpu are true,
postprocess is scheduled on the host cpu.
model_dir: Directory to output resulting evaluation summaries to.
checkpoint_dir: Directory that contains the training checkpoints.
    wait_interval: The minimum number of seconds to wait before checking for a
      new checkpoint.
    timeout: The maximum number of seconds to wait for a checkpoint. Execution
      will terminate if no new checkpoints are found after this many seconds.
eval_index: int, If given, only evaluate the dataset at the given
index. By default, evaluates dataset at 0'th index.
save_final_config: Whether to save the pipeline config file to the model
directory.
**kwargs: Additional keyword arguments for configuration override.
"""
get_configs_from_pipeline_file = MODEL_BUILD_UTIL_MAP[
'get_configs_from_pipeline_file']
create_pipeline_proto_from_configs = MODEL_BUILD_UTIL_MAP[
'create_pipeline_proto_from_configs']
merge_external_params_with_configs = MODEL_BUILD_UTIL_MAP[
'merge_external_params_with_configs']
configs = get_configs_from_pipeline_file(
pipeline_config_path, config_override=config_override)
kwargs.update({
'sample_1_of_n_eval_examples': sample_1_of_n_eval_examples,
'use_bfloat16': configs['train_config'].use_bfloat16 and use_tpu
})
if train_steps is not None:
kwargs['train_steps'] = train_steps
if override_eval_num_epochs:
kwargs.update({'eval_num_epochs': 1})
tf.logging.warning(
'Forced number of epochs for all eval validations to be 1.')
configs = merge_external_params_with_configs(
configs, None, kwargs_dict=kwargs)
if model_dir and save_final_config:
tf.logging.info('Saving pipeline config file to directory {}'.format(
model_dir))
pipeline_config_final = create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_config_final, model_dir)
model_config = configs['model']
train_input_config = configs['train_input_config']
eval_config = configs['eval_config']
eval_input_configs = configs['eval_input_configs']
eval_on_train_input_config = copy.deepcopy(train_input_config)
eval_on_train_input_config.sample_1_of_n_examples = (
sample_1_of_n_eval_on_train_examples)
if override_eval_num_epochs and eval_on_train_input_config.num_epochs != 1:
tf.logging.warning('Expected number of evaluation epochs is 1, but '
'instead encountered `eval_on_train_input_config'
'.num_epochs` = '
'{}. Overwriting `num_epochs` to 1.'.format(
eval_on_train_input_config.num_epochs))
eval_on_train_input_config.num_epochs = 1
if kwargs['use_bfloat16']:
tf.compat.v2.keras.mixed_precision.set_global_policy('mixed_bfloat16')
eval_input_config = eval_input_configs[eval_index]
strategy = tf.compat.v2.distribute.get_strategy()
with strategy.scope():
detection_model = MODEL_BUILD_UTIL_MAP['detection_model_fn_base'](
model_config=model_config, is_training=True)
eval_input = strategy.experimental_distribute_dataset(
inputs.eval_input(
eval_config=eval_config,
eval_input_config=eval_input_config,
model_config=model_config,
model=detection_model))
global_step = tf.compat.v2.Variable(
0, trainable=False, dtype=tf.compat.v2.dtypes.int64)
optimizer, _ = optimizer_builder.build(
configs['train_config'].optimizer, global_step=global_step)
for latest_checkpoint in tf.train.checkpoints_iterator(
checkpoint_dir, timeout=timeout, min_interval_secs=wait_interval):
ckpt = tf.compat.v2.train.Checkpoint(
step=global_step, model=detection_model, optimizer=optimizer)
# We run the detection_model on dummy inputs in order to ensure that the
# model and all its variables have been properly constructed. Specifically,
# this is currently necessary prior to (potentially) creating shadow copies
# of the model variables for the EMA optimizer.
if eval_config.use_moving_averages:
unpad_groundtruth_tensors = (eval_config.batch_size == 1 and not use_tpu)
_ensure_model_is_built(detection_model, eval_input,
unpad_groundtruth_tensors)
optimizer.shadow_copy(detection_model)
ckpt.restore(latest_checkpoint).expect_partial()
if eval_config.use_moving_averages:
optimizer.swap_weights()
summary_writer = tf.compat.v2.summary.create_file_writer(
os.path.join(model_dir, 'eval', eval_input_config.name))
with summary_writer.as_default():
eager_eval_loop(
detection_model,
configs,
eval_input,
use_tpu=use_tpu,
postprocess_on_cpu=postprocess_on_cpu,
global_step=global_step,
) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/model_lib_v2.py | model_lib_v2.py |
r"""Creates and runs TF2 object detection models.
For local training/evaluation run:
PIPELINE_CONFIG_PATH=path/to/pipeline.config
MODEL_DIR=/tmp/model_outputs
NUM_TRAIN_STEPS=10000
SAMPLE_1_OF_N_EVAL_EXAMPLES=1
python model_main_tf2.py -- \
--model_dir=$MODEL_DIR --num_train_steps=$NUM_TRAIN_STEPS \
--sample_1_of_n_eval_examples=$SAMPLE_1_OF_N_EVAL_EXAMPLES \
--pipeline_config_path=$PIPELINE_CONFIG_PATH \
--alsologtostderr
"""
from absl import flags
import tensorflow.compat.v2 as tf
from object_detection import model_lib_v2
flags.DEFINE_string('pipeline_config_path', None, 'Path to pipeline config '
'file.')
flags.DEFINE_integer('num_train_steps', None, 'Number of train steps.')
flags.DEFINE_bool('eval_on_train_data', False, 'Enable evaluating on train '
'data (only supported in distributed training).')
flags.DEFINE_integer('sample_1_of_n_eval_examples', None, 'Will sample one of '
'every n eval input examples, where n is provided.')
flags.DEFINE_integer('sample_1_of_n_eval_on_train_examples', 5, 'Will sample '
'one of every n train input examples for evaluation, '
'where n is provided. This is only used if '
'`eval_training_data` is True.')
flags.DEFINE_string(
'model_dir', None, 'Path to output model directory '
'where event and checkpoint files will be written.')
flags.DEFINE_string(
'checkpoint_dir', None, 'Path to directory holding a checkpoint. If '
'`checkpoint_dir` is provided, this binary operates in eval-only mode, '
'writing resulting metrics to `model_dir`.')
flags.DEFINE_integer('eval_timeout', 3600, 'Number of seconds to wait for an '
                     'evaluation checkpoint before exiting.')
flags.DEFINE_bool('use_tpu', False, 'Whether the job is executing on a TPU.')
flags.DEFINE_string(
'tpu_name',
default=None,
help='Name of the Cloud TPU for Cluster Resolvers.')
flags.DEFINE_integer(
'num_workers', 1, 'When num_workers > 1, training uses '
'MultiWorkerMirroredStrategy. When num_workers = 1 it uses '
'MirroredStrategy.')
flags.DEFINE_integer(
'checkpoint_every_n', 1000, 'Integer defining how often we checkpoint.')
flags.DEFINE_boolean('record_summaries', True,
('Whether or not to record summaries defined by the model'
' or the training pipeline. This does not impact the'
' summaries of the loss values which are always'
' recorded.'))
FLAGS = flags.FLAGS
def main(unused_argv):
flags.mark_flag_as_required('model_dir')
flags.mark_flag_as_required('pipeline_config_path')
tf.config.set_soft_device_placement(True)
if FLAGS.checkpoint_dir:
model_lib_v2.eval_continuously(
pipeline_config_path=FLAGS.pipeline_config_path,
model_dir=FLAGS.model_dir,
train_steps=FLAGS.num_train_steps,
sample_1_of_n_eval_examples=FLAGS.sample_1_of_n_eval_examples,
sample_1_of_n_eval_on_train_examples=(
FLAGS.sample_1_of_n_eval_on_train_examples),
checkpoint_dir=FLAGS.checkpoint_dir,
wait_interval=300, timeout=FLAGS.eval_timeout)
else:
if FLAGS.use_tpu:
# TPU is automatically inferred if tpu_name is None and
# we are running under cloud ai-platform.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
FLAGS.tpu_name)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
elif FLAGS.num_workers > 1:
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
else:
strategy = tf.compat.v2.distribute.MirroredStrategy()
with strategy.scope():
model_lib_v2.train_loop(
pipeline_config_path=FLAGS.pipeline_config_path,
model_dir=FLAGS.model_dir,
train_steps=FLAGS.num_train_steps,
use_tpu=FLAGS.use_tpu,
checkpoint_every_n=FLAGS.checkpoint_every_n,
record_summaries=FLAGS.record_summaries)
if __name__ == '__main__':
tf.compat.v1.app.run() | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/model_main_tf2.py | model_main_tf2.py |
r"""Exports an SSD detection model to use with tf-lite.
Outputs file:
* A tflite compatible frozen graph - $output_directory/tflite_graph.pb
The exported graph has the following input and output nodes.
Inputs:
'normalized_input_image_tensor': a float32 tensor of shape
[1, height, width, 3] containing the normalized input image. Note that the
height and width must be compatible with the height and width configured in
the fixed_shape_image resizer options in the pipeline config proto.
In a floating point Mobilenet model, 'normalized_input_image_tensor' has values
between [-1,1). This typically means mapping each pixel (linearly)
to a value between [-1, 1]. Input image
values between 0 and 255 are scaled by (1/128.0) and then a value of
-1 is added to them to ensure the range is [-1,1).
In a quantized Mobilenet model, 'normalized_input_image_tensor' has values
between [0, 255].
In general, see the `preprocess` function defined in the feature extractor class
in the object_detection/models directory.
Outputs:
If add_postprocessing_op is true: the frozen graph adds a
TFLite_Detection_PostProcess custom op node that has four outputs:
detection_boxes: a float32 tensor of shape [1, num_boxes, 4] with box
locations
detection_classes: a float32 tensor of shape [1, num_boxes]
with class indices
detection_scores: a float32 tensor of shape [1, num_boxes]
with class scores
num_boxes: a float32 tensor of size 1 containing the number of detected boxes
else:
the graph has two outputs:
'raw_outputs/box_encodings': a float32 tensor of shape [1, num_anchors, 4]
containing the encoded box predictions.
'raw_outputs/class_predictions': a float32 tensor of shape
[1, num_anchors, num_classes] containing the class scores for each anchor
after applying score conversion.
Example Usage:
--------------
python object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path path/to/ssd_mobilenet.config \
--trained_checkpoint_prefix path/to/model.ckpt \
--output_directory path/to/exported_model_directory
The expected output would be in the directory
path/to/exported_model_directory (which is created if it does not exist)
with contents:
- tflite_graph.pbtxt
- tflite_graph.pb
Config overrides (see the `config_override` flag) are text protobufs
(also of type pipeline_pb2.TrainEvalPipelineConfig) which are used to override
certain fields in the provided pipeline_config_path. These are useful for
making small changes to the inference graph that differ from the training or
eval config.
Example Usage (in which we change the NMS iou_threshold to be 0.5 and
NMS score_threshold to be 0.0):
python object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path path/to/ssd_mobilenet.config \
--trained_checkpoint_prefix path/to/model.ckpt \
--output_directory path/to/exported_model_directory \
--config_override " \
model{ \
ssd{ \
post_processing { \
batch_non_max_suppression { \
score_threshold: 0.0 \
iou_threshold: 0.5 \
} \
} \
} \
} \
"
"""
import tensorflow.compat.v1 as tf
from google.protobuf import text_format
from object_detection import export_tflite_ssd_graph_lib
from object_detection.protos import pipeline_pb2
flags = tf.app.flags
flags.DEFINE_string('output_directory', None, 'Path to write outputs.')
flags.DEFINE_string(
'pipeline_config_path', None,
'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
'file.')
flags.DEFINE_string('trained_checkpoint_prefix', None, 'Checkpoint prefix.')
flags.DEFINE_integer('max_detections', 10,
'Maximum number of detections (boxes) to show.')
flags.DEFINE_integer('max_classes_per_detection', 1,
'Maximum number of classes to output per detection box.')
flags.DEFINE_integer(
'detections_per_class', 100,
'Number of anchors used per class in Regular Non-Max-Suppression.')
flags.DEFINE_bool('add_postprocessing_op', True,
'Add TFLite custom op for postprocessing to the graph.')
flags.DEFINE_bool(
'use_regular_nms', False,
'Flag to set postprocessing op to use Regular NMS instead of Fast NMS.')
flags.DEFINE_string(
'config_override', '', 'pipeline_pb2.TrainEvalPipelineConfig '
'text proto to override pipeline_config_path.')
FLAGS = flags.FLAGS
def main(argv):
del argv # Unused.
flags.mark_flag_as_required('output_directory')
flags.mark_flag_as_required('pipeline_config_path')
flags.mark_flag_as_required('trained_checkpoint_prefix')
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.gfile.GFile(FLAGS.pipeline_config_path, 'r') as f:
text_format.Merge(f.read(), pipeline_config)
text_format.Merge(FLAGS.config_override, pipeline_config)
export_tflite_ssd_graph_lib.export_tflite_graph(
pipeline_config, FLAGS.trained_checkpoint_prefix, FLAGS.output_directory,
FLAGS.add_postprocessing_op, FLAGS.max_detections,
FLAGS.max_classes_per_detection, use_regular_nms=FLAGS.use_regular_nms)
if __name__ == '__main__':
tf.app.run(main) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/export_tflite_ssd_graph.py | export_tflite_ssd_graph.py |
"""Library to export TFLite-compatible SavedModel from TF2 detection models."""
import os
import numpy as np
import tensorflow.compat.v1 as tf1
import tensorflow.compat.v2 as tf
from object_detection.builders import model_builder
from object_detection.builders import post_processing_builder
from object_detection.core import box_list
from object_detection.core import standard_fields as fields
_DEFAULT_NUM_CHANNELS = 3
_DEFAULT_NUM_COORD_BOX = 4
_MAX_CLASSES_PER_DETECTION = 1
_DETECTION_POSTPROCESS_FUNC = 'TFLite_Detection_PostProcess'
def get_const_center_size_encoded_anchors(anchors):
"""Exports center-size encoded anchors as a constant tensor.
Args:
anchors: a float32 tensor of shape [num_anchors, 4] containing the anchor
boxes
Returns:
encoded_anchors: a float32 constant tensor of shape [num_anchors, 4]
containing the anchor boxes.
"""
anchor_boxlist = box_list.BoxList(anchors)
y, x, h, w = anchor_boxlist.get_center_coordinates_and_sizes()
num_anchors = y.get_shape().as_list()
with tf1.Session() as sess:
y_out, x_out, h_out, w_out = sess.run([y, x, h, w])
encoded_anchors = tf1.constant(
np.transpose(np.stack((y_out, x_out, h_out, w_out))),
dtype=tf1.float32,
shape=[num_anchors[0], _DEFAULT_NUM_COORD_BOX],
name='anchors')
return num_anchors[0], encoded_anchors
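# The TFLite_Detection_PostProcess custom op consumes anchors in this
# center-size layout -- (y_center, x_center, height, width) per row --
# together with the y/x/h/w scale values taken from the box coder config.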
class SSDModule(tf.Module):
"""Inference Module for TFLite-friendly SSD models."""
def __init__(self, pipeline_config, detection_model, max_detections,
use_regular_nms):
"""Initialization.
Args:
pipeline_config: The original pipeline_pb2.TrainEvalPipelineConfig
detection_model: The detection model to use for inference.
max_detections: Max detections desired from the TFLite model.
use_regular_nms: If True, TFLite model uses the (slower) multi-class NMS.
"""
self._process_config(pipeline_config)
self._pipeline_config = pipeline_config
self._model = detection_model
self._max_detections = max_detections
self._use_regular_nms = use_regular_nms
def _process_config(self, pipeline_config):
self._num_classes = pipeline_config.model.ssd.num_classes
self._nms_score_threshold = pipeline_config.model.ssd.post_processing.batch_non_max_suppression.score_threshold
self._nms_iou_threshold = pipeline_config.model.ssd.post_processing.batch_non_max_suppression.iou_threshold
self._scale_values = {}
self._scale_values[
'y_scale'] = pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.y_scale
self._scale_values[
'x_scale'] = pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.x_scale
self._scale_values[
'h_scale'] = pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.height_scale
self._scale_values[
'w_scale'] = pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.width_scale
image_resizer_config = pipeline_config.model.ssd.image_resizer
image_resizer = image_resizer_config.WhichOneof('image_resizer_oneof')
self._num_channels = _DEFAULT_NUM_CHANNELS
if image_resizer == 'fixed_shape_resizer':
self._height = image_resizer_config.fixed_shape_resizer.height
self._width = image_resizer_config.fixed_shape_resizer.width
if image_resizer_config.fixed_shape_resizer.convert_to_grayscale:
self._num_channels = 1
else:
raise ValueError(
          'Only fixed_shape_resizer '
'is supported with tflite. Found {}'.format(
image_resizer_config.WhichOneof('image_resizer_oneof')))
def input_shape(self):
"""Returns shape of TFLite model input."""
return [1, self._height, self._width, self._num_channels]
def postprocess_implements_signature(self):
"""Returns tf.implements signature for MLIR legalization of TFLite NMS."""
implements_signature = [
'name: "%s"' % _DETECTION_POSTPROCESS_FUNC,
'attr { key: "max_detections" value { i: %d } }' % self._max_detections,
'attr { key: "max_classes_per_detection" value { i: %d } }' %
_MAX_CLASSES_PER_DETECTION,
'attr { key: "use_regular_nms" value { b: %s } }' %
str(self._use_regular_nms).lower(),
'attr { key: "nms_score_threshold" value { f: %f } }' %
self._nms_score_threshold,
'attr { key: "nms_iou_threshold" value { f: %f } }' %
self._nms_iou_threshold,
'attr { key: "y_scale" value { f: %f } }' %
self._scale_values['y_scale'],
'attr { key: "x_scale" value { f: %f } }' %
self._scale_values['x_scale'],
'attr { key: "h_scale" value { f: %f } }' %
self._scale_values['h_scale'],
'attr { key: "w_scale" value { f: %f } }' %
self._scale_values['w_scale'],
'attr { key: "num_classes" value { i: %d } }' % self._num_classes
]
implements_signature = ' '.join(implements_signature)
return implements_signature
def _get_postprocess_fn(self, num_anchors, num_classes):
# There is no TF equivalent for TFLite's custom post-processing op.
# So we add an 'empty' composite function here, that is legalized to the
# custom op with MLIR.
@tf.function(
experimental_implements=self.postprocess_implements_signature())
# pylint: disable=g-unused-argument,unused-argument
def dummy_post_processing(box_encodings, class_predictions, anchors):
boxes = tf.constant(0.0, dtype=tf.float32, name='boxes')
scores = tf.constant(0.0, dtype=tf.float32, name='scores')
classes = tf.constant(0.0, dtype=tf.float32, name='classes')
num_detections = tf.constant(0.0, dtype=tf.float32, name='num_detections')
return boxes, classes, scores, num_detections
return dummy_post_processing
@tf.function
def inference_fn(self, image):
"""Encapsulates SSD inference for TFLite conversion.
NOTE: The Args & Returns sections below indicate the TFLite model signature,
and not what the TF graph does (since the latter does not include the custom
NMS op used by TFLite)
Args:
      image: a float32 tensor of shape [1, height, width, num_channels]
        containing the (normalized) input image pixel values.
Returns:
num_detections: a float32 scalar denoting number of total detections.
classes: a float32 tensor denoting class ID for each detection.
scores: a float32 tensor denoting score for each detection.
boxes: a float32 tensor denoting coordinates of each detected box.
"""
predicted_tensors = self._model.predict(image, true_image_shapes=None)
# The score conversion occurs before the post-processing custom op
_, score_conversion_fn = post_processing_builder.build(
self._pipeline_config.model.ssd.post_processing)
class_predictions = score_conversion_fn(
predicted_tensors['class_predictions_with_background'])
with tf.name_scope('raw_outputs'):
# 'raw_outputs/box_encodings': a float32 tensor of shape
# [1, num_anchors, 4] containing the encoded box predictions. Note that
# these are raw predictions and no Non-Max suppression is applied on
# them and no decode center size boxes is applied to them.
box_encodings = tf.identity(
predicted_tensors['box_encodings'], name='box_encodings')
# 'raw_outputs/class_predictions': a float32 tensor of shape
# [1, num_anchors, num_classes] containing the class scores for each
# anchor after applying score conversion.
class_predictions = tf.identity(
class_predictions, name='class_predictions')
# 'anchors': a float32 tensor of shape
# [4, num_anchors] containing the anchors as a constant node.
num_anchors, anchors = get_const_center_size_encoded_anchors(
predicted_tensors['anchors'])
anchors = tf.identity(anchors, name='anchors')
    # @tf.function seems to reverse the order of the returned outputs, so
    # reverse them here.
return self._get_postprocess_fn(num_anchors,
self._num_classes)(box_encodings,
class_predictions,
anchors)[::-1]
class CenterNetModule(tf.Module):
"""Inference Module for TFLite-friendly CenterNet models.
The exported CenterNet model includes the preprocessing and postprocessing
logics so the caller should pass in the raw image pixel values. It supports
both object detection and keypoint estimation task.
"""
def __init__(self, pipeline_config, max_detections, include_keypoints,
label_map_path=''):
"""Initialization.
Args:
pipeline_config: The original pipeline_pb2.TrainEvalPipelineConfig
max_detections: Max detections desired from the TFLite model.
include_keypoints: If set true, the output dictionary will include the
keypoint coordinates and keypoint confidence scores.
label_map_path: Path to the label map which is used by CenterNet keypoint
estimation task. If provided, the label_map_path in the configuration
will be replaced by this one.
"""
self._max_detections = max_detections
self._include_keypoints = include_keypoints
self._process_config(pipeline_config)
if include_keypoints and label_map_path:
pipeline_config.model.center_net.keypoint_label_map_path = label_map_path
self._pipeline_config = pipeline_config
self._model = model_builder.build(
self._pipeline_config.model, is_training=False)
def get_model(self):
return self._model
def _process_config(self, pipeline_config):
self._num_classes = pipeline_config.model.center_net.num_classes
center_net_config = pipeline_config.model.center_net
image_resizer_config = center_net_config.image_resizer
image_resizer = image_resizer_config.WhichOneof('image_resizer_oneof')
self._num_channels = _DEFAULT_NUM_CHANNELS
if image_resizer == 'fixed_shape_resizer':
self._height = image_resizer_config.fixed_shape_resizer.height
self._width = image_resizer_config.fixed_shape_resizer.width
if image_resizer_config.fixed_shape_resizer.convert_to_grayscale:
self._num_channels = 1
else:
raise ValueError(
          'Only fixed_shape_resizer '
'is supported with tflite. Found {}'.format(image_resizer))
center_net_config.object_center_params.max_box_predictions = (
self._max_detections)
if not self._include_keypoints:
del center_net_config.keypoint_estimation_task[:]
def input_shape(self):
"""Returns shape of TFLite model input."""
return [1, self._height, self._width, self._num_channels]
@tf.function
def inference_fn(self, image):
"""Encapsulates CenterNet inference for TFLite conversion.
Args:
image: a float32 tensor of shape [1, image_height, image_width, channel]
denoting the image pixel values.
Returns:
A dictionary of predicted tensors:
classes: a float32 tensor with shape [1, max_detections] denoting class
ID for each detection.
scores: a float32 tensor with shape [1, max_detections] denoting score
for each detection.
boxes: a float32 tensor with shape [1, max_detections, 4] denoting
coordinates of each detected box.
      keypoints: a float32 tensor of shape [1, max_detections, num_keypoints, 2]
denoting the predicted keypoint coordinates (normalized in between
0-1). Note that [:, :, :, 0] represents the y coordinates and
[:, :, :, 1] represents the x coordinates.
      keypoint_scores: a float32 tensor of shape [1, max_detections, num_keypoints]
denoting keypoint confidence scores.
"""
image = tf.cast(image, tf.float32)
image, shapes = self._model.preprocess(image)
prediction_dict = self._model.predict(image, None)
detections = self._model.postprocess(
prediction_dict, true_image_shapes=shapes)
field_names = fields.DetectionResultFields
classes_field = field_names.detection_classes
classes = tf.cast(detections[classes_field], tf.float32)
num_detections = tf.cast(detections[field_names.num_detections], tf.float32)
if self._include_keypoints:
model_outputs = (detections[field_names.detection_boxes], classes,
detections[field_names.detection_scores], num_detections,
detections[field_names.detection_keypoints],
detections[field_names.detection_keypoint_scores])
else:
model_outputs = (detections[field_names.detection_boxes], classes,
detections[field_names.detection_scores], num_detections)
    # @tf.function seems to reverse the order of the returned outputs, so
    # reverse them here.
return model_outputs[::-1]
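# Hedged usage sketch for export_tflite_model below (hypothetical paths;
# assumes `pipeline_pb2` and `text_format` imports as in
# export_tflite_graph_tf2.py):
#
#   pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
#   with tf.io.gfile.GFile('/tmp/pipeline.config', 'r') as f:
#     text_format.Merge(f.read(), pipeline_config)
#   export_tflite_model(pipeline_config, '/tmp/trained_ckpt_dir',
#                       '/tmp/tflite_export', max_detections=10,
#                       use_regular_nms=False)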
def export_tflite_model(pipeline_config, trained_checkpoint_dir,
output_directory, max_detections, use_regular_nms,
include_keypoints=False, label_map_path=''):
"""Exports inference SavedModel for TFLite conversion.
  NOTE: Supports SSD and CenterNet meta-architectures for now, and the output
  model will have static-shaped, single-batch input.
This function creates `output_directory` if it does not already exist,
which will hold the intermediate SavedModel that can be used with the TFLite
converter.
Args:
    pipeline_config: pipeline_pb2.TrainEvalPipelineConfig proto.
    trained_checkpoint_dir: Path to the directory holding the trained
      checkpoint.
output_directory: Path to write outputs.
max_detections: Max detections desired from the TFLite model.
use_regular_nms: If True, TFLite model uses the (slower) multi-class NMS.
Note that this argument is only used by the SSD model.
include_keypoints: Decides whether to also output the keypoint predictions.
Note that this argument is only used by the CenterNet model.
label_map_path: Path to the label map which is used by CenterNet keypoint
estimation task. If provided, the label_map_path in the configuration will
be replaced by this one.
Raises:
ValueError: if pipeline is invalid.
"""
output_saved_model_directory = os.path.join(output_directory, 'saved_model')
# Build the underlying model using pipeline config.
# TODO(b/162842801): Add support for other architectures.
if pipeline_config.model.WhichOneof('model') == 'ssd':
detection_model = model_builder.build(
pipeline_config.model, is_training=False)
ckpt = tf.train.Checkpoint(model=detection_model)
# The module helps build a TF SavedModel appropriate for TFLite conversion.
detection_module = SSDModule(pipeline_config, detection_model,
max_detections, use_regular_nms)
elif pipeline_config.model.WhichOneof('model') == 'center_net':
detection_module = CenterNetModule(
pipeline_config, max_detections, include_keypoints,
label_map_path=label_map_path)
ckpt = tf.train.Checkpoint(model=detection_module.get_model())
else:
raise ValueError('Only ssd or center_net models are supported in tflite. '
'Found {} in config'.format(
pipeline_config.model.WhichOneof('model')))
manager = tf.train.CheckpointManager(
ckpt, trained_checkpoint_dir, max_to_keep=1)
status = ckpt.restore(manager.latest_checkpoint).expect_partial()
# Getting the concrete function traces the graph and forces variables to
# be constructed; only after this can we save the saved model.
status.assert_existing_objects_matched()
concrete_function = detection_module.inference_fn.get_concrete_function(
tf.TensorSpec(
shape=detection_module.input_shape(), dtype=tf.float32, name='input'))
status.assert_existing_objects_matched()
# Export SavedModel.
tf.saved_model.save(
detection_module,
output_saved_model_directory,
signatures=concrete_function) | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/export_tflite_graph_lib_tf2.py | export_tflite_graph_lib_tf2.py |
"""Binary to run train and evaluation on object detection model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl import flags
import tensorflow.compat.v1 as tf
from object_detection import model_lib
flags.DEFINE_string(
'model_dir', None, 'Path to output model directory '
'where event and checkpoint files will be written.')
flags.DEFINE_string('pipeline_config_path', None, 'Path to pipeline config '
'file.')
flags.DEFINE_integer('num_train_steps', None, 'Number of train steps.')
flags.DEFINE_boolean('eval_training_data', False,
'If training data should be evaluated for this job. Note '
                     'that one can only use this in eval-only mode, and '
'`checkpoint_dir` must be supplied.')
flags.DEFINE_integer('sample_1_of_n_eval_examples', 1, 'Will sample one of '
'every n eval input examples, where n is provided.')
flags.DEFINE_integer('sample_1_of_n_eval_on_train_examples', 5, 'Will sample '
'one of every n train input examples for evaluation, '
'where n is provided. This is only used if '
'`eval_training_data` is True.')
flags.DEFINE_string(
'checkpoint_dir', None, 'Path to directory holding a checkpoint. If '
'`checkpoint_dir` is provided, this binary operates in eval-only mode, '
'writing resulting metrics to `model_dir`.')
flags.DEFINE_boolean(
'run_once', False, 'If running in eval-only mode, whether to run just '
'one round of eval vs running continuously (default).'
)
flags.DEFINE_integer(
'max_eval_retries', 0, 'If running continuous eval, the maximum number of '
'retries upon encountering tf.errors.InvalidArgumentError. If negative, '
'will always retry the evaluation.'
)
FLAGS = flags.FLAGS
def main(unused_argv):
flags.mark_flag_as_required('model_dir')
flags.mark_flag_as_required('pipeline_config_path')
config = tf.estimator.RunConfig(model_dir=FLAGS.model_dir)
train_and_eval_dict = model_lib.create_estimator_and_inputs(
run_config=config,
pipeline_config_path=FLAGS.pipeline_config_path,
train_steps=FLAGS.num_train_steps,
sample_1_of_n_eval_examples=FLAGS.sample_1_of_n_eval_examples,
sample_1_of_n_eval_on_train_examples=(
FLAGS.sample_1_of_n_eval_on_train_examples))
estimator = train_and_eval_dict['estimator']
train_input_fn = train_and_eval_dict['train_input_fn']
eval_input_fns = train_and_eval_dict['eval_input_fns']
eval_on_train_input_fn = train_and_eval_dict['eval_on_train_input_fn']
predict_input_fn = train_and_eval_dict['predict_input_fn']
train_steps = train_and_eval_dict['train_steps']
if FLAGS.checkpoint_dir:
if FLAGS.eval_training_data:
name = 'training_data'
input_fn = eval_on_train_input_fn
else:
name = 'validation_data'
# The first eval input will be evaluated.
input_fn = eval_input_fns[0]
if FLAGS.run_once:
estimator.evaluate(input_fn,
steps=None,
checkpoint_path=tf.train.latest_checkpoint(
FLAGS.checkpoint_dir))
else:
model_lib.continuous_eval(estimator, FLAGS.checkpoint_dir, input_fn,
train_steps, name, FLAGS.max_eval_retries)
else:
train_spec, eval_specs = model_lib.create_train_and_eval_specs(
train_input_fn,
eval_input_fns,
eval_on_train_input_fn,
predict_input_fn,
train_steps,
eval_on_train_data=False)
# Currently only a single Eval Spec is allowed.
tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
if __name__ == '__main__':
tf.app.run() | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/model_main.py | model_main.py |
"""Functions to export object detection inference graph."""
import ast
import os
import tensorflow.compat.v2 as tf
from object_detection.builders import model_builder
from object_detection.core import standard_fields as fields
from object_detection.data_decoders import tf_example_decoder
from object_detection.utils import config_util
INPUT_BUILDER_UTIL_MAP = {
'model_build': model_builder.build,
}
def _decode_image(encoded_image_string_tensor):
image_tensor = tf.image.decode_image(encoded_image_string_tensor,
channels=3)
image_tensor.set_shape((None, None, 3))
return image_tensor
def _decode_tf_example(tf_example_string_tensor):
tensor_dict = tf_example_decoder.TfExampleDecoder().decode(
tf_example_string_tensor)
image_tensor = tensor_dict[fields.InputDataFields.image]
return image_tensor
def _combine_side_inputs(side_input_shapes='',
side_input_types='',
side_input_names=''):
"""Zips the side inputs together.
Args:
side_input_shapes: forward-slash-separated list of comma-separated lists
describing input shapes.
side_input_types: comma-separated list of the types of the inputs.
side_input_names: comma-separated list of the names of the inputs.
Returns:
a zipped list of side input tuples.
"""
side_input_shapes = [
ast.literal_eval('[' + x + ']') for x in side_input_shapes.split('/')
]
side_input_types = eval('[' + side_input_types + ']') # pylint: disable=eval-used
side_input_names = side_input_names.split(',')
return zip(side_input_shapes, side_input_types, side_input_names)
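# Worked example: side_input_shapes='1,2/1', side_input_types=
# 'tf.float32,tf.int32', side_input_names='a,b' zips to
# [([1, 2], tf.float32, 'a'), ([1], tf.int32, 'b')].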
class DetectionInferenceModule(tf.Module):
"""Detection Inference Module."""
def __init__(self, detection_model,
use_side_inputs=False,
zipped_side_inputs=None):
"""Initializes a module for detection.
Args:
detection_model: the detection model to use for inference.
use_side_inputs: whether to use side inputs.
zipped_side_inputs: the zipped side inputs.
"""
self._model = detection_model
def _get_side_input_signature(self, zipped_side_inputs):
sig = []
side_input_names = []
for info in zipped_side_inputs:
sig.append(tf.TensorSpec(shape=info[0],
dtype=info[1],
name=info[2]))
side_input_names.append(info[2])
return sig
def _get_side_names_from_zip(self, zipped_side_inputs):
return [side[2] for side in zipped_side_inputs]
def _preprocess_input(self, batch_input, decode_fn):
    # Input preprocessing happens on the CPU. We don't need explicit device
    # placement, as it is automatically handled by TF.
def _decode_and_preprocess(single_input):
image = decode_fn(single_input)
image = tf.cast(image, tf.float32)
image, true_shape = self._model.preprocess(image[tf.newaxis, :, :, :])
return image[0], true_shape[0]
images, true_shapes = tf.map_fn(
_decode_and_preprocess,
elems=batch_input,
parallel_iterations=32,
back_prop=False,
fn_output_signature=(tf.float32, tf.int32))
return images, true_shapes
def _run_inference_on_images(self, images, true_shapes, **kwargs):
"""Cast image to float and run inference.
Args:
images: float32 Tensor of shape [None, None, None, 3].
true_shapes: int32 Tensor of form [batch, 3]
**kwargs: additional keyword arguments.
Returns:
Tensor dictionary holding detections.
"""
label_id_offset = 1
prediction_dict = self._model.predict(images, true_shapes, **kwargs)
detections = self._model.postprocess(prediction_dict, true_shapes)
classes_field = fields.DetectionResultFields.detection_classes
detections[classes_field] = (
tf.cast(detections[classes_field], tf.float32) + label_id_offset)
for key, val in detections.items():
detections[key] = tf.cast(val, tf.float32)
return detections
class DetectionFromImageModule(DetectionInferenceModule):
"""Detection Inference Module for image inputs."""
def __init__(self, detection_model,
use_side_inputs=False,
zipped_side_inputs=None):
"""Initializes a module for detection.
Args:
detection_model: the detection model to use for inference.
use_side_inputs: whether to use side inputs.
zipped_side_inputs: the zipped side inputs.
"""
if zipped_side_inputs is None:
zipped_side_inputs = []
sig = [tf.TensorSpec(shape=[1, None, None, 3],
dtype=tf.uint8,
name='input_tensor')]
if use_side_inputs:
sig.extend(self._get_side_input_signature(zipped_side_inputs))
self._side_input_names = self._get_side_names_from_zip(zipped_side_inputs)
def call_func(input_tensor, *side_inputs):
kwargs = dict(zip(self._side_input_names, side_inputs))
images, true_shapes = self._preprocess_input(input_tensor, lambda x: x)
return self._run_inference_on_images(images, true_shapes, **kwargs)
self.__call__ = tf.function(call_func, input_signature=sig)
# TODO(kaushikshiv): Check if omitting the signature also works.
super(DetectionFromImageModule, self).__init__(detection_model,
use_side_inputs,
zipped_side_inputs)
def get_true_shapes(input_tensor):
input_shape = tf.shape(input_tensor)
batch = input_shape[0]
image_shape = input_shape[1:]
true_shapes = tf.tile(image_shape[tf.newaxis, :], [batch, 1])
return true_shapes
class DetectionFromFloatImageModule(DetectionInferenceModule):
"""Detection Inference Module for float image inputs."""
@tf.function(
input_signature=[
tf.TensorSpec(shape=[None, None, None, 3], dtype=tf.float32)])
def __call__(self, input_tensor):
images, true_shapes = self._preprocess_input(input_tensor, lambda x: x)
return self._run_inference_on_images(images,
true_shapes)
class DetectionFromEncodedImageModule(DetectionInferenceModule):
"""Detection Inference Module for encoded image string inputs."""
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def __call__(self, input_tensor):
images, true_shapes = self._preprocess_input(input_tensor, _decode_image)
return self._run_inference_on_images(images, true_shapes)
class DetectionFromTFExampleModule(DetectionInferenceModule):
"""Detection Inference Module for TF.Example inputs."""
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def __call__(self, input_tensor):
images, true_shapes = self._preprocess_input(input_tensor,
_decode_tf_example)
return self._run_inference_on_images(images, true_shapes)
def export_inference_graph(input_type,
pipeline_config,
trained_checkpoint_dir,
output_directory,
use_side_inputs=False,
side_input_shapes='',
side_input_types='',
side_input_names=''):
"""Exports inference graph for the model specified in the pipeline config.
This function creates `output_directory` if it does not already exist,
which will hold a copy of the pipeline config with filename `pipeline.config`,
and two subdirectories named `checkpoint` and `saved_model`
(containing the exported checkpoint and SavedModel respectively).
Args:
input_type: Type of input for the graph. Can be one of ['image_tensor',
'encoded_image_string_tensor', 'tf_example'].
    pipeline_config: pipeline_pb2.TrainEvalPipelineConfig proto.
    trained_checkpoint_dir: Path to the directory containing the trained
      checkpoint.
output_directory: Path to write outputs.
use_side_inputs: boolean that determines whether side inputs should be
included in the input signature.
side_input_shapes: forward-slash-separated list of comma-separated lists
describing input shapes.
side_input_types: comma-separated list of the types of the inputs.
side_input_names: comma-separated list of the names of the inputs.
Raises:
ValueError: if input_type is invalid.
"""
output_checkpoint_directory = os.path.join(output_directory, 'checkpoint')
output_saved_model_directory = os.path.join(output_directory, 'saved_model')
detection_model = INPUT_BUILDER_UTIL_MAP['model_build'](
pipeline_config.model, is_training=False)
ckpt = tf.train.Checkpoint(
model=detection_model)
manager = tf.train.CheckpointManager(
ckpt, trained_checkpoint_dir, max_to_keep=1)
status = ckpt.restore(manager.latest_checkpoint).expect_partial()
if input_type not in DETECTION_MODULE_MAP:
raise ValueError('Unrecognized `input_type`')
if use_side_inputs and input_type != 'image_tensor':
raise ValueError('Side inputs supported for image_tensor input type only.')
zipped_side_inputs = []
if use_side_inputs:
zipped_side_inputs = _combine_side_inputs(side_input_shapes,
side_input_types,
side_input_names)
detection_module = DETECTION_MODULE_MAP[input_type](detection_model,
use_side_inputs,
list(zipped_side_inputs))
# Getting the concrete function traces the graph and forces variables to
# be constructed --- only after this can we save the checkpoint and
# saved model.
concrete_function = detection_module.__call__.get_concrete_function()
status.assert_existing_objects_matched()
exported_checkpoint_manager = tf.train.CheckpointManager(
ckpt, output_checkpoint_directory, max_to_keep=1)
exported_checkpoint_manager.save(checkpoint_number=0)
tf.saved_model.save(detection_module,
output_saved_model_directory,
signatures=concrete_function)
config_util.save_pipeline_config(pipeline_config, output_directory)
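
# --- Added example (hedged sketch, not part of the original file) ---
# A minimal driver for export_inference_graph above. All paths are
# placeholders; a real run reads an actual pipeline.config and checkpoint.
def _example_export_inference_graph():
  """Example only: export a detection SavedModel for image_tensor inputs."""
  from google.protobuf import text_format
  from object_detection.protos import pipeline_pb2
  pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
  with tf.io.gfile.GFile('path/to/pipeline.config', 'r') as f:
    text_format.Parse(f.read(), pipeline_config)
  export_inference_graph(
      input_type='image_tensor',
      pipeline_config=pipeline_config,
      trained_checkpoint_dir='path/to/checkpoint',
      output_directory='path/to/exported_model')
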
class DetectionFromImageAndBoxModule(DetectionInferenceModule):
"""Detection Inference Module for image with bounding box inputs.
The saved model will require two inputs (image and normalized boxes) and run
per-box mask prediction. To be compatible with this exporter, the detection
  model has to implement a method called predict_masks_from_boxes(
  prediction_dict, true_image_shapes, provided_boxes, **params), where
  - prediction_dict is a dict returned by the predict method.
- true_image_shapes is a tensor of size [batch_size, 3], containing the
true shape of each image in case it is padded.
- provided_boxes is a [batch_size, num_boxes, 4] size tensor containing
boxes specified in normalized coordinates.
"""
def __init__(self,
detection_model,
use_side_inputs=False,
zipped_side_inputs=None):
"""Initializes a module for detection.
Args:
detection_model: the detection model to use for inference.
use_side_inputs: whether to use side inputs.
zipped_side_inputs: the zipped side inputs.
"""
assert hasattr(detection_model, 'predict_masks_from_boxes')
super(DetectionFromImageAndBoxModule,
self).__init__(detection_model, use_side_inputs, zipped_side_inputs)
def _run_segmentation_on_images(self, image, boxes, **kwargs):
"""Run segmentation on images with provided boxes.
Args:
image: uint8 Tensor of shape [1, None, None, 3].
boxes: float32 tensor of shape [1, None, 4] containing normalized box
coordinates.
**kwargs: additional keyword arguments.
Returns:
Tensor dictionary holding detections (including masks).
"""
label_id_offset = 1
image = tf.cast(image, tf.float32)
image, shapes = self._model.preprocess(image)
prediction_dict = self._model.predict(image, shapes, **kwargs)
detections = self._model.predict_masks_from_boxes(prediction_dict, shapes,
boxes)
classes_field = fields.DetectionResultFields.detection_classes
detections[classes_field] = (
tf.cast(detections[classes_field], tf.float32) + label_id_offset)
for key, val in detections.items():
detections[key] = tf.cast(val, tf.float32)
return detections
@tf.function(input_signature=[
tf.TensorSpec(shape=[1, None, None, 3], dtype=tf.uint8),
tf.TensorSpec(shape=[1, None, 4], dtype=tf.float32)
])
def __call__(self, input_tensor, boxes):
return self._run_segmentation_on_images(input_tensor, boxes)
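
# --- Added example (hedged sketch, not part of the original file) ---
# Loads a SavedModel produced by export_inference_graph above (input_type
# 'image_tensor') and runs it on a dummy image; the path and image size are
# placeholders.
def _example_run_exported_model():
  """Example only: run an exported detection SavedModel."""
  import numpy as np
  detect_fn = tf.saved_model.load('path/to/exported_model/saved_model')
  image = np.zeros((1, 320, 320, 3), dtype=np.uint8)  # [1, height, width, 3]
  detections = detect_fn(tf.constant(image))
  return detections['detection_boxes'], detections['detection_scores']
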
DETECTION_MODULE_MAP = {
'image_tensor': DetectionFromImageModule,
'encoded_image_string_tensor':
DetectionFromEncodedImageModule,
'tf_example': DetectionFromTFExampleModule,
'float_image_tensor': DetectionFromFloatImageModule,
'image_and_boxes_tensor': DetectionFromImageAndBoxModule,
}

# === End of object_detection/exporter_lib_v2.py ===
r"""Exports TF2 detection SavedModel for conversion to TensorFlow Lite.
Link to the TF2 Detection Zoo:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
The output folder will contain an intermediate SavedModel that can be used with
the TfLite converter.
NOTE: Only SSD and CenterNet meta-architectures are supported for now.
One input:
  image: a float32 tensor of shape [1, height, width, 3] containing the
*normalized* input image.
NOTE: See the `preprocess` function defined in the feature extractor class
in the object_detection/models directory.
Four Outputs:
detection_boxes: a float32 tensor of shape [1, num_boxes, 4] with box
locations
detection_classes: a float32 tensor of shape [1, num_boxes]
with class indices
detection_scores: a float32 tensor of shape [1, num_boxes]
with class scores
num_boxes: a float32 tensor of size 1 containing the number of detected boxes
Example Usage:
--------------
python object_detection/export_tflite_graph_tf2.py \
--pipeline_config_path path/to/ssd_model/pipeline.config \
--trained_checkpoint_dir path/to/ssd_model/checkpoint \
--output_directory path/to/exported_model_directory
The expected output SavedModel would be in the directory
path/to/exported_model_directory (which is created if it does not exist).
Config overrides (see the `config_override` flag) are text protobufs
(also of type pipeline_pb2.TrainEvalPipelineConfig) which are used to override
certain fields in the provided pipeline_config_path. These are useful for
making small changes to the inference graph that differ from the training or
eval config.
Example Usage 1 (in which we change the NMS iou_threshold to be 0.5 and
NMS score_threshold to be 0.0):
python object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path path/to/ssd_model/pipeline.config \
    --trained_checkpoint_dir path/to/ssd_model/checkpoint \
    --output_directory path/to/exported_model_directory \
--config_override " \
model{ \
ssd{ \
post_processing { \
batch_non_max_suppression { \
score_threshold: 0.0 \
iou_threshold: 0.5 \
} \
} \
} \
} \
"
Example Usage 2 (export CenterNet model for keypoint estimation task with fixed
shape resizer and customized input resolution):
python object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path path/to/centernet_model/pipeline.config \
    --trained_checkpoint_dir path/to/centernet_model/checkpoint \
    --output_directory path/to/exported_model_directory \
--keypoint_label_map_path path/to/label_map.txt \
--max_detections 10 \
--centernet_include_keypoints true \
--config_override " \
model{ \
center_net { \
image_resizer { \
fixed_shape_resizer { \
height: 320 \
width: 320 \
} \
} \
} \
}" \
"""
from absl import app
from absl import flags
import tensorflow.compat.v2 as tf
from google.protobuf import text_format
from object_detection import export_tflite_graph_lib_tf2
from object_detection.protos import pipeline_pb2
tf.enable_v2_behavior()
FLAGS = flags.FLAGS
flags.DEFINE_string(
'pipeline_config_path', None,
'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
'file.')
flags.DEFINE_string('trained_checkpoint_dir', None,
'Path to trained checkpoint directory')
flags.DEFINE_string('output_directory', None, 'Path to write outputs.')
flags.DEFINE_string(
'config_override', '', 'pipeline_pb2.TrainEvalPipelineConfig '
'text proto to override pipeline_config_path.')
flags.DEFINE_integer('max_detections', 10,
'Maximum number of detections (boxes) to return.')
# SSD-specific flags
flags.DEFINE_bool(
'ssd_use_regular_nms', False,
'Flag to set postprocessing op to use Regular NMS instead of Fast NMS '
'(Default false).')
# CenterNet-specific flags
flags.DEFINE_bool(
'centernet_include_keypoints', False,
'Whether to export the predicted keypoint tensors. Only CenterNet model'
' supports this flag.'
)
flags.DEFINE_string(
'keypoint_label_map_path', None,
'Path of the label map used by CenterNet keypoint estimation task. If'
' provided, the label map path in the pipeline config will be replaced by'
' this one. Note that it is only used when exporting CenterNet model for'
' keypoint estimation task.'
)
def main(argv):
del argv # Unused.
flags.mark_flag_as_required('pipeline_config_path')
flags.mark_flag_as_required('trained_checkpoint_dir')
flags.mark_flag_as_required('output_directory')
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(FLAGS.pipeline_config_path, 'r') as f:
text_format.Parse(f.read(), pipeline_config)
override_config = pipeline_pb2.TrainEvalPipelineConfig()
text_format.Parse(FLAGS.config_override, override_config)
pipeline_config.MergeFrom(override_config)
export_tflite_graph_lib_tf2.export_tflite_model(
pipeline_config, FLAGS.trained_checkpoint_dir, FLAGS.output_directory,
FLAGS.max_detections, FLAGS.ssd_use_regular_nms,
FLAGS.centernet_include_keypoints, FLAGS.keypoint_label_map_path)
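
# --- Added example (hedged sketch, not part of the original file) ---
# After this script runs, the intermediate SavedModel can be converted to a
# .tflite flatbuffer with the public TFLiteConverter API; the paths are
# placeholders.
def _example_convert_saved_model_to_tflite():
  """Example only: convert the exported SavedModel to a TFLite flatbuffer."""
  converter = tf.lite.TFLiteConverter.from_saved_model(
      'path/to/exported_model_directory/saved_model')
  tflite_model = converter.convert()
  with tf.io.gfile.GFile('path/to/model.tflite', 'wb') as f:
    f.write(tflite_model)
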
if __name__ == '__main__':
  app.run(main)

# === End of object_detection/export_tflite_graph_tf2.py ===
import tensorflow.compat.v1 as tf
from object_detection.core import matcher
from object_detection.utils import shape_utils
class ArgMaxMatcher(matcher.Matcher):
"""Matcher based on highest value.
This class computes matches from a similarity matrix. Each column is matched
to a single row.
To support object detection target assignment this class enables setting both
matched_threshold (upper threshold) and unmatched_threshold (lower thresholds)
defining three categories of similarity which define whether examples are
positive, negative, or ignored:
(1) similarity >= matched_threshold: Highest similarity. Matched/Positive!
(2) matched_threshold > similarity >= unmatched_threshold: Medium similarity.
Depending on negatives_lower_than_unmatched, this is either
Unmatched/Negative OR Ignore.
(3) unmatched_threshold > similarity: Lowest similarity. Depending on flag
negatives_lower_than_unmatched, either Unmatched/Negative OR Ignore.
For ignored matches this class sets the values in the Match object to -2.
"""
def __init__(self,
matched_threshold,
unmatched_threshold=None,
negatives_lower_than_unmatched=True,
force_match_for_each_row=False,
use_matmul_gather=False):
"""Construct ArgMaxMatcher.
Args:
matched_threshold: Threshold for positive matches. Positive if
sim >= matched_threshold, where sim is the maximum value of the
similarity matrix for a given column. Set to None for no threshold.
unmatched_threshold: Threshold for negative matches. Negative if
sim < unmatched_threshold. Defaults to matched_threshold
when set to None.
negatives_lower_than_unmatched: Boolean which defaults to True. If True
then negative matches are the ones below the unmatched_threshold,
        whereas ignored matches are in between the matched and unmatched
threshold. If False, then negative matches are in between the matched
and unmatched threshold, and everything lower than unmatched is ignored.
force_match_for_each_row: If True, ensures that each row is matched to
at least one column (which is not guaranteed otherwise if the
matched_threshold is high). Defaults to False. See
argmax_matcher_test.testMatcherForceMatch() for an example.
use_matmul_gather: Force constructed match objects to use matrix
multiplication based gather instead of standard tf.gather.
(Default: False).
Raises:
ValueError: if unmatched_threshold is set but matched_threshold is not set
or if unmatched_threshold > matched_threshold.
"""
super(ArgMaxMatcher, self).__init__(use_matmul_gather=use_matmul_gather)
if (matched_threshold is None) and (unmatched_threshold is not None):
      raise ValueError('Need to also define matched_threshold when '
                       'unmatched_threshold is defined')
self._matched_threshold = matched_threshold
if unmatched_threshold is None:
self._unmatched_threshold = matched_threshold
else:
if unmatched_threshold > matched_threshold:
        raise ValueError('unmatched_threshold needs to be smaller or equal '
                         'to matched_threshold')
self._unmatched_threshold = unmatched_threshold
if not negatives_lower_than_unmatched:
if self._unmatched_threshold == self._matched_threshold:
raise ValueError('When negatives are in between matched and '
'unmatched thresholds, these cannot be of equal '
'value. matched: {}, unmatched: {}'.format(
self._matched_threshold,
self._unmatched_threshold))
self._force_match_for_each_row = force_match_for_each_row
self._negatives_lower_than_unmatched = negatives_lower_than_unmatched
def _match(self, similarity_matrix, valid_rows):
"""Tries to match each column of the similarity matrix to a row.
Args:
similarity_matrix: tensor of shape [N, M] representing any similarity
metric.
valid_rows: a boolean tensor of shape [N] indicating valid rows.
Returns:
Match object with corresponding matches for each of M columns.
"""
def _match_when_rows_are_empty():
"""Performs matching when the rows of similarity matrix are empty.
When the rows are empty, all detections are false positives. So we return
a tensor of -1's to indicate that the columns do not match to any rows.
Returns:
matches: int32 tensor indicating the row each column matches to.
"""
similarity_matrix_shape = shape_utils.combined_static_and_dynamic_shape(
similarity_matrix)
return -1 * tf.ones([similarity_matrix_shape[1]], dtype=tf.int32)
def _match_when_rows_are_non_empty():
"""Performs matching when the rows of similarity matrix are non empty.
Returns:
matches: int32 tensor indicating the row each column matches to.
"""
# Matches for each column
matches = tf.argmax(similarity_matrix, 0, output_type=tf.int32)
# Deal with matched and unmatched threshold
if self._matched_threshold is not None:
        # Get boolean masks of ignored and unmatched columns.
matched_vals = tf.reduce_max(similarity_matrix, 0)
below_unmatched_threshold = tf.greater(self._unmatched_threshold,
matched_vals)
between_thresholds = tf.logical_and(
tf.greater_equal(matched_vals, self._unmatched_threshold),
tf.greater(self._matched_threshold, matched_vals))
if self._negatives_lower_than_unmatched:
matches = self._set_values_using_indicator(matches,
below_unmatched_threshold,
-1)
matches = self._set_values_using_indicator(matches,
between_thresholds,
-2)
else:
matches = self._set_values_using_indicator(matches,
below_unmatched_threshold,
-2)
matches = self._set_values_using_indicator(matches,
between_thresholds,
-1)
if self._force_match_for_each_row:
similarity_matrix_shape = shape_utils.combined_static_and_dynamic_shape(
similarity_matrix)
force_match_column_ids = tf.argmax(similarity_matrix, 1,
output_type=tf.int32)
force_match_column_indicators = (
tf.one_hot(
force_match_column_ids, depth=similarity_matrix_shape[1]) *
tf.cast(tf.expand_dims(valid_rows, axis=-1), dtype=tf.float32))
force_match_row_ids = tf.argmax(force_match_column_indicators, 0,
output_type=tf.int32)
force_match_column_mask = tf.cast(
tf.reduce_max(force_match_column_indicators, 0), tf.bool)
final_matches = tf.where(force_match_column_mask,
force_match_row_ids, matches)
return final_matches
else:
return matches
if similarity_matrix.shape.is_fully_defined():
if shape_utils.get_dim_as_int(similarity_matrix.shape[0]) == 0:
return _match_when_rows_are_empty()
else:
return _match_when_rows_are_non_empty()
else:
return tf.cond(
tf.greater(tf.shape(similarity_matrix)[0], 0),
_match_when_rows_are_non_empty, _match_when_rows_are_empty)
def _set_values_using_indicator(self, x, indicator, val):
"""Set the indicated fields of x to val.
Args:
x: tensor.
indicator: boolean with same shape as x.
val: scalar with value to set.
Returns:
modified tensor.
"""
indicator = tf.cast(indicator, x.dtype)
    return tf.add(tf.multiply(x, 1 - indicator), val * indicator)

# === End of object_detection/matchers/argmax_matcher.py ===
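
# --- Added example (hedged sketch, not part of the original file) ---
# Illustrates the three similarity bands described in the ArgMaxMatcher
# docstring above, via Matcher.match / Match.match_results from
# object_detection.core.matcher. Assumes eager execution; under graph-mode
# TF1 the returned tensor would need a session run.
def _example_argmax_matching():
  """Example only: match 3 columns against 2 rows."""
  similarity = tf.constant([[0.9, 0.4, 0.1],
                            [0.2, 0.3, 0.0]], dtype=tf.float32)
  argmax_matcher = ArgMaxMatcher(matched_threshold=0.5,
                                 unmatched_threshold=0.3)
  match = argmax_matcher.match(similarity,
                               valid_rows=tf.constant([True, True]))
  # Column maxima are 0.9, 0.4 and 0.1, so with the default
  # negatives_lower_than_unmatched=True the expected match_results are
  # [0, -2, -1]: matched to row 0, ignored, negative.
  return match.match_results
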
import tensorflow.compat.v1 as tf
from tensorflow.contrib.image.python.ops import image_ops
from object_detection.core import matcher
class GreedyBipartiteMatcher(matcher.Matcher):
"""Wraps a Tensorflow greedy bipartite matcher."""
def __init__(self, use_matmul_gather=False):
"""Constructs a Matcher.
Args:
use_matmul_gather: Force constructed match objects to use matrix
multiplication based gather instead of standard tf.gather.
(Default: False).
"""
super(GreedyBipartiteMatcher, self).__init__(
use_matmul_gather=use_matmul_gather)
def _match(self, similarity_matrix, valid_rows):
"""Bipartite matches a collection rows and columns. A greedy bi-partite.
TODO(rathodv): Add num_valid_columns options to match only that many columns
with all the rows.
Args:
similarity_matrix: Float tensor of shape [N, M] with pairwise similarity
where higher values mean more similar.
valid_rows: A boolean tensor of shape [N] indicating the rows that are
valid.
Returns:
match_results: int32 tensor of shape [M] with match_results[i]=-1
meaning that column i is not matched and otherwise that it is matched to
row match_results[i].
"""
valid_row_sim_matrix = tf.gather(similarity_matrix,
tf.squeeze(tf.where(valid_rows), axis=-1))
invalid_row_sim_matrix = tf.gather(
similarity_matrix,
tf.squeeze(tf.where(tf.logical_not(valid_rows)), axis=-1))
similarity_matrix = tf.concat(
[valid_row_sim_matrix, invalid_row_sim_matrix], axis=0)
    # Convert the similarity matrix to a distance matrix, since the
    # bipartite_match op finds minimum-distance matches.
distance_matrix = -1 * similarity_matrix
num_valid_rows = tf.reduce_sum(tf.cast(valid_rows, dtype=tf.float32))
_, match_results = image_ops.bipartite_match(
distance_matrix, num_valid_rows=num_valid_rows)
match_results = tf.reshape(match_results, [-1])
match_results = tf.cast(match_results, tf.int32)
    return match_results

# === End of object_detection/matchers/bipartite_matcher.py ===
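
# --- Added example (hedged sketch, not part of the original file) ---
# GreedyBipartiteMatcher above delegates to the TF1-only contrib
# bipartite_match op. To illustrate the similarity-to-cost conversion done in
# _match, this sketch solves the same assignment with scipy's Hungarian
# solver (optimal rather than greedy, so it is an analogy, not the same op).
def _example_bipartite_assignment():
  """Example only: assign 2 rows to columns on a toy similarity matrix."""
  import numpy as np
  from scipy.optimize import linear_sum_assignment
  similarity = np.array([[0.9, 0.2, 0.4],
                         [0.1, 0.8, 0.3]])
  # Negate so that maximizing similarity becomes minimizing cost.
  row_ind, col_ind = linear_sum_assignment(-similarity)
  return list(zip(row_ind, col_ind))  # -> [(0, 0), (1, 1)]
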
r"""Utilities for creating TFRecords of TF examples for the Open Images dataset.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import six
import tensorflow.compat.v1 as tf
from object_detection.core import standard_fields
from object_detection.utils import dataset_util
def tf_example_from_annotations_data_frame(annotations_data_frame, label_map,
encoded_image):
"""Populates a TF Example message with image annotations from a data frame.
Args:
annotations_data_frame: Data frame containing the annotations for a single
image.
label_map: String to integer label map.
encoded_image: The encoded image string
Returns:
The populated TF Example, if the label of at least one object is present in
label_map. Otherwise, returns None.
"""
filtered_data_frame = annotations_data_frame[
annotations_data_frame.LabelName.isin(label_map)]
filtered_data_frame_boxes = filtered_data_frame[
~filtered_data_frame.YMin.isnull()]
filtered_data_frame_labels = filtered_data_frame[
filtered_data_frame.YMin.isnull()]
image_id = annotations_data_frame.ImageID.iloc[0]
feature_map = {
standard_fields.TfExampleFields.object_bbox_ymin:
dataset_util.float_list_feature(
filtered_data_frame_boxes.YMin.to_numpy()),
standard_fields.TfExampleFields.object_bbox_xmin:
dataset_util.float_list_feature(
filtered_data_frame_boxes.XMin.to_numpy()),
standard_fields.TfExampleFields.object_bbox_ymax:
dataset_util.float_list_feature(
filtered_data_frame_boxes.YMax.to_numpy()),
standard_fields.TfExampleFields.object_bbox_xmax:
dataset_util.float_list_feature(
filtered_data_frame_boxes.XMax.to_numpy()),
standard_fields.TfExampleFields.object_class_text:
dataset_util.bytes_list_feature([
six.ensure_binary(label_text)
for label_text in filtered_data_frame_boxes.LabelName.to_numpy()
]),
standard_fields.TfExampleFields.object_class_label:
dataset_util.int64_list_feature(
filtered_data_frame_boxes.LabelName.map(
lambda x: label_map[x]).to_numpy()),
standard_fields.TfExampleFields.filename:
dataset_util.bytes_feature(
six.ensure_binary('{}.jpg'.format(image_id))),
standard_fields.TfExampleFields.source_id:
dataset_util.bytes_feature(six.ensure_binary(image_id)),
standard_fields.TfExampleFields.image_encoded:
dataset_util.bytes_feature(six.ensure_binary(encoded_image)),
}
if 'IsGroupOf' in filtered_data_frame.columns:
feature_map[standard_fields.TfExampleFields.
object_group_of] = dataset_util.int64_list_feature(
filtered_data_frame_boxes.IsGroupOf.to_numpy().astype(int))
if 'IsOccluded' in filtered_data_frame.columns:
feature_map[standard_fields.TfExampleFields.
object_occluded] = dataset_util.int64_list_feature(
filtered_data_frame_boxes.IsOccluded.to_numpy().astype(
int))
if 'IsTruncated' in filtered_data_frame.columns:
feature_map[standard_fields.TfExampleFields.
object_truncated] = dataset_util.int64_list_feature(
filtered_data_frame_boxes.IsTruncated.to_numpy().astype(
int))
if 'IsDepiction' in filtered_data_frame.columns:
feature_map[standard_fields.TfExampleFields.
object_depiction] = dataset_util.int64_list_feature(
filtered_data_frame_boxes.IsDepiction.to_numpy().astype(
int))
if 'ConfidenceImageLabel' in filtered_data_frame_labels.columns:
feature_map[standard_fields.TfExampleFields.
image_class_label] = dataset_util.int64_list_feature(
filtered_data_frame_labels.LabelName.map(
lambda x: label_map[x]).to_numpy())
    feature_map[standard_fields.TfExampleFields
                .image_class_text] = dataset_util.bytes_list_feature([
                    six.ensure_binary(label_text) for label_text in
                    filtered_data_frame_labels.LabelName.to_numpy()
                ])
  return tf.train.Example(features=tf.train.Features(feature=feature_map))

# === End of object_detection/dataset_tools/oid_tfrecord_creation.py ===
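
# --- Added example (hedged sketch, not part of the original file) ---
# Builds a TF Example with tf_example_from_annotations_data_frame above from
# a tiny hand-made pandas frame. The label id and encoded-image bytes are
# illustrative placeholders.
def _example_oid_tf_example():
  """Example only: one box annotation for a single image."""
  import pandas as pd
  annotations = pd.DataFrame({
      'ImageID': ['img1'],
      'LabelName': ['/m/01g317'],
      'YMin': [0.1], 'XMin': [0.2], 'YMax': [0.8], 'XMax': [0.9],
  })
  label_map = {'/m/01g317': 1}
  return tf_example_from_annotations_data_frame(
      annotations, label_map, encoded_image=b'fake-jpeg-bytes')
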
r"""Convert the Oxford pet dataset to TFRecord for object_detection.
See: O. M. Parkhi, A. Vedaldi, A. Zisserman, C. V. Jawahar
Cats and Dogs
IEEE Conference on Computer Vision and Pattern Recognition, 2012
http://www.robots.ox.ac.uk/~vgg/data/pets/
Example usage:
python object_detection/dataset_tools/create_pet_tf_record.py \
--data_dir=/home/user/pet \
--output_dir=/home/user/pet/output
"""
import hashlib
import io
import logging
import os
import random
import re
import contextlib2
from lxml import etree
import numpy as np
import PIL.Image
import tensorflow.compat.v1 as tf
from object_detection.dataset_tools import tf_record_creation_util
from object_detection.utils import dataset_util
from object_detection.utils import label_map_util
flags = tf.app.flags
flags.DEFINE_string('data_dir', '', 'Root directory to raw pet dataset.')
flags.DEFINE_string('output_dir', '', 'Path to directory to output TFRecords.')
flags.DEFINE_string('label_map_path', 'data/pet_label_map.pbtxt',
'Path to label map proto')
flags.DEFINE_boolean('faces_only', True, 'If True, generates bounding boxes '
'for pet faces. Otherwise generates bounding boxes (as '
'well as segmentations for full pet bodies). Note that '
'in the latter case, the resulting files are much larger.')
flags.DEFINE_string('mask_type', 'png', 'How to represent instance '
'segmentation masks. Options are "png" or "numerical".')
flags.DEFINE_integer('num_shards', 10, 'Number of TFRecord shards')
FLAGS = flags.FLAGS
def get_class_name_from_filename(file_name):
"""Gets the class name from a file.
Args:
file_name: The file name to get the class name from.
      e.g. "american_pit_bull_terrier_105.jpg"
Returns:
A string of the class name.
"""
match = re.match(r'([A-Za-z_]+)(_[0-9]+\.jpg)', file_name, re.I)
return match.groups()[0]
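
# Example (added, hedged):
# get_class_name_from_filename('american_pit_bull_terrier_105.jpg')
# returns 'american_pit_bull_terrier'.
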
def dict_to_tf_example(data,
mask_path,
label_map_dict,
image_subdirectory,
ignore_difficult_instances=False,
faces_only=True,
mask_type='png'):
"""Convert XML derived dict to tf.Example proto.
Notice that this function normalizes the bounding box coordinates provided
by the raw data.
Args:
data: dict holding PASCAL XML fields for a single image (obtained by
running dataset_util.recursive_parse_xml_to_dict)
mask_path: String path to PNG encoded mask.
label_map_dict: A map from string label names to integers ids.
image_subdirectory: String specifying subdirectory within the
Pascal dataset directory holding the actual image data.
ignore_difficult_instances: Whether to skip difficult instances in the
dataset (default: False).
faces_only: If True, generates bounding boxes for pet faces. Otherwise
generates bounding boxes (as well as segmentations for full pet bodies).
mask_type: 'numerical' or 'png'. 'png' is recommended because it leads to
smaller file sizes.
Returns:
example: The converted tf.Example.
Raises:
ValueError: if the image pointed to by data['filename'] is not a valid JPEG
"""
img_path = os.path.join(image_subdirectory, data['filename'])
with tf.gfile.GFile(img_path, 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = PIL.Image.open(encoded_jpg_io)
if image.format != 'JPEG':
raise ValueError('Image format not JPEG')
key = hashlib.sha256(encoded_jpg).hexdigest()
with tf.gfile.GFile(mask_path, 'rb') as fid:
encoded_mask_png = fid.read()
encoded_png_io = io.BytesIO(encoded_mask_png)
mask = PIL.Image.open(encoded_png_io)
if mask.format != 'PNG':
raise ValueError('Mask format not PNG')
mask_np = np.asarray(mask)
nonbackground_indices_x = np.any(mask_np != 2, axis=0)
nonbackground_indices_y = np.any(mask_np != 2, axis=1)
nonzero_x_indices = np.where(nonbackground_indices_x)
nonzero_y_indices = np.where(nonbackground_indices_y)
width = int(data['size']['width'])
height = int(data['size']['height'])
xmins = []
ymins = []
xmaxs = []
ymaxs = []
classes = []
classes_text = []
truncated = []
poses = []
difficult_obj = []
masks = []
if 'object' in data:
for obj in data['object']:
difficult = bool(int(obj['difficult']))
if ignore_difficult_instances and difficult:
continue
difficult_obj.append(int(difficult))
if faces_only:
xmin = float(obj['bndbox']['xmin'])
xmax = float(obj['bndbox']['xmax'])
ymin = float(obj['bndbox']['ymin'])
ymax = float(obj['bndbox']['ymax'])
else:
xmin = float(np.min(nonzero_x_indices))
xmax = float(np.max(nonzero_x_indices))
ymin = float(np.min(nonzero_y_indices))
ymax = float(np.max(nonzero_y_indices))
xmins.append(xmin / width)
ymins.append(ymin / height)
xmaxs.append(xmax / width)
ymaxs.append(ymax / height)
class_name = get_class_name_from_filename(data['filename'])
classes_text.append(class_name.encode('utf8'))
classes.append(label_map_dict[class_name])
truncated.append(int(obj['truncated']))
poses.append(obj['pose'].encode('utf8'))
if not faces_only:
mask_remapped = (mask_np != 2).astype(np.uint8)
masks.append(mask_remapped)
feature_dict = {
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(
data['filename'].encode('utf8')),
'image/source_id': dataset_util.bytes_feature(
data['filename'].encode('utf8')),
'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')),
'image/encoded': dataset_util.bytes_feature(encoded_jpg),
'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
'image/object/class/label': dataset_util.int64_list_feature(classes),
'image/object/difficult': dataset_util.int64_list_feature(difficult_obj),
'image/object/truncated': dataset_util.int64_list_feature(truncated),
'image/object/view': dataset_util.bytes_list_feature(poses),
}
if not faces_only:
if mask_type == 'numerical':
mask_stack = np.stack(masks).astype(np.float32)
masks_flattened = np.reshape(mask_stack, [-1])
feature_dict['image/object/mask'] = (
dataset_util.float_list_feature(masks_flattened.tolist()))
elif mask_type == 'png':
encoded_mask_png_list = []
for mask in masks:
img = PIL.Image.fromarray(mask)
output = io.BytesIO()
img.save(output, format='PNG')
encoded_mask_png_list.append(output.getvalue())
feature_dict['image/object/mask'] = (
dataset_util.bytes_list_feature(encoded_mask_png_list))
example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
return example
def create_tf_record(output_filename,
num_shards,
label_map_dict,
annotations_dir,
image_dir,
examples,
faces_only=True,
mask_type='png'):
"""Creates a TFRecord file from examples.
Args:
output_filename: Path to where output file is saved.
num_shards: Number of shards for output file.
label_map_dict: The label map dictionary.
annotations_dir: Directory where annotation files are stored.
image_dir: Directory where image files are stored.
examples: Examples to parse and save to tf record.
faces_only: If True, generates bounding boxes for pet faces. Otherwise
generates bounding boxes (as well as segmentations for full pet bodies).
mask_type: 'numerical' or 'png'. 'png' is recommended because it leads to
smaller file sizes.
"""
with contextlib2.ExitStack() as tf_record_close_stack:
output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
tf_record_close_stack, output_filename, num_shards)
for idx, example in enumerate(examples):
if idx % 100 == 0:
logging.info('On image %d of %d', idx, len(examples))
xml_path = os.path.join(annotations_dir, 'xmls', example + '.xml')
mask_path = os.path.join(annotations_dir, 'trimaps', example + '.png')
if not os.path.exists(xml_path):
logging.warning('Could not find %s, ignoring example.', xml_path)
continue
with tf.gfile.GFile(xml_path, 'r') as fid:
xml_str = fid.read()
xml = etree.fromstring(xml_str)
data = dataset_util.recursive_parse_xml_to_dict(xml)['annotation']
try:
tf_example = dict_to_tf_example(
data,
mask_path,
label_map_dict,
image_dir,
faces_only=faces_only,
mask_type=mask_type)
if tf_example:
shard_idx = idx % num_shards
output_tfrecords[shard_idx].write(tf_example.SerializeToString())
except ValueError:
logging.warning('Invalid example: %s, ignoring.', xml_path)
# TODO(derekjchow): Add test for pet/PASCAL main files.
def main(_):
data_dir = FLAGS.data_dir
label_map_dict = label_map_util.get_label_map_dict(FLAGS.label_map_path)
logging.info('Reading from Pet dataset.')
image_dir = os.path.join(data_dir, 'images')
annotations_dir = os.path.join(data_dir, 'annotations')
examples_path = os.path.join(annotations_dir, 'trainval.txt')
examples_list = dataset_util.read_examples_list(examples_path)
# Test images are not included in the downloaded data set, so we shall perform
# our own split.
random.seed(42)
random.shuffle(examples_list)
num_examples = len(examples_list)
num_train = int(0.7 * num_examples)
train_examples = examples_list[:num_train]
val_examples = examples_list[num_train:]
logging.info('%d training and %d validation examples.',
len(train_examples), len(val_examples))
train_output_path = os.path.join(FLAGS.output_dir, 'pet_faces_train.record')
val_output_path = os.path.join(FLAGS.output_dir, 'pet_faces_val.record')
if not FLAGS.faces_only:
train_output_path = os.path.join(FLAGS.output_dir,
'pets_fullbody_with_masks_train.record')
val_output_path = os.path.join(FLAGS.output_dir,
'pets_fullbody_with_masks_val.record')
create_tf_record(
train_output_path,
FLAGS.num_shards,
label_map_dict,
annotations_dir,
image_dir,
train_examples,
faces_only=FLAGS.faces_only,
mask_type=FLAGS.mask_type)
create_tf_record(
val_output_path,
FLAGS.num_shards,
label_map_dict,
annotations_dir,
image_dir,
val_examples,
faces_only=FLAGS.faces_only,
mask_type=FLAGS.mask_type)
if __name__ == '__main__':
  tf.app.run()

# === End of object_detection/dataset_tools/create_pet_tf_record.py ===
r"""Code to download and parse the AVA Actions dataset for TensorFlow models.
The [AVA Actions data set](
https://research.google.com/ava/index.html)
is a dataset for human action recognition.
This script downloads the annotations and prepares training data from them
if local video files are available. The video files can be downloaded
from the following website:
https://github.com/cvdfoundation/ava-dataset
Prior to running this script, please run download_and_preprocess_ava.sh to
download input videos.
Running this code as a module generates the data set on disk. First, the
required files are downloaded (_download_data) which enables constructing the
label map. Then (in generate_examples), for each split in the data set, the
metadata and image frames are generated from the annotations for each sequence
example (_generate_examples). The data set is written to disk as a set of
numbered TFRecord files.
Generating the data on disk can take considerable time and disk space.
(Image compression quality is the primary determiner of disk usage.
If using the Tensorflow Object Detection API, set the input_type field
in the input_reader to TF_SEQUENCE_EXAMPLE. If using this script to generate
data for Context R-CNN scripts, the --examples_for_context flag should be
set to true, so that properly-formatted tf.example objects are written to disk.
This data is structured for per-clip action classification where images is
the sequence of images and labels are a one-hot encoded value. See
as_dataset() for more details.
Note that the number of videos changes in the data set over time, so it will
likely be necessary to change the expected number of examples.
The argument video_path_format_string expects a value such as:
'/path/to/videos/{0}'
"""
import collections
import contextlib
import csv
import glob
import hashlib
import os
import random
import sys
import zipfile
from absl import app
from absl import flags
from absl import logging
import cv2
from six.moves import range
from six.moves import urllib
import tensorflow.compat.v1 as tf
from object_detection.dataset_tools import seq_example_util
from object_detection.utils import dataset_util
from object_detection.utils import label_map_util
POSSIBLE_TIMESTAMPS = range(902, 1798)
ANNOTATION_URL = 'https://research.google.com/ava/download/ava_v2.2.zip'
SECONDS_TO_MILLI = 1000
FILEPATTERN = 'ava_actions_%s_1fps_rgb'
SPLITS = {
'train': {
'shards': 1000,
'examples': 862663,
'csv': '',
'excluded-csv': ''
},
'val': {
'shards': 100,
'examples': 243029,
'csv': '',
'excluded-csv': ''
},
# Test doesn't have ground truth, so TF Records can't be created
'test': {
'shards': 100,
'examples': 0,
'csv': '',
'excluded-csv': ''
}
}
NUM_CLASSES = 80
def feature_list_feature(value):
return tf.train.FeatureList(feature=value)
class Ava(object):
"""Generates and loads the AVA Actions 2.2 data set."""
def __init__(self, path_to_output_dir, path_to_data_download):
if not path_to_output_dir:
raise ValueError('You must supply the path to the data directory.')
self.path_to_data_download = path_to_data_download
self.path_to_output_dir = path_to_output_dir
def generate_and_write_records(self,
splits_to_process='train,val,test',
video_path_format_string=None,
seconds_per_sequence=10,
hop_between_sequences=10,
examples_for_context=False):
"""Downloads data and generates sharded TFRecords.
Downloads the data files, generates metadata, and processes the metadata
with MediaPipe to produce tf.SequenceExamples for training. The resulting
files can be read with as_dataset(). After running this function the
original data files can be deleted.
Args:
      splits_to_process: comma-separated string of which splits to process.
        The original data is still downloaded to generate the label_map.
video_path_format_string: The format string for the path to local files.
seconds_per_sequence: The length of each sequence, in seconds.
hop_between_sequences: The gap between the centers of
successive sequences.
examples_for_context: Whether to generate sequence examples with context
for context R-CNN.
"""
example_function = self._generate_sequence_examples
if examples_for_context:
example_function = self._generate_examples
logging.info('Downloading data.')
download_output = self._download_data()
for key in splits_to_process.split(','):
logging.info('Generating examples for split: %s', key)
all_metadata = list(example_function(
download_output[0][key][0], download_output[0][key][1],
download_output[1], seconds_per_sequence, hop_between_sequences,
video_path_format_string))
logging.info('An example of the metadata: ')
logging.info(all_metadata[0])
random.seed(47)
random.shuffle(all_metadata)
shards = SPLITS[key]['shards']
shard_names = [os.path.join(
self.path_to_output_dir, FILEPATTERN % key + '-%05d-of-%05d' % (
i, shards)) for i in range(shards)]
writers = [tf.io.TFRecordWriter(shard) for shard in shard_names]
with _close_on_exit(writers) as writers:
for i, seq_ex in enumerate(all_metadata):
writers[i % len(writers)].write(seq_ex.SerializeToString())
logging.info('Data extraction complete.')
def _generate_sequence_examples(self, annotation_file, excluded_file,
label_map, seconds_per_sequence,
hop_between_sequences,
video_path_format_string):
"""For each row in the annotation CSV, generates corresponding examples.
When iterating through frames for a single sequence example, skips over
excluded frames. When moving to the next sequence example, also skips over
excluded frames as if they don't exist. Generates equal-length sequence
examples, each with length seconds_per_sequence (1 fps) and gaps of
hop_between_sequences frames (and seconds) between them, possible greater
due to excluded frames.
Args:
annotation_file: path to the file of AVA CSV annotations.
excluded_file: path to a CSV file of excluded timestamps for each video.
label_map: an {int: string} label map.
seconds_per_sequence: The number of seconds per example in each example.
hop_between_sequences: The hop between sequences. If less than
seconds_per_sequence, will overlap.
video_path_format_string: File path format to glob video files.
Yields:
Each prepared tf.SequenceExample of metadata also containing video frames
"""
fieldnames = ['id', 'timestamp_seconds', 'xmin', 'ymin', 'xmax', 'ymax',
'action_label']
frame_excluded = {}
# create a sparse, nested map of videos and frame indices.
with open(excluded_file, 'r') as excluded:
reader = csv.reader(excluded)
for row in reader:
frame_excluded[(row[0], int(float(row[1])))] = True
with open(annotation_file, 'r') as annotations:
reader = csv.DictReader(annotations, fieldnames)
frame_annotations = collections.defaultdict(list)
ids = set()
      # aggregate by video and timestamp:
for row in reader:
ids.add(row['id'])
key = (row['id'], int(float(row['timestamp_seconds'])))
frame_annotations[key].append(row)
      # for each video, find aggregates near each sampled frame:
logging.info('Generating metadata...')
media_num = 1
for media_id in ids:
logging.info('%d/%d, ignore warnings.\n', media_num, len(ids))
media_num += 1
filepath = glob.glob(
video_path_format_string.format(media_id) + '*')[0]
cur_vid = cv2.VideoCapture(filepath)
width = cur_vid.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cur_vid.get(cv2.CAP_PROP_FRAME_HEIGHT)
middle_frame_time = POSSIBLE_TIMESTAMPS[0]
while middle_frame_time < POSSIBLE_TIMESTAMPS[-1]:
start_time = middle_frame_time - seconds_per_sequence // 2 - (
0 if seconds_per_sequence % 2 == 0 else 1)
end_time = middle_frame_time + (seconds_per_sequence // 2)
total_boxes = []
total_labels = []
total_label_strings = []
total_images = []
total_source_ids = []
total_confidences = []
total_is_annotated = []
windowed_timestamp = start_time
while windowed_timestamp < end_time:
if (media_id, windowed_timestamp) in frame_excluded:
end_time += 1
windowed_timestamp += 1
logging.info('Ignoring and skipping excluded frame.')
continue
cur_vid.set(cv2.CAP_PROP_POS_MSEC,
(windowed_timestamp) * SECONDS_TO_MILLI)
_, image = cur_vid.read()
_, buffer = cv2.imencode('.jpg', image)
            bufstring = buffer.tobytes()
total_images.append(bufstring)
source_id = str(windowed_timestamp) + '_' + media_id
total_source_ids.append(source_id)
total_is_annotated.append(1)
boxes = []
labels = []
label_strings = []
confidences = []
for row in frame_annotations[(media_id, windowed_timestamp)]:
if len(row) > 2 and int(row['action_label']) in label_map:
boxes.append([float(row['ymin']), float(row['xmin']),
float(row['ymax']), float(row['xmax'])])
labels.append(int(row['action_label']))
label_strings.append(label_map[int(row['action_label'])])
confidences.append(1)
else:
logging.warning('Unknown label: %s', row['action_label'])
total_boxes.append(boxes)
total_labels.append(labels)
total_label_strings.append(label_strings)
total_confidences.append(confidences)
windowed_timestamp += 1
if total_boxes:
yield seq_example_util.make_sequence_example(
'AVA', media_id, total_images, int(height), int(width), 'jpeg',
total_source_ids, None, total_is_annotated, total_boxes,
total_label_strings, use_strs_for_source_id=True)
# Move middle_time_frame, skipping excluded frames
frames_mv = 0
frames_excluded_count = 0
while (frames_mv < hop_between_sequences + frames_excluded_count
and middle_frame_time + frames_mv < POSSIBLE_TIMESTAMPS[-1]):
frames_mv += 1
if (media_id, windowed_timestamp + frames_mv) in frame_excluded:
frames_excluded_count += 1
middle_frame_time += frames_mv
cur_vid.release()
def _generate_examples(self, annotation_file, excluded_file, label_map,
seconds_per_sequence, hop_between_sequences,
video_path_format_string):
"""For each row in the annotation CSV, generates examples.
When iterating through frames for a single example, skips
over excluded frames. Generates equal-length sequence examples, each with
length seconds_per_sequence (1 fps) and gaps of hop_between_sequences
    frames (and seconds) between them, possibly greater due to excluded frames.
Args:
annotation_file: path to the file of AVA CSV annotations.
excluded_file: path to a CSV file of excluded timestamps for each video.
label_map: an {int: string} label map.
seconds_per_sequence: The number of seconds per example in each example.
hop_between_sequences: The hop between sequences. If less than
seconds_per_sequence, will overlap.
video_path_format_string: File path format to glob video files.
Yields:
Each prepared tf.Example of metadata also containing video frames
"""
del seconds_per_sequence
del hop_between_sequences
fieldnames = ['id', 'timestamp_seconds', 'xmin', 'ymin', 'xmax', 'ymax',
'action_label']
frame_excluded = {}
# create a sparse, nested map of videos and frame indices.
with open(excluded_file, 'r') as excluded:
reader = csv.reader(excluded)
for row in reader:
frame_excluded[(row[0], int(float(row[1])))] = True
with open(annotation_file, 'r') as annotations:
reader = csv.DictReader(annotations, fieldnames)
frame_annotations = collections.defaultdict(list)
ids = set()
      # aggregate by video and timestamp:
for row in reader:
ids.add(row['id'])
key = (row['id'], int(float(row['timestamp_seconds'])))
frame_annotations[key].append(row)
      # for each video, find aggregates near each sampled frame:
logging.info('Generating metadata...')
media_num = 1
for media_id in ids:
logging.info('%d/%d, ignore warnings.\n', media_num, len(ids))
media_num += 1
filepath = glob.glob(
video_path_format_string.format(media_id) + '*')[0]
cur_vid = cv2.VideoCapture(filepath)
width = cur_vid.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cur_vid.get(cv2.CAP_PROP_FRAME_HEIGHT)
middle_frame_time = POSSIBLE_TIMESTAMPS[0]
total_non_excluded = 0
while middle_frame_time < POSSIBLE_TIMESTAMPS[-1]:
if (media_id, middle_frame_time) not in frame_excluded:
total_non_excluded += 1
middle_frame_time += 1
middle_frame_time = POSSIBLE_TIMESTAMPS[0]
cur_frame_num = 0
while middle_frame_time < POSSIBLE_TIMESTAMPS[-1]:
cur_vid.set(cv2.CAP_PROP_POS_MSEC,
middle_frame_time * SECONDS_TO_MILLI)
_, image = cur_vid.read()
_, buffer = cv2.imencode('.jpg', image)
          bufstring = buffer.tobytes()
if (media_id, middle_frame_time) in frame_excluded:
middle_frame_time += 1
logging.info('Ignoring and skipping excluded frame.')
continue
cur_frame_num += 1
source_id = str(middle_frame_time) + '_' + media_id
xmins = []
xmaxs = []
ymins = []
ymaxs = []
areas = []
labels = []
label_strings = []
confidences = []
for row in frame_annotations[(media_id, middle_frame_time)]:
if len(row) > 2 and int(row['action_label']) in label_map:
xmins.append(float(row['xmin']))
xmaxs.append(float(row['xmax']))
ymins.append(float(row['ymin']))
ymaxs.append(float(row['ymax']))
areas.append(float((xmaxs[-1] - xmins[-1]) *
(ymaxs[-1] - ymins[-1])) / 2)
labels.append(int(row['action_label']))
label_strings.append(label_map[int(row['action_label'])])
confidences.append(1)
else:
logging.warning('Unknown label: %s', row['action_label'])
middle_frame_time += 1/3
          if abs(middle_frame_time - round(middle_frame_time)) < 0.0001:
            middle_frame_time = round(middle_frame_time)
key = hashlib.sha256(bufstring).hexdigest()
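          # Frames advance in 1/3 s steps starting at t=900 s, so
          # (middle_frame_time - 900) * 3 counts elapsed steps; it is encoded
          # below as the minutes and seconds of a synthetic capture date.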
date_captured_feature = (
'2020-06-17 00:%02d:%02d' % ((middle_frame_time - 900)*3 // 60,
(middle_frame_time - 900)*3 % 60))
context_feature_dict = {
'image/height':
dataset_util.int64_feature(int(height)),
'image/width':
dataset_util.int64_feature(int(width)),
'image/format':
dataset_util.bytes_feature('jpeg'.encode('utf8')),
'image/source_id':
dataset_util.bytes_feature(source_id.encode('utf8')),
'image/filename':
dataset_util.bytes_feature(source_id.encode('utf8')),
'image/encoded':
dataset_util.bytes_feature(bufstring),
'image/key/sha256':
dataset_util.bytes_feature(key.encode('utf8')),
'image/object/bbox/xmin':
dataset_util.float_list_feature(xmins),
'image/object/bbox/xmax':
dataset_util.float_list_feature(xmaxs),
'image/object/bbox/ymin':
dataset_util.float_list_feature(ymins),
'image/object/bbox/ymax':
dataset_util.float_list_feature(ymaxs),
'image/object/area':
dataset_util.float_list_feature(areas),
'image/object/class/label':
dataset_util.int64_list_feature(labels),
'image/object/class/text':
dataset_util.bytes_list_feature(label_strings),
'image/location':
dataset_util.bytes_feature(media_id.encode('utf8')),
'image/date_captured':
dataset_util.bytes_feature(
date_captured_feature.encode('utf8')),
'image/seq_num_frames':
dataset_util.int64_feature(total_non_excluded),
'image/seq_frame_num':
dataset_util.int64_feature(cur_frame_num),
'image/seq_id':
dataset_util.bytes_feature(media_id.encode('utf8')),
}
yield tf.train.Example(
features=tf.train.Features(feature=context_feature_dict))
cur_vid.release()
def _download_data(self):
"""Downloads and extracts data if not already available."""
    # six.moves.urllib resolves to the correct urlretrieve on PY2 and PY3.
    urlretrieve = urllib.request.urlretrieve
logging.info('Creating data directory.')
tf.io.gfile.makedirs(self.path_to_data_download)
logging.info('Downloading annotations.')
paths = {}
zip_path = os.path.join(self.path_to_data_download,
ANNOTATION_URL.split('/')[-1])
urlretrieve(ANNOTATION_URL, zip_path)
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
zip_ref.extractall(self.path_to_data_download)
for split in ['train', 'test', 'val']:
csv_path = os.path.join(self.path_to_data_download,
'ava_%s_v2.2.csv' % split)
excl_name = 'ava_%s_excluded_timestamps_v2.2.csv' % split
excluded_csv_path = os.path.join(self.path_to_data_download, excl_name)
SPLITS[split]['csv'] = csv_path
SPLITS[split]['excluded-csv'] = excluded_csv_path
paths[split] = (csv_path, excluded_csv_path)
label_map = self.get_label_map(os.path.join(
self.path_to_data_download,
'ava_action_list_v2.2_for_activitynet_2019.pbtxt'))
return paths, label_map
def get_label_map(self, path):
"""Parses a label map into {integer:string} format."""
label_map_dict = label_map_util.get_label_map_dict(path)
label_map_dict = {v: bytes(k, 'utf8') for k, v in label_map_dict.items()}
logging.info(label_map_dict)
return label_map_dict
@contextlib.contextmanager
def _close_on_exit(writers):
"""Call close on all writers on exit."""
try:
yield writers
finally:
for writer in writers:
writer.close()
def main(argv):
if len(argv) > 1:
raise app.UsageError('Too many command-line arguments.')
Ava(flags.FLAGS.path_to_output_dir,
flags.FLAGS.path_to_download_data).generate_and_write_records(
flags.FLAGS.splits_to_process,
flags.FLAGS.video_path_format_string,
flags.FLAGS.seconds_per_sequence,
flags.FLAGS.hop_between_sequences,
flags.FLAGS.examples_for_context)
if __name__ == '__main__':
flags.DEFINE_string('path_to_download_data',
'',
'Path to directory to download data to.')
flags.DEFINE_string('path_to_output_dir',
'',
'Path to directory to write data to.')
flags.DEFINE_string('splits_to_process',
'train,val',
'Process these splits. Useful for custom data splits.')
flags.DEFINE_string('video_path_format_string',
None,
                      'The format string for the path to local video files. '
                      'Uses the Python string.format() syntax with the video '
                      'id as the single positional argument, e.g. '
                      "'/path/to/videos/{0}'.")
flags.DEFINE_integer('seconds_per_sequence',
10,
'The number of seconds per example in each example.'
'Always 1 when examples_for_context is True.')
flags.DEFINE_integer('hop_between_sequences',
10,
'The hop between sequences. If less than '
'seconds_per_sequence, will overlap. Always 1 when '
'examples_for_context is True.')
flags.DEFINE_boolean('examples_for_context',
False,
'Whether to generate examples instead of sequence '
'examples. If true, will generate tf.Example objects '
'for use in Context R-CNN.')
  app.run(main)

# === End of object_detection/dataset_tools/create_ava_actions_tf_record.py ===
r"""Convert raw KITTI detection dataset to TFRecord for object_detection.
Converts KITTI detection dataset to TFRecords with a standard format allowing
to use this dataset to train object detectors. The raw dataset can be
downloaded from:
http://kitti.is.tue.mpg.de/kitti/data_object_image_2.zip.
http://kitti.is.tue.mpg.de/kitti/data_object_label_2.zip
Permission can be requested at the main website.
KITTI detection dataset contains 7481 training images. Using this code with
the default settings will set aside the first 500 images as a validation set.
This can be altered using the flags, see details below.
Example usage:
python object_detection/dataset_tools/create_kitti_tf_record.py \
--data_dir=/home/user/kitti \
--output_path=/home/user/kitti.record
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import hashlib
import io
import os
import numpy as np
import PIL.Image as pil
import tensorflow.compat.v1 as tf
from object_detection.utils import dataset_util
from object_detection.utils import label_map_util
from object_detection.utils.np_box_ops import iou
tf.app.flags.DEFINE_string('data_dir', '', 'Location of root directory for the '
                           'data. Folder structure is assumed to be: '
                           '<data_dir>/training/label_2 (annotations) and '
                           '<data_dir>/data_object_image_2/training/image_2 '
                           '(images).')
tf.app.flags.DEFINE_string('output_path', '', 'Path to which TFRecord files '
                           'will be written. The TFRecord with the training '
                           'set will be located at: <output_path>_train.tfrecord '
                           'and the TFRecord with the validation set will be '
                           'located at: <output_path>_val.tfrecord')
tf.app.flags.DEFINE_string('classes_to_use', 'car,pedestrian,dontcare',
                           'Comma separated list of class names that will be '
                           'used. Adding the dontcare class will remove all '
                           'bboxes in the dontcare regions.')
tf.app.flags.DEFINE_string('label_map_path', 'data/kitti_label_map.pbtxt',
'Path to label map proto.')
tf.app.flags.DEFINE_integer('validation_set_size', 500, 'Number of images to '
                            'be used as a validation set.')
FLAGS = tf.app.flags.FLAGS
def convert_kitti_to_tfrecords(data_dir, output_path, classes_to_use,
label_map_path, validation_set_size):
"""Convert the KITTI detection dataset to TFRecords.
Args:
data_dir: The full path to the unzipped folder containing the unzipped data
from data_object_image_2 and data_object_label_2.zip.
Folder structure is assumed to be: data_dir/training/label_2 (annotations)
and data_dir/data_object_image_2/training/image_2 (images).
output_path: The path to which TFRecord files will be written. The TFRecord
with the training set will be located at: <output_path>_train.tfrecord
And the TFRecord with the validation set will be located at:
<output_path>_val.tfrecord
classes_to_use: List of strings naming the classes for which data should be
      converted. Use the same names as presented in the KITTI README file.
Adding dontcare class will remove all other bounding boxes that overlap
with areas marked as dontcare regions.
label_map_path: Path to label map proto
validation_set_size: How many images should be left as the validation set.
      (First `validation_set_size` examples are selected to be in the
validation set).
"""
label_map_dict = label_map_util.get_label_map_dict(label_map_path)
train_count = 0
val_count = 0
annotation_dir = os.path.join(data_dir,
'training',
'label_2')
image_dir = os.path.join(data_dir,
'data_object_image_2',
'training',
'image_2')
train_writer = tf.python_io.TFRecordWriter('%s_train.tfrecord'%
output_path)
val_writer = tf.python_io.TFRecordWriter('%s_val.tfrecord'%
output_path)
images = sorted(tf.gfile.ListDirectory(image_dir))
for img_name in images:
img_num = int(img_name.split('.')[0])
is_validation_img = img_num < validation_set_size
img_anno = read_annotation_file(os.path.join(annotation_dir,
str(img_num).zfill(6)+'.txt'))
image_path = os.path.join(image_dir, img_name)
# Filter all bounding boxes of this frame that are of a legal class, and
# don't overlap with a dontcare region.
# TODO(talremez) filter out targets that are truncated or heavily occluded.
annotation_for_image = filter_annotations(img_anno, classes_to_use)
example = prepare_example(image_path, annotation_for_image, label_map_dict)
if is_validation_img:
val_writer.write(example.SerializeToString())
val_count += 1
else:
train_writer.write(example.SerializeToString())
train_count += 1
train_writer.close()
val_writer.close()
def prepare_example(image_path, annotations, label_map_dict):
"""Converts a dictionary with annotations for an image to tf.Example proto.
Args:
image_path: The complete path to image.
annotations: A dictionary representing the annotation of a single object
that appears in the image.
label_map_dict: A map from string label names to integer ids.
Returns:
example: The converted tf.Example.
"""
with tf.gfile.GFile(image_path, 'rb') as fid:
encoded_png = fid.read()
encoded_png_io = io.BytesIO(encoded_png)
image = pil.open(encoded_png_io)
image = np.asarray(image)
key = hashlib.sha256(encoded_png).hexdigest()
width = int(image.shape[1])
height = int(image.shape[0])
xmin_norm = annotations['2d_bbox_left'] / float(width)
ymin_norm = annotations['2d_bbox_top'] / float(height)
xmax_norm = annotations['2d_bbox_right'] / float(width)
ymax_norm = annotations['2d_bbox_bottom'] / float(height)
difficult_obj = [0]*len(xmin_norm)
example = tf.train.Example(features=tf.train.Features(feature={
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(image_path.encode('utf8')),
'image/source_id': dataset_util.bytes_feature(image_path.encode('utf8')),
'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')),
'image/encoded': dataset_util.bytes_feature(encoded_png),
'image/format': dataset_util.bytes_feature('png'.encode('utf8')),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmin_norm),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmax_norm),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymin_norm),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymax_norm),
'image/object/class/text': dataset_util.bytes_list_feature(
[x.encode('utf8') for x in annotations['type']]),
'image/object/class/label': dataset_util.int64_list_feature(
[label_map_dict[x] for x in annotations['type']]),
'image/object/difficult': dataset_util.int64_list_feature(difficult_obj),
'image/object/truncated': dataset_util.float_list_feature(
annotations['truncated']),
'image/object/alpha': dataset_util.float_list_feature(
annotations['alpha']),
'image/object/3d_bbox/height': dataset_util.float_list_feature(
annotations['3d_bbox_height']),
'image/object/3d_bbox/width': dataset_util.float_list_feature(
annotations['3d_bbox_width']),
'image/object/3d_bbox/length': dataset_util.float_list_feature(
annotations['3d_bbox_length']),
'image/object/3d_bbox/x': dataset_util.float_list_feature(
annotations['3d_bbox_x']),
'image/object/3d_bbox/y': dataset_util.float_list_feature(
annotations['3d_bbox_y']),
'image/object/3d_bbox/z': dataset_util.float_list_feature(
annotations['3d_bbox_z']),
'image/object/3d_bbox/rot_y': dataset_util.float_list_feature(
annotations['3d_bbox_rot_y']),
}))
return example
def filter_annotations(img_all_annotations, used_classes):
"""Filters out annotations from the unused classes and dontcare regions.
Filters out the annotations that belong to classes we do now wish to use and
(optionally) also removes all boxes that overlap with dontcare regions.
Args:
img_all_annotations: A list of annotation dictionaries. See documentation of
read_annotation_file for more details about the format of the annotations.
used_classes: A list of strings listing the classes we want to keep, if the
list contains "dontcare", all bounding boxes with overlapping with dont
care regions will also be filtered out.
Returns:
img_filtered_annotations: A list of annotation dictionaries that have passed
the filtering.
"""
img_filtered_annotations = {}
# Filter the type of the objects.
relevant_annotation_indices = [
i for i, x in enumerate(img_all_annotations['type']) if x in used_classes
]
for key in img_all_annotations.keys():
img_filtered_annotations[key] = (
img_all_annotations[key][relevant_annotation_indices])
if 'dontcare' in used_classes:
dont_care_indices = [i for i,
x in enumerate(img_filtered_annotations['type'])
if x == 'dontcare']
# bounding box format [y_min, x_min, y_max, x_max]
all_boxes = np.stack([img_filtered_annotations['2d_bbox_top'],
img_filtered_annotations['2d_bbox_left'],
img_filtered_annotations['2d_bbox_bottom'],
img_filtered_annotations['2d_bbox_right']],
axis=1)
ious = iou(boxes1=all_boxes,
boxes2=all_boxes[dont_care_indices])
# Remove all bounding boxes that overlap with a dontcare region.
if ious.size > 0:
boxes_to_remove = np.amax(ious, axis=1) > 0.0
for key in img_all_annotations.keys():
img_filtered_annotations[key] = (
img_filtered_annotations[key][np.logical_not(boxes_to_remove)])
return img_filtered_annotations
def read_annotation_file(filename):
"""Reads a KITTI annotation file.
Converts a KITTI annotation file into a dictionary containing all the
relevant information.
Args:
filename: the path to the annotataion text file.
Returns:
anno: A dictionary with the converted annotation information. See annotation
README file for details on the different fields.
"""
with open(filename) as f:
content = f.readlines()
content = [x.strip().split(' ') for x in content]
anno = {}
anno['type'] = np.array([x[0].lower() for x in content])
anno['truncated'] = np.array([float(x[1]) for x in content])
anno['occluded'] = np.array([int(x[2]) for x in content])
anno['alpha'] = np.array([float(x[3]) for x in content])
anno['2d_bbox_left'] = np.array([float(x[4]) for x in content])
anno['2d_bbox_top'] = np.array([float(x[5]) for x in content])
anno['2d_bbox_right'] = np.array([float(x[6]) for x in content])
anno['2d_bbox_bottom'] = np.array([float(x[7]) for x in content])
anno['3d_bbox_height'] = np.array([float(x[8]) for x in content])
anno['3d_bbox_width'] = np.array([float(x[9]) for x in content])
anno['3d_bbox_length'] = np.array([float(x[10]) for x in content])
anno['3d_bbox_x'] = np.array([float(x[11]) for x in content])
anno['3d_bbox_y'] = np.array([float(x[12]) for x in content])
anno['3d_bbox_z'] = np.array([float(x[13]) for x in content])
anno['3d_bbox_rot_y'] = np.array([float(x[14]) for x in content])
return anno
def main(_):
convert_kitti_to_tfrecords(
data_dir=FLAGS.data_dir,
output_path=FLAGS.output_path,
classes_to_use=FLAGS.classes_to_use.split(','),
label_map_path=FLAGS.label_map_path,
validation_set_size=FLAGS.validation_set_size)
if __name__ == '__main__':
tf.app.run() | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/dataset_tools/create_kitti_tf_record.py | create_kitti_tf_record.py |
r"""Convert raw COCO dataset to TFRecord for object_detection.
This tool supports data generation for object detection (boxes, masks),
keypoint detection, and DensePose.
Please note that this tool creates sharded output files.
Example usage:
python create_coco_tf_record.py --logtostderr \
--train_image_dir="${TRAIN_IMAGE_DIR}" \
--val_image_dir="${VAL_IMAGE_DIR}" \
--test_image_dir="${TEST_IMAGE_DIR}" \
--train_annotations_file="${TRAIN_ANNOTATIONS_FILE}" \
--val_annotations_file="${VAL_ANNOTATIONS_FILE}" \
--testdev_annotations_file="${TESTDEV_ANNOTATIONS_FILE}" \
--output_dir="${OUTPUT_DIR}"
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import hashlib
import io
import json
import logging
import os
import contextlib2
import numpy as np
import PIL.Image
from pycocotools import mask
import tensorflow.compat.v1 as tf
from object_detection.dataset_tools import tf_record_creation_util
from object_detection.utils import dataset_util
from object_detection.utils import label_map_util
flags = tf.app.flags
tf.flags.DEFINE_boolean(
    'include_masks', False, 'Whether to include instance segmentation masks '
'(PNG encoded) in the result. default: False.')
tf.flags.DEFINE_string('train_image_dir', '', 'Training image directory.')
tf.flags.DEFINE_string('val_image_dir', '', 'Validation image directory.')
tf.flags.DEFINE_string('test_image_dir', '', 'Test image directory.')
tf.flags.DEFINE_string('train_annotations_file', '',
'Training annotations JSON file.')
tf.flags.DEFINE_string('val_annotations_file', '',
'Validation annotations JSON file.')
tf.flags.DEFINE_string('testdev_annotations_file', '',
'Test-dev annotations JSON file.')
tf.flags.DEFINE_string('train_keypoint_annotations_file', '',
'Training annotations JSON file.')
tf.flags.DEFINE_string('val_keypoint_annotations_file', '',
'Validation annotations JSON file.')
# DensePose is only available for coco 2014.
tf.flags.DEFINE_string('train_densepose_annotations_file', '',
'Training annotations JSON file for DensePose.')
tf.flags.DEFINE_string('val_densepose_annotations_file', '',
'Validation annotations JSON file for DensePose.')
tf.flags.DEFINE_string('output_dir', '/tmp/', 'Output data directory.')
# Whether to only produce images/annotations on person class (for keypoint /
# densepose task).
tf.flags.DEFINE_boolean('remove_non_person_annotations', False, 'Whether to '
'remove all annotations for non-person objects.')
tf.flags.DEFINE_boolean('remove_non_person_images', False, 'Whether to '
'remove all examples that do not contain a person.')
FLAGS = flags.FLAGS
logger = tf.get_logger()
logger.setLevel(logging.INFO)
_COCO_KEYPOINT_NAMES = [
b'nose', b'left_eye', b'right_eye', b'left_ear', b'right_ear',
b'left_shoulder', b'right_shoulder', b'left_elbow', b'right_elbow',
b'left_wrist', b'right_wrist', b'left_hip', b'right_hip',
b'left_knee', b'right_knee', b'left_ankle', b'right_ankle'
]
_COCO_PART_NAMES = [
b'torso_back', b'torso_front', b'right_hand', b'left_hand', b'left_foot',
b'right_foot', b'right_upper_leg_back', b'left_upper_leg_back',
b'right_upper_leg_front', b'left_upper_leg_front', b'right_lower_leg_back',
b'left_lower_leg_back', b'right_lower_leg_front', b'left_lower_leg_front',
b'left_upper_arm_back', b'right_upper_arm_back', b'left_upper_arm_front',
b'right_upper_arm_front', b'left_lower_arm_back', b'right_lower_arm_back',
b'left_lower_arm_front', b'right_lower_arm_front', b'right_face',
b'left_face',
]
_DP_PART_ID_OFFSET = 1
def clip_to_unit(x):
  """Clips a value to the [0.0, 1.0] interval."""
  return min(max(x, 0.0), 1.0)
def create_tf_example(image,
annotations_list,
image_dir,
category_index,
include_masks=False,
keypoint_annotations_dict=None,
densepose_annotations_dict=None,
remove_non_person_annotations=False,
remove_non_person_images=False):
"""Converts image and annotations to a tf.Example proto.
Args:
image: dict with keys: [u'license', u'file_name', u'coco_url', u'height',
u'width', u'date_captured', u'flickr_url', u'id']
annotations_list:
list of dicts with keys: [u'segmentation', u'area', u'iscrowd',
u'image_id', u'bbox', u'category_id', u'id'] Notice that bounding box
coordinates in the official COCO dataset are given as [x, y, width,
height] tuples using absolute coordinates where x, y represent the
top-left (0-indexed) corner. This function converts to the format
      expected by the TensorFlow Object Detection API (which is
[ymin, xmin, ymax, xmax] with coordinates normalized relative to image
size).
image_dir: directory containing the image files.
category_index: a dict containing COCO category information keyed by the
'id' field of each category. See the label_map_util.create_category_index
function.
    include_masks: Whether to include instance segmentation masks
(PNG encoded) in the result. default: False.
keypoint_annotations_dict: A dictionary that maps from annotation_id to a
      dictionary with keys: [u'keypoints', u'num_keypoints'] representing the
keypoint information for this person object annotation. If None, then
no keypoint annotations will be populated.
densepose_annotations_dict: A dictionary that maps from annotation_id to a
dictionary with keys: [u'dp_I', u'dp_x', u'dp_y', 'dp_U', 'dp_V']
representing part surface coordinates. For more information see
http://densepose.org/.
remove_non_person_annotations: Whether to remove any annotations that are
not the "person" class.
remove_non_person_images: Whether to remove any images that do not contain
at least one "person" annotation.
Returns:
key: SHA256 hash of the image.
example: The converted tf.Example
num_annotations_skipped: Number of (invalid) annotations that were ignored.
num_keypoint_annotation_skipped: Number of keypoint annotations that were
skipped.
num_densepose_annotation_skipped: Number of DensePose annotations that were
skipped.
Raises:
    ValueError: if the image pointed to by image['file_name'] is not a valid JPEG.
"""
image_height = image['height']
image_width = image['width']
filename = image['file_name']
image_id = image['id']
full_path = os.path.join(image_dir, filename)
with tf.gfile.GFile(full_path, 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = PIL.Image.open(encoded_jpg_io)
key = hashlib.sha256(encoded_jpg).hexdigest()
xmin = []
xmax = []
ymin = []
ymax = []
is_crowd = []
category_names = []
category_ids = []
area = []
encoded_mask_png = []
keypoints_x = []
keypoints_y = []
keypoints_visibility = []
keypoints_name = []
num_keypoints = []
include_keypoint = keypoint_annotations_dict is not None
num_annotations_skipped = 0
num_keypoint_annotation_used = 0
num_keypoint_annotation_skipped = 0
dp_part_index = []
dp_x = []
dp_y = []
dp_u = []
dp_v = []
dp_num_points = []
densepose_keys = ['dp_I', 'dp_U', 'dp_V', 'dp_x', 'dp_y', 'bbox']
include_densepose = densepose_annotations_dict is not None
num_densepose_annotation_used = 0
num_densepose_annotation_skipped = 0
for object_annotations in annotations_list:
(x, y, width, height) = tuple(object_annotations['bbox'])
if width <= 0 or height <= 0:
num_annotations_skipped += 1
continue
if x + width > image_width or y + height > image_height:
num_annotations_skipped += 1
continue
category_id = int(object_annotations['category_id'])
category_name = category_index[category_id]['name'].encode('utf8')
if remove_non_person_annotations and category_name != b'person':
num_annotations_skipped += 1
continue
xmin.append(float(x) / image_width)
xmax.append(float(x + width) / image_width)
ymin.append(float(y) / image_height)
ymax.append(float(y + height) / image_height)
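    # For example (hypothetical values): a COCO box [x, y, width, height] =
    # [10, 20, 30, 40] in a 100x200 (width x height) image yields
    # xmin=0.1, xmax=0.4, ymin=0.1 and ymax=0.3.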
is_crowd.append(object_annotations['iscrowd'])
category_ids.append(category_id)
category_names.append(category_name)
area.append(object_annotations['area'])
if include_masks:
run_len_encoding = mask.frPyObjects(object_annotations['segmentation'],
image_height, image_width)
binary_mask = mask.decode(run_len_encoding)
if not object_annotations['iscrowd']:
binary_mask = np.amax(binary_mask, axis=2)
pil_image = PIL.Image.fromarray(binary_mask)
output_io = io.BytesIO()
pil_image.save(output_io, format='PNG')
encoded_mask_png.append(output_io.getvalue())
if include_keypoint:
annotation_id = object_annotations['id']
if annotation_id in keypoint_annotations_dict:
num_keypoint_annotation_used += 1
keypoint_annotations = keypoint_annotations_dict[annotation_id]
keypoints = keypoint_annotations['keypoints']
num_kpts = keypoint_annotations['num_keypoints']
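        # COCO keypoints come as a flat list [x1, y1, v1, x2, y2, v2, ...],
        # so striding by 3 below separates the x, y and visibility values.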
keypoints_x_abs = keypoints[::3]
keypoints_x.extend(
[float(x_abs) / image_width for x_abs in keypoints_x_abs])
keypoints_y_abs = keypoints[1::3]
keypoints_y.extend(
[float(y_abs) / image_height for y_abs in keypoints_y_abs])
keypoints_visibility.extend(keypoints[2::3])
keypoints_name.extend(_COCO_KEYPOINT_NAMES)
num_keypoints.append(num_kpts)
else:
keypoints_x.extend([0.0] * len(_COCO_KEYPOINT_NAMES))
keypoints_y.extend([0.0] * len(_COCO_KEYPOINT_NAMES))
keypoints_visibility.extend([0] * len(_COCO_KEYPOINT_NAMES))
keypoints_name.extend(_COCO_KEYPOINT_NAMES)
num_keypoints.append(0)
if include_densepose:
annotation_id = object_annotations['id']
if (annotation_id in densepose_annotations_dict and
all(key in densepose_annotations_dict[annotation_id]
for key in densepose_keys)):
dp_annotations = densepose_annotations_dict[annotation_id]
num_densepose_annotation_used += 1
dp_num_points.append(len(dp_annotations['dp_I']))
dp_part_index.extend([int(i - _DP_PART_ID_OFFSET)
for i in dp_annotations['dp_I']])
# DensePose surface coordinates are defined on a [256, 256] grid
# relative to each instance box (i.e. absolute coordinates in range
# [0., 256.]). The following converts the coordinates
# so that they are expressed in normalized image coordinates.
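        # For example (hypothetical values): dp_x = 128. for a box with
        # x = 50 and width = 100 in a 400-pixel-wide image gives
        # x_box_rel = 0.5 and a normalized x of (50 + 0.5 * 100) / 400 = 0.25.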
dp_x_box_rel = [
clip_to_unit(val / 256.) for val in dp_annotations['dp_x']]
dp_x_norm = [(float(x) + x_box_rel * width) / image_width
for x_box_rel in dp_x_box_rel]
dp_y_box_rel = [
clip_to_unit(val / 256.) for val in dp_annotations['dp_y']]
dp_y_norm = [(float(y) + y_box_rel * height) / image_height
for y_box_rel in dp_y_box_rel]
dp_x.extend(dp_x_norm)
dp_y.extend(dp_y_norm)
dp_u.extend(dp_annotations['dp_U'])
dp_v.extend(dp_annotations['dp_V'])
else:
dp_num_points.append(0)
if (remove_non_person_images and
not any(name == b'person' for name in category_names)):
return (key, None, num_annotations_skipped,
num_keypoint_annotation_skipped, num_densepose_annotation_skipped)
feature_dict = {
'image/height':
dataset_util.int64_feature(image_height),
'image/width':
dataset_util.int64_feature(image_width),
'image/filename':
dataset_util.bytes_feature(filename.encode('utf8')),
'image/source_id':
dataset_util.bytes_feature(str(image_id).encode('utf8')),
'image/key/sha256':
dataset_util.bytes_feature(key.encode('utf8')),
'image/encoded':
dataset_util.bytes_feature(encoded_jpg),
'image/format':
dataset_util.bytes_feature('jpeg'.encode('utf8')),
'image/object/bbox/xmin':
dataset_util.float_list_feature(xmin),
'image/object/bbox/xmax':
dataset_util.float_list_feature(xmax),
'image/object/bbox/ymin':
dataset_util.float_list_feature(ymin),
'image/object/bbox/ymax':
dataset_util.float_list_feature(ymax),
'image/object/class/text':
dataset_util.bytes_list_feature(category_names),
'image/object/is_crowd':
dataset_util.int64_list_feature(is_crowd),
'image/object/area':
dataset_util.float_list_feature(area),
}
if include_masks:
feature_dict['image/object/mask'] = (
dataset_util.bytes_list_feature(encoded_mask_png))
if include_keypoint:
feature_dict['image/object/keypoint/x'] = (
dataset_util.float_list_feature(keypoints_x))
feature_dict['image/object/keypoint/y'] = (
dataset_util.float_list_feature(keypoints_y))
feature_dict['image/object/keypoint/num'] = (
dataset_util.int64_list_feature(num_keypoints))
feature_dict['image/object/keypoint/visibility'] = (
dataset_util.int64_list_feature(keypoints_visibility))
feature_dict['image/object/keypoint/text'] = (
dataset_util.bytes_list_feature(keypoints_name))
num_keypoint_annotation_skipped = (
len(keypoint_annotations_dict) - num_keypoint_annotation_used)
if include_densepose:
feature_dict['image/object/densepose/num'] = (
dataset_util.int64_list_feature(dp_num_points))
feature_dict['image/object/densepose/part_index'] = (
dataset_util.int64_list_feature(dp_part_index))
feature_dict['image/object/densepose/x'] = (
dataset_util.float_list_feature(dp_x))
feature_dict['image/object/densepose/y'] = (
dataset_util.float_list_feature(dp_y))
feature_dict['image/object/densepose/u'] = (
dataset_util.float_list_feature(dp_u))
feature_dict['image/object/densepose/v'] = (
dataset_util.float_list_feature(dp_v))
num_densepose_annotation_skipped = (
len(densepose_annotations_dict) - num_densepose_annotation_used)
example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
return (key, example, num_annotations_skipped,
num_keypoint_annotation_skipped, num_densepose_annotation_skipped)
def _create_tf_record_from_coco_annotations(annotations_file, image_dir,
output_path, include_masks,
num_shards,
keypoint_annotations_file='',
densepose_annotations_file='',
remove_non_person_annotations=False,
remove_non_person_images=False):
"""Loads COCO annotation json files and converts to tf.Record format.
Args:
annotations_file: JSON file containing bounding box annotations.
image_dir: Directory containing the image files.
output_path: Path to output tf.Record file.
    include_masks: Whether to include instance segmentation masks
(PNG encoded) in the result. default: False.
num_shards: number of output file shards.
keypoint_annotations_file: JSON file containing the person keypoint
annotations. If empty, then no person keypoint annotations will be
generated.
densepose_annotations_file: JSON file containing the DensePose annotations.
If empty, then no DensePose annotations will be generated.
remove_non_person_annotations: Whether to remove any annotations that are
not the "person" class.
remove_non_person_images: Whether to remove any images that do not contain
at least one "person" annotation.
"""
with contextlib2.ExitStack() as tf_record_close_stack, \
tf.gfile.GFile(annotations_file, 'r') as fid:
output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
tf_record_close_stack, output_path, num_shards)
groundtruth_data = json.load(fid)
images = groundtruth_data['images']
category_index = label_map_util.create_category_index(
groundtruth_data['categories'])
annotations_index = {}
if 'annotations' in groundtruth_data:
logging.info('Found groundtruth annotations. Building annotations index.')
for annotation in groundtruth_data['annotations']:
image_id = annotation['image_id']
if image_id not in annotations_index:
annotations_index[image_id] = []
annotations_index[image_id].append(annotation)
missing_annotation_count = 0
for image in images:
image_id = image['id']
if image_id not in annotations_index:
missing_annotation_count += 1
annotations_index[image_id] = []
logging.info('%d images are missing annotations.',
missing_annotation_count)
keypoint_annotations_index = {}
if keypoint_annotations_file:
with tf.gfile.GFile(keypoint_annotations_file, 'r') as kid:
keypoint_groundtruth_data = json.load(kid)
if 'annotations' in keypoint_groundtruth_data:
for annotation in keypoint_groundtruth_data['annotations']:
image_id = annotation['image_id']
if image_id not in keypoint_annotations_index:
keypoint_annotations_index[image_id] = {}
keypoint_annotations_index[image_id][annotation['id']] = annotation
densepose_annotations_index = {}
if densepose_annotations_file:
with tf.gfile.GFile(densepose_annotations_file, 'r') as fid:
densepose_groundtruth_data = json.load(fid)
if 'annotations' in densepose_groundtruth_data:
for annotation in densepose_groundtruth_data['annotations']:
image_id = annotation['image_id']
if image_id not in densepose_annotations_index:
densepose_annotations_index[image_id] = {}
densepose_annotations_index[image_id][annotation['id']] = annotation
total_num_annotations_skipped = 0
total_num_keypoint_annotations_skipped = 0
total_num_densepose_annotations_skipped = 0
for idx, image in enumerate(images):
if idx % 100 == 0:
logging.info('On image %d of %d', idx, len(images))
annotations_list = annotations_index[image['id']]
keypoint_annotations_dict = None
if keypoint_annotations_file:
keypoint_annotations_dict = {}
if image['id'] in keypoint_annotations_index:
keypoint_annotations_dict = keypoint_annotations_index[image['id']]
densepose_annotations_dict = None
if densepose_annotations_file:
densepose_annotations_dict = {}
if image['id'] in densepose_annotations_index:
densepose_annotations_dict = densepose_annotations_index[image['id']]
(_, tf_example, num_annotations_skipped, num_keypoint_annotations_skipped,
num_densepose_annotations_skipped) = create_tf_example(
image, annotations_list, image_dir, category_index, include_masks,
keypoint_annotations_dict, densepose_annotations_dict,
remove_non_person_annotations, remove_non_person_images)
total_num_annotations_skipped += num_annotations_skipped
total_num_keypoint_annotations_skipped += num_keypoint_annotations_skipped
total_num_densepose_annotations_skipped += (
num_densepose_annotations_skipped)
shard_idx = idx % num_shards
if tf_example:
output_tfrecords[shard_idx].write(tf_example.SerializeToString())
logging.info('Finished writing, skipped %d annotations.',
total_num_annotations_skipped)
if keypoint_annotations_file:
logging.info('Finished writing, skipped %d keypoint annotations.',
total_num_keypoint_annotations_skipped)
if densepose_annotations_file:
logging.info('Finished writing, skipped %d DensePose annotations.',
total_num_densepose_annotations_skipped)
def main(_):
assert FLAGS.train_image_dir, '`train_image_dir` missing.'
assert FLAGS.val_image_dir, '`val_image_dir` missing.'
assert FLAGS.test_image_dir, '`test_image_dir` missing.'
assert FLAGS.train_annotations_file, '`train_annotations_file` missing.'
assert FLAGS.val_annotations_file, '`val_annotations_file` missing.'
assert FLAGS.testdev_annotations_file, '`testdev_annotations_file` missing.'
if not tf.gfile.IsDirectory(FLAGS.output_dir):
tf.gfile.MakeDirs(FLAGS.output_dir)
train_output_path = os.path.join(FLAGS.output_dir, 'coco_train.record')
val_output_path = os.path.join(FLAGS.output_dir, 'coco_val.record')
testdev_output_path = os.path.join(FLAGS.output_dir, 'coco_testdev.record')
_create_tf_record_from_coco_annotations(
FLAGS.train_annotations_file,
FLAGS.train_image_dir,
train_output_path,
FLAGS.include_masks,
num_shards=100,
keypoint_annotations_file=FLAGS.train_keypoint_annotations_file,
densepose_annotations_file=FLAGS.train_densepose_annotations_file,
remove_non_person_annotations=FLAGS.remove_non_person_annotations,
remove_non_person_images=FLAGS.remove_non_person_images)
_create_tf_record_from_coco_annotations(
FLAGS.val_annotations_file,
FLAGS.val_image_dir,
val_output_path,
FLAGS.include_masks,
num_shards=50,
keypoint_annotations_file=FLAGS.val_keypoint_annotations_file,
densepose_annotations_file=FLAGS.val_densepose_annotations_file,
remove_non_person_annotations=FLAGS.remove_non_person_annotations,
remove_non_person_images=FLAGS.remove_non_person_images)
_create_tf_record_from_coco_annotations(
FLAGS.testdev_annotations_file,
FLAGS.test_image_dir,
testdev_output_path,
FLAGS.include_masks,
num_shards=50)
if __name__ == '__main__':
  tf.app.run()

# End of object_detection/dataset_tools/create_coco_tf_record.py
r"""Creates TFRecords of Open Images dataset for object detection.
Example usage:
python object_detection/dataset_tools/create_oid_tf_record.py \
--input_box_annotations_csv=/path/to/input/annotations-human-bbox.csv \
--input_image_label_annotations_csv=/path/to/input/annotations-label.csv \
--input_images_directory=/path/to/input/image_pixels_directory \
--input_label_map=/path/to/input/labels_bbox_545.labelmap \
--output_tf_record_path_prefix=/path/to/output/prefix.tfrecord
CSVs with bounding box annotations and image metadata (including the image URLs)
can be downloaded from the Open Images GitHub repository:
https://github.com/openimages/dataset
This script will include every image found in the input_images_directory in the
output TFRecord, even if the image has no corresponding bounding box annotations
in the input_box_annotations_csv. If input_image_label_annotations_csv is
specified,
it will add image-level labels as well. Note that the information of whether a
label is positively or negatively verified is NOT added to the TFRecord.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import contextlib2
import pandas as pd
import tensorflow.compat.v1 as tf
from object_detection.dataset_tools import oid_tfrecord_creation
from object_detection.dataset_tools import tf_record_creation_util
from object_detection.utils import label_map_util
tf.flags.DEFINE_string('input_box_annotations_csv', None,
'Path to CSV containing image bounding box annotations')
tf.flags.DEFINE_string('input_images_directory', None,
'Directory containing the image pixels '
'downloaded from the OpenImages GitHub repository.')
tf.flags.DEFINE_string('input_image_label_annotations_csv', None,
'Path to CSV containing image-level labels annotations')
tf.flags.DEFINE_string('input_label_map', None, 'Path to the label map proto')
tf.flags.DEFINE_string(
'output_tf_record_path_prefix', None,
'Path to the output TFRecord. The shard index and the number of shards '
'will be appended for each output shard.')
tf.flags.DEFINE_integer('num_shards', 100, 'Number of TFRecord shards')
FLAGS = tf.flags.FLAGS
def main(_):
tf.logging.set_verbosity(tf.logging.INFO)
required_flags = [
'input_box_annotations_csv', 'input_images_directory', 'input_label_map',
'output_tf_record_path_prefix'
]
for flag_name in required_flags:
if not getattr(FLAGS, flag_name):
raise ValueError('Flag --{} is required'.format(flag_name))
label_map = label_map_util.get_label_map_dict(FLAGS.input_label_map)
all_box_annotations = pd.read_csv(FLAGS.input_box_annotations_csv)
if FLAGS.input_image_label_annotations_csv:
all_label_annotations = pd.read_csv(FLAGS.input_image_label_annotations_csv)
all_label_annotations.rename(
columns={'Confidence': 'ConfidenceImageLabel'}, inplace=True)
else:
all_label_annotations = None
all_images = tf.gfile.Glob(
os.path.join(FLAGS.input_images_directory, '*.jpg'))
all_image_ids = [os.path.splitext(os.path.basename(v))[0] for v in all_images]
all_image_ids = pd.DataFrame({'ImageID': all_image_ids})
all_annotations = pd.concat(
[all_box_annotations, all_image_ids, all_label_annotations])
tf.logging.log(tf.logging.INFO, 'Found %d images...', len(all_image_ids))
with contextlib2.ExitStack() as tf_record_close_stack:
output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
tf_record_close_stack, FLAGS.output_tf_record_path_prefix,
FLAGS.num_shards)
for counter, image_data in enumerate(all_annotations.groupby('ImageID')):
tf.logging.log_every_n(tf.logging.INFO, 'Processed %d images...', 1000,
counter)
image_id, image_annotations = image_data
# In OID image file names are formed by appending ".jpg" to the image ID.
image_path = os.path.join(FLAGS.input_images_directory, image_id + '.jpg')
      with tf.gfile.Open(image_path, 'rb') as image_file:
encoded_image = image_file.read()
tf_example = oid_tfrecord_creation.tf_example_from_annotations_data_frame(
image_annotations, label_map, encoded_image)
if tf_example:
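        # OID image IDs are hexadecimal strings, so interpreting them as
        # base-16 integers gives a deterministic shard assignment.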
shard_idx = int(image_id, 16) % FLAGS.num_shards
output_tfrecords[shard_idx].write(tf_example.SerializeToString())
if __name__ == '__main__':
  tf.app.run()

# End of object_detection/dataset_tools/create_oid_tf_record.py
r"""Convert raw PASCAL dataset to TFRecord for object_detection.
Example usage:
python object_detection/dataset_tools/create_pascal_tf_record.py \
--data_dir=/home/user/VOCdevkit \
--year=VOC2012 \
--output_path=/home/user/pascal.record
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import hashlib
import io
import logging
import os
from lxml import etree
import PIL.Image
import tensorflow.compat.v1 as tf
from object_detection.utils import dataset_util
from object_detection.utils import label_map_util
flags = tf.app.flags
flags.DEFINE_string('data_dir', '', 'Root directory to raw PASCAL VOC dataset.')
flags.DEFINE_string('set', 'train', 'Convert training set, validation set or '
'merged set.')
flags.DEFINE_string('annotations_dir', 'Annotations',
'(Relative) path to annotations directory.')
flags.DEFINE_string('year', 'VOC2007', 'Desired challenge year.')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
flags.DEFINE_string('label_map_path', 'data/pascal_label_map.pbtxt',
'Path to label map proto')
flags.DEFINE_boolean('ignore_difficult_instances', False, 'Whether to ignore '
'difficult instances')
FLAGS = flags.FLAGS
SETS = ['train', 'val', 'trainval', 'test']
YEARS = ['VOC2007', 'VOC2012', 'merged']
def dict_to_tf_example(data,
dataset_directory,
label_map_dict,
ignore_difficult_instances=False,
image_subdirectory='JPEGImages'):
"""Convert XML derived dict to tf.Example proto.
Notice that this function normalizes the bounding box coordinates provided
by the raw data.
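
  For example (hypothetical values), a box corner at xmin=50 in a
  500-pixel-wide image is stored as 0.1.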
Args:
data: dict holding PASCAL XML fields for a single image (obtained by
running dataset_util.recursive_parse_xml_to_dict)
dataset_directory: Path to root directory holding PASCAL dataset
    label_map_dict: A map from string label names to integer ids.
ignore_difficult_instances: Whether to skip difficult instances in the
dataset (default: False).
image_subdirectory: String specifying subdirectory within the
PASCAL dataset directory holding the actual image data.
Returns:
example: The converted tf.Example.
Raises:
ValueError: if the image pointed to by data['filename'] is not a valid JPEG
"""
img_path = os.path.join(data['folder'], image_subdirectory, data['filename'])
full_path = os.path.join(dataset_directory, img_path)
with tf.gfile.GFile(full_path, 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = PIL.Image.open(encoded_jpg_io)
if image.format != 'JPEG':
raise ValueError('Image format not JPEG')
key = hashlib.sha256(encoded_jpg).hexdigest()
width = int(data['size']['width'])
height = int(data['size']['height'])
xmin = []
ymin = []
xmax = []
ymax = []
classes = []
classes_text = []
truncated = []
poses = []
difficult_obj = []
if 'object' in data:
for obj in data['object']:
difficult = bool(int(obj['difficult']))
if ignore_difficult_instances and difficult:
continue
difficult_obj.append(int(difficult))
xmin.append(float(obj['bndbox']['xmin']) / width)
ymin.append(float(obj['bndbox']['ymin']) / height)
xmax.append(float(obj['bndbox']['xmax']) / width)
ymax.append(float(obj['bndbox']['ymax']) / height)
classes_text.append(obj['name'].encode('utf8'))
classes.append(label_map_dict[obj['name']])
truncated.append(int(obj['truncated']))
poses.append(obj['pose'].encode('utf8'))
example = tf.train.Example(features=tf.train.Features(feature={
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(
data['filename'].encode('utf8')),
'image/source_id': dataset_util.bytes_feature(
data['filename'].encode('utf8')),
'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')),
'image/encoded': dataset_util.bytes_feature(encoded_jpg),
'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmin),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmax),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymin),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymax),
'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
'image/object/class/label': dataset_util.int64_list_feature(classes),
'image/object/difficult': dataset_util.int64_list_feature(difficult_obj),
'image/object/truncated': dataset_util.int64_list_feature(truncated),
'image/object/view': dataset_util.bytes_list_feature(poses),
}))
return example
def main(_):
if FLAGS.set not in SETS:
    raise ValueError('set must be in: {}'.format(SETS))
if FLAGS.year not in YEARS:
    raise ValueError('year must be in: {}'.format(YEARS))
data_dir = FLAGS.data_dir
years = ['VOC2007', 'VOC2012']
if FLAGS.year != 'merged':
years = [FLAGS.year]
writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
label_map_dict = label_map_util.get_label_map_dict(FLAGS.label_map_path)
for year in years:
logging.info('Reading from PASCAL %s dataset.', year)
examples_path = os.path.join(data_dir, year, 'ImageSets', 'Main',
'aeroplane_' + FLAGS.set + '.txt')
annotations_dir = os.path.join(data_dir, year, FLAGS.annotations_dir)
examples_list = dataset_util.read_examples_list(examples_path)
for idx, example in enumerate(examples_list):
if idx % 100 == 0:
logging.info('On image %d of %d', idx, len(examples_list))
path = os.path.join(annotations_dir, example + '.xml')
with tf.gfile.GFile(path, 'r') as fid:
xml_str = fid.read()
xml = etree.fromstring(xml_str)
data = dataset_util.recursive_parse_xml_to_dict(xml)['annotation']
tf_example = dict_to_tf_example(data, FLAGS.data_dir, label_map_dict,
FLAGS.ignore_difficult_instances)
writer.write(tf_example.SerializeToString())
writer.close()
if __name__ == '__main__':
  tf.app.run()

# End of object_detection/dataset_tools/create_pascal_tf_record.py
r"""An executable to expand image-level labels, boxes and segments.
The expansion is performed using class hierarchy, provided in JSON file.
The expected file formats are the following:
- for box and segment files: CSV file is expected to have LabelName field
- for image-level labels: CSV file is expected to have LabelName and Confidence
fields
Note, that LabelName is the only field used for expansion.
Example usage:
python models/research/object_detection/dataset_tools/\
oid_hierarchical_labels_expansion.py \
--json_hierarchy_file=<path to JSON hierarchy> \
--input_annotations=<input csv file> \
--output_annotations=<output csv file> \
--annotation_type=<1 (for boxes and segments) or 2 (for image-level labels)>
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import json
from absl import app
from absl import flags
import six
flags.DEFINE_string(
'json_hierarchy_file', None,
'Path to the file containing label hierarchy in JSON format.')
flags.DEFINE_string(
'input_annotations', None, 'Path to Open Images annotations file'
'(either bounding boxes, segments or image-level labels).')
flags.DEFINE_string('output_annotations', None, 'Path to the output file.')
flags.DEFINE_integer(
'annotation_type', None,
'Type of the input annotations: 1 - boxes or segments,'
'2 - image-level labels.'
)
FLAGS = flags.FLAGS
def _update_dict(initial_dict, update):
"""Updates dictionary with update content.
Args:
    initial_dict: a dictionary of sets, updated in place.
    update: a dictionary mapping keys to iterables of values to merge into
      initial_dict.
"""
for key, value_list in update.items():
if key in initial_dict:
initial_dict[key].update(value_list)
else:
initial_dict[key] = set(value_list)
def _build_plain_hierarchy(hierarchy, skip_root=False):
"""Expands tree hierarchy representation to parent-child dictionary.
Args:
hierarchy: labels hierarchy as JSON file.
    skip_root: if True, skips the root node (used when all classes in the
      hierarchy are collected under a virtual root node).
Returns:
    keyed_parent - dictionary mapping each parent node to all of its children.
    keyed_child - dictionary mapping each child node to all of its parents.
    children - all children of the current node.
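
  For example (hypothetical labels), the hierarchy
  {'LabelName': 'a', 'Subcategory': [{'LabelName': 'b'}]} with skip_root=False
  yields keyed_parent = {'a': {'b'}, 'b': set()},
  keyed_child = {'a': set(), 'b': {'a'}} and children = {'a', 'b'}.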
"""
all_children = set([])
all_keyed_parent = {}
all_keyed_child = {}
if 'Subcategory' in hierarchy:
for node in hierarchy['Subcategory']:
keyed_parent, keyed_child, children = _build_plain_hierarchy(node)
      # Update is not done through dict.update() since some children have
      # multiple parents in the hierarchy.
_update_dict(all_keyed_parent, keyed_parent)
_update_dict(all_keyed_child, keyed_child)
all_children.update(children)
if not skip_root:
all_keyed_parent[hierarchy['LabelName']] = copy.deepcopy(all_children)
all_children.add(hierarchy['LabelName'])
for child, _ in all_keyed_child.items():
all_keyed_child[child].add(hierarchy['LabelName'])
all_keyed_child[hierarchy['LabelName']] = set([])
return all_keyed_parent, all_keyed_child, all_children
class OIDHierarchicalLabelsExpansion(object):
""" Main class to perform labels hierachical expansion."""
def __init__(self, hierarchy):
"""Constructor.
Args:
hierarchy: labels hierarchy as JSON object.
"""
self._hierarchy_keyed_parent, self._hierarchy_keyed_child, _ = (
_build_plain_hierarchy(hierarchy, skip_root=True))
def expand_boxes_or_segments_from_csv(self, csv_row,
labelname_column_index=1):
"""Expands a row containing bounding boxes/segments from CSV file.
Args:
      csv_row: a single row of an Open Images groundtruth file.
labelname_column_index: 0-based index of LabelName column in CSV file.
Returns:
a list of strings (including the initial row) corresponding to the ground
      truth expanded to multiple annotations for evaluation with Open Images
Challenge 2018/2019 metrics.
"""
# Row header is expected to be the following for boxes:
# ImageID,LabelName,Confidence,XMin,XMax,YMin,YMax,IsGroupOf
# Row header is expected to be the following for segments:
# ImageID,LabelName,ImageWidth,ImageHeight,XMin,XMax,YMin,YMax,
# IsGroupOf,Mask
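    # For example (hypothetical labels): a box labeled 'cat' is duplicated
    # once per ancestor class (e.g. 'mammal'), with all other fields kept.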
split_csv_row = six.ensure_str(csv_row).split(',')
result = [csv_row]
assert split_csv_row[
labelname_column_index] in self._hierarchy_keyed_child
parent_nodes = self._hierarchy_keyed_child[
split_csv_row[labelname_column_index]]
for parent_node in parent_nodes:
split_csv_row[labelname_column_index] = parent_node
result.append(','.join(split_csv_row))
return result
def expand_labels_from_csv(self,
csv_row,
labelname_column_index=1,
confidence_column_index=2):
"""Expands a row containing labels from CSV file.
Args:
      csv_row: a single row of an Open Images groundtruth file.
labelname_column_index: 0-based index of LabelName column in CSV file.
confidence_column_index: 0-based index of Confidence column in CSV file.
Returns:
a list of strings (including the initial row) corresponding to the ground
      truth expanded to multiple annotations for evaluation with Open Images
Challenge 2018/2019 metrics.
"""
# Row header is expected to be exactly:
# ImageID,Source,LabelName,Confidence
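    # For example (hypothetical labels): a positive (Confidence=1) 'cat' row
    # is duplicated for each ancestor such as 'mammal', while a negative
    # (Confidence=0) 'animal' row is duplicated for each descendant.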
split_csv_row = six.ensure_str(csv_row).split(',')
result = [csv_row]
if int(split_csv_row[confidence_column_index]) == 1:
assert split_csv_row[
labelname_column_index] in self._hierarchy_keyed_child
parent_nodes = self._hierarchy_keyed_child[
split_csv_row[labelname_column_index]]
for parent_node in parent_nodes:
split_csv_row[labelname_column_index] = parent_node
result.append(','.join(split_csv_row))
else:
assert split_csv_row[
labelname_column_index] in self._hierarchy_keyed_parent
child_nodes = self._hierarchy_keyed_parent[
split_csv_row[labelname_column_index]]
for child_node in child_nodes:
split_csv_row[labelname_column_index] = child_node
result.append(','.join(split_csv_row))
return result
def main(unused_args):
del unused_args
with open(FLAGS.json_hierarchy_file) as f:
hierarchy = json.load(f)
expansion_generator = OIDHierarchicalLabelsExpansion(hierarchy)
labels_file = False
if FLAGS.annotation_type == 2:
labels_file = True
elif FLAGS.annotation_type != 1:
print('--annotation_type expected value is 1 or 2.')
return -1
confidence_column_index = -1
labelname_column_index = -1
with open(FLAGS.input_annotations, 'r') as source:
with open(FLAGS.output_annotations, 'w') as target:
header = source.readline()
target.writelines([header])
column_names = header.strip().split(',')
labelname_column_index = column_names.index('LabelName')
if labels_file:
confidence_column_index = column_names.index('Confidence')
for line in source:
if labels_file:
expanded_lines = expansion_generator.expand_labels_from_csv(
line, labelname_column_index, confidence_column_index)
else:
expanded_lines = (
expansion_generator.expand_boxes_or_segments_from_csv(
line, labelname_column_index))
target.writelines(expanded_lines)
if __name__ == '__main__':
flags.mark_flag_as_required('json_hierarchy_file')
flags.mark_flag_as_required('input_annotations')
flags.mark_flag_as_required('output_annotations')
flags.mark_flag_as_required('annotation_type')
  app.run(main)

# End of object_detection/dataset_tools/oid_hierarchical_labels_expansion.py
"""Common utility for object detection tf.train.SequenceExamples."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
def context_float_feature(ndarray):
"""Converts a numpy float array to a context float feature.
Args:
ndarray: A numpy float array.
Returns:
A context float feature.
"""
feature = tf.train.Feature()
for val in ndarray:
feature.float_list.value.append(val)
return feature
def context_int64_feature(ndarray):
"""Converts a numpy array to a context int64 feature.
Args:
ndarray: A numpy int64 array.
Returns:
A context int64 feature.
"""
feature = tf.train.Feature()
for val in ndarray:
feature.int64_list.value.append(val)
return feature
def context_bytes_feature(ndarray):
"""Converts a numpy bytes array to a context bytes feature.
Args:
ndarray: A numpy bytes array.
Returns:
A context bytes feature.
"""
feature = tf.train.Feature()
for val in ndarray:
if isinstance(val, np.ndarray):
val = val.tolist()
feature.bytes_list.value.append(tf.compat.as_bytes(val))
return feature
def sequence_float_feature(ndarray):
"""Converts a numpy float array to a sequence float feature.
Args:
ndarray: A numpy float array.
Returns:
A sequence float feature.
"""
feature_list = tf.train.FeatureList()
for row in ndarray:
feature = feature_list.feature.add()
if row.size:
feature.float_list.value[:] = row
return feature_list
def sequence_int64_feature(ndarray):
"""Converts a numpy int64 array to a sequence int64 feature.
Args:
ndarray: A numpy int64 array.
Returns:
A sequence int64 feature.
"""
feature_list = tf.train.FeatureList()
for row in ndarray:
feature = feature_list.feature.add()
if row.size:
feature.int64_list.value[:] = row
return feature_list
def sequence_bytes_feature(ndarray):
"""Converts a bytes float array to a sequence bytes feature.
Args:
ndarray: A numpy bytes array.
Returns:
A sequence bytes feature.
"""
feature_list = tf.train.FeatureList()
for row in ndarray:
if isinstance(row, np.ndarray):
row = row.tolist()
feature = feature_list.feature.add()
if row:
row = [tf.compat.as_bytes(val) for val in row]
feature.bytes_list.value[:] = row
return feature_list
def sequence_strings_feature(strings):
  """Converts a list of Python strings to a sequence bytes feature."""
new_str_arr = []
for single_str in strings:
new_str_arr.append(tf.train.Feature(
bytes_list=tf.train.BytesList(
value=[single_str.encode('utf8')])))
return tf.train.FeatureList(feature=new_str_arr)
def boxes_to_box_components(bboxes):
"""Converts a list of numpy arrays (boxes) to box components.
Args:
bboxes: A numpy array of bounding boxes.
Returns:
Bounding box component lists.
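
  For example (hypothetical values), for two frames, one with a single box
  and one with none:

    boxes_to_box_components([[[0.1, 0.2, 0.3, 0.4]], []])

  returns four lists (ymin, xmin, ymax, xmax), each holding one array per
  frame, e.g. ymin_list = [array([0.1]), array([])].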
"""
ymin_list = []
xmin_list = []
ymax_list = []
xmax_list = []
for bbox in bboxes:
if bbox != []: # pylint: disable=g-explicit-bool-comparison
bbox = np.array(bbox).astype(np.float32)
ymin, xmin, ymax, xmax = np.split(bbox, 4, axis=1)
else:
ymin, xmin, ymax, xmax = [], [], [], []
ymin_list.append(np.reshape(ymin, [-1]))
xmin_list.append(np.reshape(xmin, [-1]))
ymax_list.append(np.reshape(ymax, [-1]))
xmax_list.append(np.reshape(xmax, [-1]))
return ymin_list, xmin_list, ymax_list, xmax_list
def make_sequence_example(dataset_name,
video_id,
encoded_images,
image_height,
image_width,
image_format=None,
image_source_ids=None,
timestamps=None,
is_annotated=None,
bboxes=None,
label_strings=None,
detection_bboxes=None,
detection_classes=None,
detection_scores=None,
use_strs_for_source_id=False,
context_features=None,
context_feature_length=None,
context_features_image_id_list=None):
"""Constructs tf.SequenceExamples.
Args:
dataset_name: String with dataset name.
video_id: String with video id.
encoded_images: A [num_frames] list (or numpy array) of encoded image
frames.
image_height: Height of the images.
image_width: Width of the images.
image_format: Format of encoded images.
image_source_ids: (Optional) A [num_frames] list of unique string ids for
each image.
    timestamps: (Optional) A [num_frames] list (or numpy array) of image
      timestamps.
    is_annotated: (Optional) A [num_frames] list (or numpy array)
in which each element indicates whether the frame has been annotated
(1) or not (0).
bboxes: (Optional) A list (with num_frames elements) of [num_boxes_i, 4]
numpy float32 arrays holding boxes for each frame.
    label_strings: (Optional) A list (with num_frames elements) of [num_boxes_i]
numpy string arrays holding object string labels for each frame.
detection_bboxes: (Optional) A list (with num_frames elements) of
[num_boxes_i, 4] numpy float32 arrays holding prediction boxes for each
frame.
    detection_classes: (Optional) A list (with num_frames elements) of
[num_boxes_i] numpy int64 arrays holding predicted classes for each frame.
    detection_scores: (Optional) A list (with num_frames elements) of
[num_boxes_i] numpy float32 arrays holding predicted object scores for
each frame.
use_strs_for_source_id: (Optional) Whether to write the source IDs as
strings rather than byte lists of characters.
context_features: (Optional) A list or numpy array of features to use in
Context R-CNN, of length num_context_features * context_feature_length.
context_feature_length: (Optional) The length of each context feature, used
for reshaping.
context_features_image_id_list: (Optional) A list of image ids of length
num_context_features corresponding to the context features.
Returns:
A tf.train.SequenceExample.
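
  Example (a minimal sketch; `frame0` and `frame1` stand for encoded JPEG
  byte strings, and all other values are made up):

    seq_example = make_sequence_example(
        dataset_name='example_dataset',
        video_id='video_0',
        encoded_images=[frame0, frame1],
        image_height=480,
        image_width=640,
        image_format='JPEG',
        timestamps=[0, 40],
        bboxes=[[[0.1, 0.1, 0.5, 0.5]], []],
        label_strings=[['cat'], []])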
"""
num_frames = len(encoded_images)
image_encoded = np.expand_dims(encoded_images, axis=-1)
if timestamps is None:
timestamps = np.arange(num_frames)
image_timestamps = np.expand_dims(timestamps, axis=-1)
# Context fields.
context_dict = {
'example/dataset_name': context_bytes_feature([dataset_name]),
'clip/start/timestamp': context_int64_feature([image_timestamps[0][0]]),
'clip/end/timestamp': context_int64_feature([image_timestamps[-1][0]]),
'clip/frames': context_int64_feature([num_frames]),
'image/channels': context_int64_feature([3]),
'image/height': context_int64_feature([image_height]),
'image/width': context_int64_feature([image_width]),
'clip/media_id': context_bytes_feature([video_id])
}
# Sequence fields.
feature_list = {
'image/encoded': sequence_bytes_feature(image_encoded),
'image/timestamp': sequence_int64_feature(image_timestamps),
}
# Add optional fields.
if image_format is not None:
context_dict['image/format'] = context_bytes_feature([image_format])
if image_source_ids is not None:
if use_strs_for_source_id:
feature_list['image/source_id'] = sequence_strings_feature(
image_source_ids)
else:
feature_list['image/source_id'] = sequence_bytes_feature(image_source_ids)
if bboxes is not None:
bbox_ymin, bbox_xmin, bbox_ymax, bbox_xmax = boxes_to_box_components(bboxes)
feature_list['region/bbox/xmin'] = sequence_float_feature(bbox_xmin)
feature_list['region/bbox/xmax'] = sequence_float_feature(bbox_xmax)
feature_list['region/bbox/ymin'] = sequence_float_feature(bbox_ymin)
feature_list['region/bbox/ymax'] = sequence_float_feature(bbox_ymax)
if is_annotated is None:
is_annotated = np.ones(num_frames, dtype=np.int64)
is_annotated = np.expand_dims(is_annotated, axis=-1)
feature_list['region/is_annotated'] = sequence_int64_feature(is_annotated)
if label_strings is not None:
feature_list['region/label/string'] = sequence_bytes_feature(
label_strings)
if detection_bboxes is not None:
det_bbox_ymin, det_bbox_xmin, det_bbox_ymax, det_bbox_xmax = (
boxes_to_box_components(detection_bboxes))
feature_list['predicted/region/bbox/xmin'] = sequence_float_feature(
det_bbox_xmin)
feature_list['predicted/region/bbox/xmax'] = sequence_float_feature(
det_bbox_xmax)
feature_list['predicted/region/bbox/ymin'] = sequence_float_feature(
det_bbox_ymin)
feature_list['predicted/region/bbox/ymax'] = sequence_float_feature(
det_bbox_ymax)
if detection_classes is not None:
feature_list['predicted/region/label/index'] = sequence_int64_feature(
detection_classes)
if detection_scores is not None:
feature_list['predicted/region/label/confidence'] = sequence_float_feature(
detection_scores)
if context_features is not None:
context_dict['image/context_features'] = context_float_feature(
context_features)
if context_feature_length is not None:
context_dict['image/context_feature_length'] = context_int64_feature(
context_feature_length)
if context_features_image_id_list is not None:
context_dict['image/context_features_image_id_list'] = (
context_bytes_feature(context_features_image_id_list))
context = tf.train.Features(feature=context_dict)
feature_lists = tf.train.FeatureLists(feature_list=feature_list)
sequence_example = tf.train.SequenceExample(
context=context,
feature_lists=feature_lists)
  return sequence_example

# End of object_detection/dataset_tools/seq_example_util.py
r"""A Beam job to generate detection data for camera trap images.
This tool runs inference with an exported Object Detection model in
`saved_model` format and produces raw detection boxes on images in
tf.Examples, with the assumption that the bounding box class label will match
the image-level class label in the tf.Example.
Steps to generate a detection dataset:
1. Use object_detection/export_inference_graph.py to get a `saved_model` for
inference. The input node must accept a tf.Example proto.
2. Run this tool with `saved_model` from step 1 and a TFRecord of tf.Example
protos containing images for inference.
Example Usage:
--------------
python tensorflow_models/object_detection/export_inference_graph.py \
--alsologtostderr \
--input_type tf_example \
--pipeline_config_path path/to/detection_model.config \
--trained_checkpoint_prefix path/to/model.ckpt \
--output_directory path/to/exported_model_directory
python generate_detection_data.py \
--alsologtostderr \
--input_tfrecord path/to/input_tfrecord@X \
--output_tfrecord path/to/output_tfrecord@X \
--model_dir path/to/exported_model_directory/saved_model
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os
import threading
import tensorflow as tf
try:
import apache_beam as beam # pylint:disable=g-import-not-at-top
except ModuleNotFoundError:
pass
class GenerateDetectionDataFn(beam.DoFn):
"""Generates detection data for camera trap images.
This Beam DoFn performs inference with an object detection `saved_model` and
produces detection boxes for camera trap data, matched to the
object class.
"""
session_lock = threading.Lock()
def __init__(self, model_dir, confidence_threshold):
"""Initialization function.
Args:
model_dir: A directory containing saved model.
confidence_threshold: the confidence threshold for boxes to keep
"""
self._model_dir = model_dir
self._confidence_threshold = confidence_threshold
self._session = None
self._num_examples_processed = beam.metrics.Metrics.counter(
'detection_data_generation', 'num_tf_examples_processed')
def setup(self):
self._load_inference_model()
def _load_inference_model(self):
    # Loading the saved model is expensive, so we share one instance across
    # all threads in the worker. This is possible since inference calls on
    # the loaded model are thread safe.
with self.session_lock:
self._detect_fn = tf.saved_model.load(self._model_dir)
def process(self, tfrecord_entry):
return self._run_inference_and_generate_detections(tfrecord_entry)
def _run_inference_and_generate_detections(self, tfrecord_entry):
input_example = tf.train.Example.FromString(tfrecord_entry)
if input_example.features.feature[
'image/object/bbox/ymin'].float_list.value:
# There are already ground truth boxes for this image, just keep them.
return [input_example]
detections = self._detect_fn.signatures['serving_default'](
(tf.expand_dims(tf.convert_to_tensor(tfrecord_entry), 0)))
detection_boxes = detections['detection_boxes']
num_detections = detections['num_detections']
detection_scores = detections['detection_scores']
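    # The serving signature returns batched tensors; the [0] indexing below
    # selects the results for the single image in the batch.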
example = tf.train.Example()
num_detections = int(num_detections[0])
image_class_labels = input_example.features.feature[
'image/object/class/label'].int64_list.value
image_class_texts = input_example.features.feature[
'image/object/class/text'].bytes_list.value
# Ignore any images with multiple classes,
# we can't match the class to the box.
if len(image_class_labels) > 1:
return []
# Don't add boxes for images already labeled empty (for now)
if len(image_class_labels) == 1:
# Add boxes over confidence threshold.
for idx, score in enumerate(detection_scores[0]):
if score >= self._confidence_threshold and idx < num_detections:
example.features.feature[
'image/object/bbox/ymin'].float_list.value.extend([
detection_boxes[0, idx, 0]])
example.features.feature[
'image/object/bbox/xmin'].float_list.value.extend([
detection_boxes[0, idx, 1]])
example.features.feature[
'image/object/bbox/ymax'].float_list.value.extend([
detection_boxes[0, idx, 2]])
example.features.feature[
'image/object/bbox/xmax'].float_list.value.extend([
detection_boxes[0, idx, 3]])
# Add box scores and class texts and labels.
example.features.feature[
'image/object/class/score'].float_list.value.extend(
[score])
example.features.feature[
'image/object/class/label'].int64_list.value.extend(
[image_class_labels[0]])
example.features.feature[
'image/object/class/text'].bytes_list.value.extend(
[image_class_texts[0]])
# Add other essential example attributes
example.features.feature['image/encoded'].bytes_list.value.extend(
input_example.features.feature['image/encoded'].bytes_list.value)
example.features.feature['image/height'].int64_list.value.extend(
input_example.features.feature['image/height'].int64_list.value)
example.features.feature['image/width'].int64_list.value.extend(
input_example.features.feature['image/width'].int64_list.value)
example.features.feature['image/source_id'].bytes_list.value.extend(
input_example.features.feature['image/source_id'].bytes_list.value)
example.features.feature['image/location'].bytes_list.value.extend(
input_example.features.feature['image/location'].bytes_list.value)
example.features.feature['image/date_captured'].bytes_list.value.extend(
input_example.features.feature['image/date_captured'].bytes_list.value)
example.features.feature['image/class/text'].bytes_list.value.extend(
input_example.features.feature['image/class/text'].bytes_list.value)
example.features.feature['image/class/label'].int64_list.value.extend(
input_example.features.feature['image/class/label'].int64_list.value)
example.features.feature['image/seq_id'].bytes_list.value.extend(
input_example.features.feature['image/seq_id'].bytes_list.value)
example.features.feature['image/seq_num_frames'].int64_list.value.extend(
input_example.features.feature['image/seq_num_frames'].int64_list.value)
example.features.feature['image/seq_frame_num'].int64_list.value.extend(
input_example.features.feature['image/seq_frame_num'].int64_list.value)
self._num_examples_processed.inc(1)
return [example]
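

# A minimal sketch (not part of the original pipeline below) of exercising
# GenerateDetectionDataFn on its own with Beam's default DirectRunner, e.g.
# as a quick smoke test. The /tmp paths here are illustrative placeholders,
# not real data:
#
#   with beam.Pipeline() as p:
#     _ = (p
#          | beam.io.tfrecordio.ReadFromTFRecord(
#              '/tmp/images.tfrecord', coder=beam.coders.BytesCoder())
#          | beam.ParDo(GenerateDetectionDataFn('/tmp/saved_model', 0.9))
#          | beam.io.tfrecordio.WriteToTFRecord(
#              '/tmp/detections.tfrecord',
#              coder=beam.coders.ProtoCoder(tf.train.Example)))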


def construct_pipeline(pipeline, input_tfrecord, output_tfrecord, model_dir,
                       confidence_threshold, num_shards):
  """Builds a Beam pipeline that runs object detection inference.

  Args:
    pipeline: Initialized beam pipeline.
    input_tfrecord: A TFRecord of tf.train.Example protos containing images.
    output_tfrecord: A TFRecord of tf.train.Example protos that contain images
      in the input TFRecord and the detections from the model.
    model_dir: Path to `saved_model` to use for inference.
    confidence_threshold: Threshold to use when keeping detection results.
    num_shards: The number of output shards.
  """
input_collection = (
pipeline | 'ReadInputTFRecord' >> beam.io.tfrecordio.ReadFromTFRecord(
input_tfrecord,
coder=beam.coders.BytesCoder()))
output_collection = input_collection | 'RunInference' >> beam.ParDo(
GenerateDetectionDataFn(model_dir, confidence_threshold))
  # Reshuffle breaks fusion so that inference and writing can be
  # parallelized independently across workers.
  output_collection = output_collection | 'Reshuffle' >> beam.Reshuffle()
  _ = output_collection | 'WriteToDisk' >> beam.io.tfrecordio.WriteToTFRecord(
output_tfrecord,
num_shards=num_shards,
coder=beam.coders.ProtoCoder(tf.train.Example))


def parse_args(argv):
  """Command-line argument parser.

  Args:
    argv: Command line arguments.

  Returns:
    beam_args: Arguments for the beam pipeline.
    pipeline_args: Arguments for the pipeline options, such as runner type.
  """
parser = argparse.ArgumentParser()
parser.add_argument(
'--detection_input_tfrecord',
dest='detection_input_tfrecord',
required=True,
help='TFRecord containing images in tf.Example format for object '
'detection.')
parser.add_argument(
'--detection_output_tfrecord',
dest='detection_output_tfrecord',
required=True,
help='TFRecord containing detections in tf.Example format.')
parser.add_argument(
'--detection_model_dir',
dest='detection_model_dir',
required=True,
help='Path to directory containing an object detection SavedModel.')
  parser.add_argument(
      '--confidence_threshold',
      dest='confidence_threshold',
      type=float,
      default=0.9,
      help='Min confidence to keep bounding boxes.')
  parser.add_argument(
      '--num_shards',
      dest='num_shards',
      type=int,
      default=0,
      help='Number of output shards.')
beam_args, pipeline_args = parser.parse_known_args(argv)
return beam_args, pipeline_args


def main(argv=None, save_main_session=True):
  """Runs the Beam pipeline that performs inference.

  Args:
    argv: Command line arguments.
    save_main_session: Whether to save the main session.
  """
args, pipeline_args = parse_args(argv)
pipeline_options = beam.options.pipeline_options.PipelineOptions(
pipeline_args)
pipeline_options.view_as(
beam.options.pipeline_options.SetupOptions).save_main_session = (
save_main_session)
dirname = os.path.dirname(args.detection_output_tfrecord)
tf.io.gfile.makedirs(dirname)
p = beam.Pipeline(options=pipeline_options)
construct_pipeline(
p,
args.detection_input_tfrecord,
args.detection_output_tfrecord,
args.detection_model_dir,
args.confidence_threshold,
args.num_shards)
p.run()


if __name__ == '__main__':
main() | 123-object-detection | /123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/dataset_tools/context_rcnn/generate_detection_data.py | generate_detection_data.py |
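
# Example invocation (a sketch; the flag values below are illustrative
# placeholders, not real paths):
#
#   python generate_detection_data.py \
#     --detection_input_tfrecord=/path/to/input.tfrecord \
#     --detection_output_tfrecord=/path/to/output.tfrecord \
#     --detection_model_dir=/path/to/saved_model \
#     --confidence_threshold=0.9 \
#     --num_shards=10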