instance_id | text | repo | base_commit | problem_statement | hints_text | created_at | patch | test_patch | version | FAIL_TO_PASS | PASS_TO_PASS | environment_setup_commit
---|---|---|---|---|---|---|---|---|---|---|---|---
celery__celery-2840 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True`, if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit, the `exc_info.internal` comes in as `false`, which means it is not an internal error, and so the message is acknowledged.
The desirable behaviour in such a case would be to not acknowledge the message (and to be able to know whether it's an OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa) where celery acknowledges the message, because in that case the message would be lost.
</issue>
<code>
[start of README.rst]
1 =================================
2 celery - Distributed Task Queue
3 =================================
4
5 .. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
6
7 |build-status| |coverage-status|
8
9 :Version: 3.2.0a1 (Cipater)
10 :Web: http://celeryproject.org/
11 :Download: http://pypi.python.org/pypi/celery/
12 :Source: http://github.com/celery/celery/
13 :Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
14 python, webhooks, queue, distributed
15
16 --
17
18 What is a Task Queue?
19 =====================
20
21 Task queues are used as a mechanism to distribute work across threads or
22 machines.
23
24 A task queue's input is a unit of work called a task. Dedicated worker
25 processes then constantly monitor the queue for new work to perform.
26
27 Celery communicates via messages, usually using a broker
28 to mediate between clients and workers. To initiate a task a client puts a
29 message on the queue, the broker then delivers the message to a worker.
30
31 A Celery system can consist of multiple workers and brokers, giving way
32 to high availability and horizontal scaling.
33
34 Celery is a library written in Python, but the protocol can be implemented in
35 any language. So far there's RCelery_ for the Ruby programming language, and a
36 `PHP client`_, but language interoperability can also be achieved
37 by `using webhooks`_.
38
39 .. _RCelery: https://github.com/leapfrogonline/rcelery
40 .. _`PHP client`: https://github.com/gjedeer/celery-php
41 .. _`using webhooks`:
42 http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
43
44 What do I need?
45 ===============
46
47 Celery version 3.0 runs on,
48
49 - Python (2.6, 2.7, 3.3, 3.4)
50 - PyPy (1.8, 1.9)
51 - Jython (2.5, 2.7).
52
53 This is the last version to support Python 2.5,
54 and from Celery 3.1, Python 2.6 or later is required.
55 The last version to support Python 2.4 was Celery series 2.2.
56
57 *Celery* is usually used with a message broker to send and receive messages.
58 The RabbitMQ and Redis transports are feature complete,
59 but there's also experimental support for a myriad of other solutions, including
60 using SQLite for local development.
61
62 *Celery* can run on a single machine, on multiple machines, or even
63 across datacenters.
64
65 Get Started
66 ===========
67
68 If this is the first time you're trying to use Celery, or you are
69 new to Celery 3.0 coming from previous versions, then you should read our
70 getting started tutorials:
71
72 - `First steps with Celery`_
73
74 Tutorial teaching you the bare minimum needed to get started with Celery.
75
76 - `Next steps`_
77
78 A more complete overview, showing more features.
79
80 .. _`First steps with Celery`:
81 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
82
83 .. _`Next steps`:
84 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
85
86 Celery is...
87 ============
88
89 - **Simple**
90
91 Celery is easy to use and maintain, and does *not need configuration files*.
92
93 It has an active, friendly community you can talk to for support,
94 including a `mailing-list`_ and an IRC channel.
95
96 Here's one of the simplest applications you can make::
97
98 from celery import Celery
99
100 app = Celery('hello', broker='amqp://guest@localhost//')
101
102 @app.task
103 def hello():
104 return 'hello world'
105
106 - **Highly Available**
107
108 Workers and clients will automatically retry in the event
109 of connection loss or failure, and some brokers support
110 HA in the way of *Master/Master* or *Master/Slave* replication.
111
112 - **Fast**
113
114 A single Celery process can process millions of tasks a minute,
115 with sub-millisecond round-trip latency (using RabbitMQ,
116 py-librabbitmq, and optimized settings).
117
118 - **Flexible**
119
120 Almost every part of *Celery* can be extended or used on its own.
121 Custom pool implementations, serializers, compression schemes, logging,
122 schedulers, consumers, producers, autoscalers, broker transports and much more.
123
124 It supports...
125 ==============
126
127 - **Message Transports**
128
129 - RabbitMQ_, Redis_,
130 - MongoDB_ (experimental), Amazon SQS (experimental),
131 - CouchDB_ (experimental), SQLAlchemy_ (experimental),
132 - Django ORM (experimental), `IronMQ`_
133 - and more...
134
135 - **Concurrency**
136
137 - Prefork, Eventlet_, gevent_, threads/single threaded
138
139 - **Result Stores**
140
141 - AMQP, Redis
142 - memcached, MongoDB
143 - SQLAlchemy, Django ORM
144 - Apache Cassandra, IronCache
145
146 - **Serialization**
147
148 - *pickle*, *json*, *yaml*, *msgpack*.
149 - *zlib*, *bzip2* compression.
150 - Cryptographic message signing.
151
152 .. _`Eventlet`: http://eventlet.net/
153 .. _`gevent`: http://gevent.org/
154
155 .. _RabbitMQ: http://rabbitmq.com
156 .. _Redis: http://redis.io
157 .. _MongoDB: http://mongodb.org
158 .. _Beanstalk: http://kr.github.com/beanstalkd
159 .. _CouchDB: http://couchdb.apache.org
160 .. _SQLAlchemy: http://sqlalchemy.org
161 .. _`IronMQ`: http://iron.io
162
163 Framework Integration
164 =====================
165
166 Celery is easy to integrate with web frameworks, some of which even have
167 integration packages:
168
169 +--------------------+----------------------------------------------------+
170 | `Django`_ | not needed |
171 +--------------------+----------------------------------------------------+
172 | `Pyramid`_ | `pyramid_celery`_ |
173 +--------------------+----------------------------------------------------+
174 | `Pylons`_ | `celery-pylons`_ |
175 +--------------------+----------------------------------------------------+
176 | `Flask`_ | not needed |
177 +--------------------+----------------------------------------------------+
178 | `web2py`_ | `web2py-celery`_ |
179 +--------------------+----------------------------------------------------+
180 | `Tornado`_ | `tornado-celery`_, `another tornado-celery`_ |
181 +--------------------+----------------------------------------------------+
182
183 The integration packages are not strictly necessary, but they can make
184 development easier, and sometimes they add important hooks like closing
185 database connections at ``fork``.
186
187 .. _`Django`: http://djangoproject.com/
188 .. _`Pylons`: http://www.pylonsproject.org/
189 .. _`Flask`: http://flask.pocoo.org/
190 .. _`web2py`: http://web2py.com/
191 .. _`Bottle`: http://bottlepy.org/
192 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
193 .. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
194 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
195 .. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
196 .. _`web2py-celery`: http://code.google.com/p/web2py-celery/
197 .. _`Tornado`: http://www.tornadoweb.org/
198 .. _`tornado-celery`: http://github.com/mher/tornado-celery/
199 .. _`another tornado-celery`: https://github.com/mayflaver/tornado-celery
200
201 .. _celery-documentation:
202
203 Documentation
204 =============
205
206 The `latest documentation`_ with user guides, tutorials and API reference
207 is hosted at Read The Docs.
208
209 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
210
211 .. _celery-installation:
212
213 Installation
214 ============
215
216 You can install Celery either via the Python Package Index (PyPI)
217 or from source.
218
219 To install using `pip`,::
220
221 $ pip install -U Celery
222
223 To install using `easy_install`,::
224
225 $ easy_install -U Celery
226
227 .. _bundles:
228
229 Bundles
230 -------
231
232 Celery also defines a group of bundles that can be used
233 to install Celery and the dependencies for a given feature.
234
235 You can specify these in your requirements or on the ``pip`` command-line
236 by using brackets. Multiple bundles can be specified by separating them by
237 commas.
238 ::
239
240 $ pip install "celery[librabbitmq]"
241
242 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
243
244 The following bundles are available:
245
246 Serializers
247 ~~~~~~~~~~~
248
249 :celery[auth]:
250 for using the auth serializer.
251
252 :celery[msgpack]:
253 for using the msgpack serializer.
254
255 :celery[yaml]:
256 for using the yaml serializer.
257
258 Concurrency
259 ~~~~~~~~~~~
260
261 :celery[eventlet]:
262 for using the eventlet pool.
263
264 :celery[gevent]:
265 for using the gevent pool.
266
267 :celery[threads]:
268 for using the thread pool.
269
270 Transports and Backends
271 ~~~~~~~~~~~~~~~~~~~~~~~
272
273 :celery[librabbitmq]:
274 for using the librabbitmq C library.
275
276 :celery[redis]:
277 for using Redis as a message transport or as a result backend.
278
279 :celery[mongodb]:
280 for using MongoDB as a message transport (*experimental*),
281 or as a result backend (*supported*).
282
283 :celery[sqs]:
284 for using Amazon SQS as a message transport (*experimental*).
285
286 :celery[memcache]:
287 for using memcached as a result backend.
288
289 :celery[cassandra]:
290 for using Apache Cassandra as a result backend.
291
292 :celery[couchdb]:
293 for using CouchDB as a message transport (*experimental*).
294
295 :celery[couchbase]:
296 for using CouchBase as a result backend.
297
298 :celery[beanstalk]:
299 for using Beanstalk as a message transport (*experimental*).
300
301 :celery[zookeeper]:
302 for using Zookeeper as a message transport.
303
304 :celery[zeromq]:
305 for using ZeroMQ as a message transport (*experimental*).
306
307 :celery[sqlalchemy]:
308 for using SQLAlchemy as a message transport (*experimental*),
309 or as a result backend (*supported*).
310
311 :celery[pyro]:
312 for using the Pyro4 message transport (*experimental*).
313
314 :celery[slmq]:
315 for using the SoftLayer Message Queue transport (*experimental*).
316
317 .. _celery-installing-from-source:
318
319 Downloading and installing from source
320 --------------------------------------
321
322 Download the latest version of Celery from
323 http://pypi.python.org/pypi/celery/
324
325 You can install it by doing the following,::
326
327 $ tar xvfz celery-0.0.0.tar.gz
328 $ cd celery-0.0.0
329 $ python setup.py build
330 # python setup.py install
331
332 The last command must be executed as a privileged user if
333 you are not currently using a virtualenv.
334
335 .. _celery-installing-from-git:
336
337 Using the development version
338 -----------------------------
339
340 With pip
341 ~~~~~~~~
342
343 The Celery development version also requires the development
344 versions of ``kombu``, ``amqp`` and ``billiard``.
345
346 You can install the latest snapshot of these using the following
347 pip commands::
348
349 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
350 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
351 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
352 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
353
354 With git
355 ~~~~~~~~
356
357 Please see the Contributing section.
358
359 .. _getting-help:
360
361 Getting Help
362 ============
363
364 .. _mailing-list:
365
366 Mailing list
367 ------------
368
369 For discussions about the usage, development, and future of celery,
370 please join the `celery-users`_ mailing list.
371
372 .. _`celery-users`: http://groups.google.com/group/celery-users/
373
374 .. _irc-channel:
375
376 IRC
377 ---
378
379 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
380 network.
381
382 .. _`Freenode`: http://freenode.net
383
384 .. _bug-tracker:
385
386 Bug tracker
387 ===========
388
389 If you have any suggestions, bug reports or annoyances, please report them
390 to our issue tracker at http://github.com/celery/celery/issues/
391
392 .. _wiki:
393
394 Wiki
395 ====
396
397 http://wiki.github.com/celery/celery/
398
399
400 .. _maintainers:
401
402 Maintainers
403 ===========
404
405 - `@ask`_ (primary maintainer)
406 - `@thedrow`_
407 - `@chrisgogreen`_
408 - `@PMickael`_
409 - `@malinoff`_
410 - And you? We really need more: https://github.com/celery/celery/issues/2534
411
412 .. _`@ask`: http://github.com/ask
413 .. _`@thedrow`: http://github.com/thedrow
414 .. _`@chrisgogreen`: http://github.com/chrisgogreen
415 .. _`@PMickael`: http://github.com/PMickael
416 .. _`@malinoff`: http://github.com/malinoff
417
418
419 .. _contributing-short:
420
421 Contributing
422 ============
423
424 Development of `celery` happens at Github: http://github.com/celery/celery
425
426 You are highly encouraged to participate in the development
427 of `celery`. If you don't like Github (for some reason) you're welcome
428 to send regular patches.
429
430 Be sure to also read the `Contributing to Celery`_ section in the
431 documentation.
432
433 .. _`Contributing to Celery`:
434 http://docs.celeryproject.org/en/master/contributing.html
435
436 .. _license:
437
438 License
439 =======
440
441 This software is licensed under the `New BSD License`. See the ``LICENSE``
442 file in the top distribution directory for the full license text.
443
444 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
445
446
447 .. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
448 :alt: Bitdeli badge
449 :target: https://bitdeli.com/free
450
451 .. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
452 :target: https://travis-ci.org/celery/celery
453 .. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
454 :target: https://coveralls.io/r/celery/celery
455
[end of README.rst]
[start of celery/app/defaults.py]
...
118 'EAGER_PROPAGATES_EXCEPTIONS': Option(False, type='bool'),
119 'ENABLE_UTC': Option(True, type='bool'),
120 'ENABLE_REMOTE_CONTROL': Option(True, type='bool'),
121 'EVENT_SERIALIZER': Option('json'),
122 'EVENT_QUEUE_EXPIRES': Option(60.0, type='float'),
123 'EVENT_QUEUE_TTL': Option(5.0, type='float'),
124 'IMPORTS': Option((), type='tuple'),
125 'INCLUDE': Option((), type='tuple'),
126 'IGNORE_RESULT': Option(False, type='bool'),
127 'MAX_CACHED_RESULTS': Option(100, type='int'),
128 'MESSAGE_COMPRESSION': Option(type='string'),
129 'MONGODB_BACKEND_SETTINGS': Option(type='dict'),
130 'REDIS_HOST': Option(type='string', **_REDIS_OLD),
131 'REDIS_PORT': Option(type='int', **_REDIS_OLD),
132 'REDIS_DB': Option(type='int', **_REDIS_OLD),
133 'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
134 'REDIS_MAX_CONNECTIONS': Option(type='int'),
135 'RESULT_BACKEND': Option(type='string'),
136 'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
137 'RESULT_DB_TABLENAMES': Option(type='dict'),
138 'RESULT_DBURI': Option(),
...
[end of celery/app/defaults.py]
[start of celery/app/task.py]
...
206 #:
207 #: The application default can be overridden using the
208 #: :setting:`CELERY_TRACK_STARTED` setting.
209 track_started = None
210
211 #: When enabled messages for this task will be acknowledged **after**
212 #: the task has been executed, and not *just before* which is the
213 #: default behavior.
214 #:
215 #: Please note that this means the task may be executed twice if the
216 #: worker crashes mid execution (which may be acceptable for some
217 #: applications).
218 #:
219 #: The application default can be overridden with the
220 #: :setting:`CELERY_ACKS_LATE` setting.
221 acks_late = None
222
223 #: Tuple of expected exceptions.
224 #:
225 #: These are errors that are expected in normal operation
226 #: and that should not be regarded as a real error by the worker.
...
...
234 #: Task request stack, the current request will be the topmost.
235 request_stack = None
236
237 #: Some may expect a request to exist even if the task has not been
238 #: called. This should probably be deprecated.
239 _default_request = None
240
241 _exec_options = None
242
243 __bound__ = False
244
245 from_config = (
246 ('send_error_emails', 'CELERY_SEND_TASK_ERROR_EMAILS'),
247 ('serializer', 'CELERY_TASK_SERIALIZER'),
248 ('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
249 ('track_started', 'CELERY_TRACK_STARTED'),
250 ('acks_late', 'CELERY_ACKS_LATE'),
251 ('ignore_result', 'CELERY_IGNORE_RESULT'),
252 ('store_errors_even_if_ignored',
253 'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
254 )
...
[end of celery/app/task.py]
[start of celery/worker/request.py]
...
312 if self.task.acks_late:
313 self.acknowledge()
314
315 self.send_event('task-succeeded', result=retval, runtime=runtime)
316
317 def on_retry(self, exc_info):
318 """Handler called if the task should be retried."""
319 if self.task.acks_late:
320 self.acknowledge()
321
322 self.send_event('task-retried',
323 exception=safe_repr(exc_info.exception.exc),
324 traceback=safe_str(exc_info.traceback))
325
326 def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
327 """Handler called if the task raised an exception."""
328 task_ready(self)
329
330 if isinstance(exc_info.exception, MemoryError):
331 raise MemoryError('Process got: %s' % (exc_info.exception,))
332 elif isinstance(exc_info.exception, Reject):
333 return self.reject(requeue=exc_info.exception.requeue)
...
...
338
339 if isinstance(exc, Retry):
340 return self.on_retry(exc_info)
341
342 # These are special cases where the process would not have had
343 # time to write the result.
344 if self.store_errors:
345 if isinstance(exc, Terminated):
346 self._announce_revoked(
347 'terminated', True, string(exc), False)
348 send_failed_event = False # already sent revoked event
349 elif isinstance(exc, WorkerLostError) or not return_ok:
350 self.task.backend.mark_as_failure(
351 self.id, exc, request=self,
352 )
353 # (acks_late) acknowledge after result stored.
354 if self.task.acks_late:
355 self.acknowledge()
356
357 if send_failed_event:
358 self.send_event(
359 'task-failed',
...
[end of celery/worker/request.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| celery/celery | 045b52f1450d6d5cc500e0057a4b498250dc5692 | Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True`, if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit, the `exc_info.internal` comes in as `false`, which means it is not an internal error, and so the message is acknowledged.
The desirable behaviour in such a case would be to not acknowledge the message (and to be able to know whether it's an OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa) where celery acknowledges the message, because in that case the message would be lost.
| This is deliberate as if a task is killed it may mean that the next invocation will also cause the same to happen. If the task is redelivered it may cause a loop where the same conditions occur again and again. Also, sadly you cannot distinguish processes killed by OOM from processes killed by other means, and if an administrator kills -9 a task going amok, you usually don't want that task to be called again.
There could be a configuration option for not acking terminated tasks, but I'm not sure how useful that would be.
A better solution could be to use `basic_reject(requeue=False)` instead of `basic_ack`, that way you can configure
a dead letter queue so that the killed tasks will be sent to a queue for manual inspection.
I must say, regardless of the status of this feature request, the documentation is misleading. Specifically, [this FAQ makes it seem that process failures would NOT acknowledge messages](http://celery.readthedocs.org/en/latest/faq.html#faq-acks-late-vs-retry). And [this FAQ boldface states](http://celery.readthedocs.org/en/latest/faq.html#id54) that in the event of a kill signal (9), that acks_late will allow the task to re-run (which again, is patently wrong based on this poorly documented behavior). Nowhere in the docs have I found that if the process _dies_, the message will be acknowledged, regardless of acks_late or not. (for instance, I have a set of 10k+ tasks, and some 1% of tasks wind up acknowledged but incomplete when a WorkerLostError is thrown in connection with the worker, although there are no other errors of any kind in any of my logs related to that task).
TL;DR at the least, appropriately document the current state when describing the functionality and limitations of acks_late. A work-around would be helpful -- I'm not sure I understand the solution of using `basic_reject`, although I'll keep looking into it.
The docs are referring to killing the worker process with KILL, not the child processes. The term worker will always refer to the worker instance, not the pool processes. The section within about acks_late is probably not very helpful and should be removed
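The condition the recorded fix settles on — reject and requeue only when the task opts in via `reject_on_worker_lost`, the failure is a `WorkerLostError`, and the message has not already been redelivered (avoiding the requeue loop the maintainer warns about above) — can be sketched as standalone Python. The `WorkerLostError` stub and the function name below are illustrative, not the actual celery code path:

```python
# Standalone sketch of the ack-vs-reject decision; WorkerLostError is
# stubbed here so the example runs without celery/billiard installed.
class WorkerLostError(Exception):
    pass


def should_reject_and_requeue(reject_on_worker_lost, exc, delivery_info):
    """Return True when an acks_late message should be rejected and
    requeued instead of acknowledged after the worker process is lost."""
    return (reject_on_worker_lost
            and isinstance(exc, WorkerLostError)
            and delivery_info.get('redelivered', False) is False)


# A first delivery lost to e.g. the OOM killer is requeued...
print(should_reject_and_requeue(True, WorkerLostError(), {'redelivered': False}))  # True
# ...but a redelivered one is acknowledged, so the same task cannot loop forever.
print(should_reject_and_requeue(True, WorkerLostError(), {'redelivered': True}))   # False
# With the option off, behaviour is unchanged: always acknowledge on failure.
print(should_reject_and_requeue(False, WorkerLostError(), {'redelivered': False})) # False
```

With the option off, or once a message has been redelivered, behaviour falls back to the original acknowledge-on-failure path.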
| 2015-10-06T05:34:34Z | <patch>
diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -132,6 +132,7 @@ def __repr__(self):
'REDIS_DB': Option(type='int', **_REDIS_OLD),
'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
'REDIS_MAX_CONNECTIONS': Option(type='int'),
+ 'REJECT_ON_WORKER_LOST': Option(type='bool'),
'RESULT_BACKEND': Option(type='string'),
'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
'RESULT_DB_TABLENAMES': Option(type='dict'),
diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -220,6 +220,12 @@ class Task(object):
#: :setting:`CELERY_ACKS_LATE` setting.
acks_late = None
+ #: When CELERY_ACKS_LATE is set to True, the default behavior to
+ #: handle worker crash is to acknowledge the message. Setting
+ #: this to true allows the message to be rejected and requeued so
+ #: it will be executed again by another worker.
+ reject_on_worker_lost = None
+
#: Tuple of expected exceptions.
#:
#: These are errors that are expected in normal operation
@@ -248,6 +254,7 @@ class Task(object):
('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
('track_started', 'CELERY_TRACK_STARTED'),
('acks_late', 'CELERY_ACKS_LATE'),
+ ('reject_on_worker_lost', 'CELERY_REJECT_ON_WORKER_LOST'),
('ignore_result', 'CELERY_IGNORE_RESULT'),
('store_errors_even_if_ignored',
'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -326,7 +326,6 @@ def on_retry(self, exc_info):
def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
"""Handler called if the task raised an exception."""
task_ready(self)
-
if isinstance(exc_info.exception, MemoryError):
raise MemoryError('Process got: %s' % (exc_info.exception,))
elif isinstance(exc_info.exception, Reject):
@@ -352,7 +351,13 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
)
# (acks_late) acknowledge after result stored.
if self.task.acks_late:
- self.acknowledge()
+ reject_and_requeue = (self.task.reject_on_worker_lost and
+ isinstance(exc, WorkerLostError) and
+ self.delivery_info.get('redelivered', False) is False)
+ if reject_and_requeue:
+ self.reject(requeue=True)
+ else:
+ self.acknowledge()
if send_failed_event:
self.send_event(
</patch> | diff --git a/celery/tests/worker/test_request.py b/celery/tests/worker/test_request.py
--- a/celery/tests/worker/test_request.py
+++ b/celery/tests/worker/test_request.py
@@ -325,6 +325,20 @@ def test_on_failure_Reject_rejects_with_requeue(self):
req_logger, req.connection_errors, True,
)
+ def test_on_failure_WrokerLostError_rejects_with_requeue(self):
+ einfo = None
+ try:
+ raise WorkerLostError()
+ except:
+ einfo = ExceptionInfo(internal=True)
+ req = self.get_request(self.add.s(2, 2))
+ req.task.acks_late = True
+ req.task.reject_on_worker_lost = True
+ req.delivery_info['redelivered'] = False
+ req.on_failure(einfo)
+ req.on_reject.assert_called_with(req_logger,
+ req.connection_errors, True)
+
def test_tzlocal_is_cached(self):
req = self.get_request(self.add.s(2, 2))
req._tzlocal = 'foo'
| 1.0 | |||
NVIDIA__NeMo-473 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the `input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
</issue>
<code>
[start of README.rst]
1 .. image:: http://www.repostatus.org/badges/latest/active.svg
2 :target: http://www.repostatus.org/#active
3 :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
4
5 .. image:: https://img.shields.io/badge/documentation-github.io-blue.svg
6 :target: https://nvidia.github.io/NeMo/
7 :alt: NeMo documentation on GitHub pages
8
9 .. image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
10 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
11 :alt: NeMo core license and license for collections in this repo
12
13 .. image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
14 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
15 :alt: Language grade: Python
16
17 .. image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
18 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
19 :alt: Total alerts
20
21 .. image:: https://img.shields.io/badge/code%20style-black-000000.svg
22 :target: https://github.com/psf/black
23 :alt: Code style: black
24
25
26
27 NVIDIA Neural Modules: NeMo
28 ===========================
29
30 NeMo is a toolkit for defining and building `Conversational AI <https://developer.nvidia.com/conversational-ai#started>`_ applications.
31
32 Goal of the NeMo toolkit is to make it possible for researchers to easily compose complex neural network architectures for conversational AI using reusable components. Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
33
34 **Neural Modules** are conceptual blocks of neural networks that take *typed* inputs and produce *typed* outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
35
36 The toolkit comes with extendable collections of pre-built modules for automatic speech recognition (ASR), natural language processing (NLP) and text synthesis (TTS).
37
38 **Introduction**
39
40 * Watch `this video <https://nvidia.github.io/NeMo/>`_ for a quick walk-through.
41
42 * Documentation (latest released version): https://nvidia.github.io/NeMo/
43
44 * Read NVIDIA `Developer Blog for example applications <https://devblogs.nvidia.com/how-to-build-domain-specific-automatic-speech-recognition-models-on-gpus/>`_
45
46 * Read NVIDIA `Developer Blog for Quartznet ASR model <https://devblogs.nvidia.com/develop-smaller-speech-recognition-models-with-nvidias-nemo-framework/>`_
47
48 * Recommended version to install is **0.9.0** via ``pip install nemo-toolkit``
49
50 * Recommended NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_
51
52 * Pretrained models are available on NVIDIA `NGC Model repository <https://ngc.nvidia.com/catalog/models?orderBy=modifiedDESC&query=nemo&quickFilter=models&filters=>`_
53
54
55 Getting started
56 ~~~~~~~~~~~~~~~
57
58 THE LATEST STABLE VERSION OF NeMo is **0.9.0** (Available via PIP).
59
60 **Requirements**
61
62 1) Python 3.6 or 3.7
63 2) PyTorch 1.4.* with GPU support
64 3) (optional, for best performance) NVIDIA APEX. Install from here: https://github.com/NVIDIA/apex
65
66 **NeMo Docker Container**
67 NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_ is now available.
68
69 * Pull the docker: ``docker pull nvcr.io/nvidia/nemo:v0.9``
70 * Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo:v0.9``
71
72 If you are using the NVIDIA `NGC PyTorch container <https://ngc.nvidia.com/catalog/containers/nvidia:pytorch>`_ follow these instructions
73
74 * Pull the docker: ``docker pull nvcr.io/nvidia/pytorch:20.01-py3``
75 * Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:20.01-py3``
76 * ``apt-get update && apt-get install -y libsndfile1``
77 * ``pip install nemo_toolkit`` NeMo core
78 * ``pip install nemo_asr`` NeMo ASR (Speech Recognition) collection
79 * ``pip install nemo_nlp`` NeMo NLP (Natural Language Processing) collection
80 * ``pip install nemo_tts`` NeMo TTS (Speech Synthesis) collection
81
82 See `examples/start_here` to get started with the simplest example. The folder `examples` contains several examples to get you started with various tasks in NLP and ASR.
83
84 **Tutorials**
85
86 * `Speech recognition <https://nvidia.github.io/NeMo/asr/intro.html>`_
87 * `Natural language processing <https://nvidia.github.io/NeMo/nlp/intro.html>`_
88 * `Speech Synthesis <https://nvidia.github.io/NeMo/tts/intro.html>`_
89
90
91 DEVELOPMENT
92 ~~~~~~~~~~~
93 If you'd like to use master branch and/or develop NeMo you can run "reinstall.sh" script.
94
95 `Documentation (master branch) <http://nemo-master-docs.s3-website.us-east-2.amazonaws.com/>`_.
96
97 **Installing From Github**
98
99 If you prefer to use NeMo's latest development version (from GitHub) follow the steps below:
100
101 1) Clone the repository ``git clone https://github.com/NVIDIA/NeMo.git``
102 2) Go to NeMo folder and re-install the toolkit with collections:
103
104 .. code-block:: bash
105
106 ./reinstall.sh
107
108 **Style tests**
109
110 .. code-block:: bash
111
112 python setup.py style # Checks overall project code style and output issues with diff.
113 python setup.py style --fix # Tries to fix error in-place.
114 python setup.py style --scope=tests # Operates within certain scope (dir of file).
115
116 **Unittests**
117
118 This command runs unittests:
119
120 .. code-block:: bash
121
122 ./reinstall.sh
123 python pytest tests
124
125
126 Citation
127 ~~~~~~~~
128
129 If you are using NeMo please cite the following publication
130
131 .. code-block:: tex
132
133 @misc{nemo2019,
134 title={NeMo: a toolkit for building AI applications using Neural Modules},
135 author={Oleksii Kuchaiev and Jason Li and Huyen Nguyen and Oleksii Hrinchuk and Ryan Leary and Boris Ginsburg and Samuel Kriman and Stanislav Beliaev and Vitaly Lavrukhin and Jack Cook and Patrice Castonguay and Mariya Popova and Jocelyn Huang and Jonathan M. Cohen},
136 year={2019},
137 eprint={1909.09577},
138 archivePrefix={arXiv},
139 primaryClass={cs.LG}
140 }
141
142
[end of README.rst]
[start of nemo/core/neural_modules.py]
...
379 def input_ports(self) -> Optional[Dict[str, NeuralType]]:
380 """Returns definitions of module input ports
381
382 Returns:
383 A (dict) of module's input ports names to NeuralTypes mapping
384 """
385
386 @property
387 @abstractmethod
388 def output_ports(self) -> Optional[Dict[str, NeuralType]]:
389 """Returns definitions of module output ports
390
391 Returns:
392 A (dict) of module's output ports names to NeuralTypes mapping
393 """
394
395 @property
396 def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
397 """Returns names of input ports that will not be included in an export
398
399 Returns:
400 A (set) of module's input port names that are not exportable
...
...
388 def output_ports(self) -> Optional[Dict[str, NeuralType]]:
389 """Returns definitions of module output ports
390
391 Returns:
392 A (dict) of module's output ports names to NeuralTypes mapping
393 """
394
395 @property
396 def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
397 """Returns names of input ports that will not be included in an export
398
399 Returns:
400 A (set) of module's input port names that are not exportable
401 """
402 return set([])
403
404 @property
405 def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
406 """Returns names of output ports that will not be included in an export
407
408 Returns:
409 A (set) of module's output port names that are not exportable
...
...
396 def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
397 """Returns names of input ports that will not be included in an export
398
399 Returns:
400 A (set) of module's input port names that are not exportable
401 """
402 return set([])
403
404 @property
405 def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
406 """Returns names of output ports that will not be included in an export
407
408 Returns:
409 A (set) of module's output port names that are not exportable
410 """
411 return set([])
412
413 def prepare_for_deployment(self) -> None:
414 """Patch the module if required to prepare for deployment
415
416 """
417 return
...
[end of nemo/core/neural_modules.py]
[start of nemo/backends/pytorch/actions.py]
...
923 @staticmethod
924 def __module_export(module, output, d_format: DeploymentFormat, input_example=None, output_example=None):
925 # Check if output already exists
926 destination = Path(output)
927 if destination.exists():
928 raise FileExistsError(f"Destination {output} already exists. " f"Aborting export.")
929
930 input_names = list(module.input_ports.keys())
931 output_names = list(module.output_ports.keys())
932 dynamic_axes = defaultdict(list)
933
934 def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defaultdict):
935 if ntype.axes:
936 for ind, axis in enumerate(ntype.axes):
937 if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
938 dynamic_axes[port_name].append(ind)
939
940 # This is a hack for Jasper to Jarvis export -- need re-design for this
941 inputs_to_drop = set()
942 outputs_to_drop = set()
943 if type(module).__name__ == "JasperEncoder":
944 logging.info(
945 "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
946 "deployment"
947 )
948 inputs_to_drop.add("length")
949 outputs_to_drop.add("encoded_lengths")
950
951 # for input_ports
952 for port_name, ntype in module.input_ports.items():
953 if port_name in inputs_to_drop:
954 input_names.remove(port_name)
955 continue
956 __extract_dynamic_axes(port_name, ntype, dynamic_axes)
957 # for output_ports
958 for port_name, ntype in module.output_ports.items():
959 if port_name in outputs_to_drop:
960 output_names.remove(port_name)
961 continue
962 __extract_dynamic_axes(port_name, ntype, dynamic_axes)
963
...
[end of nemo/backends/pytorch/actions.py]
[start of nemo/collections/asr/jasper.py]
...
104 }
105
106 @property
107 @add_port_docs()
108 def output_ports(self):
109 """Returns definitions of module output ports.
110 """
111 return {
112 # "outputs": NeuralType(
113 # {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag),}
114 # ),
115 # "encoded_lengths": NeuralType({0: AxisType(BatchTag)}),
116 "outputs": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
117 "encoded_lengths": NeuralType(tuple('B'), LengthsType()),
118 }
119
120 @property
121 def disabled_deployment_input_ports(self):
122 return set(["length"])
123
124 @property
125 def disabled_deployment_output_ports(self):
126 return set(["encoded_lengths"])
127
128 def prepare_for_deployment(self):
129 m_count = 0
130 for m in self.modules():
131 if type(m).__name__ == "MaskedConv1d":
132 m.use_mask = False
...
[end of nemo/collections/asr/jasper.py]
[start of nemo/core/neural_factory.py]
...
596 raise TypeError(f"All callbacks passed to the eval action must" f"be inherited from EvaluatorCallback")
597 self.train(
598 tensors_to_optimize=None, optimizer='sgd', callbacks=callbacks, optimization_params={'num_epochs': 1},
599 )
600
601 def deployment_export(
602 self, module, output: str, d_format: DeploymentFormat, input_example=None, output_example=None
603 ):
604 """Exports Neural Module instance for deployment.
605
606 Args:
607 module: neural module to export
608 output (str): where export results should be saved
609 d_format (DeploymentFormat): which deployment format to use
610 input_example: sometimes tracing will require input examples
611 output_example: Should match inference on input_example
612 """
613 module.prepare_for_deployment()
614
615 return self._trainer.deployment_export(
616 module=module,
617 output=output,
...
[end of nemo/core/neural_factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| NVIDIA/NeMo | ba4616f1f011d599de87f0cb3315605e715d402a | Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the` input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
| 2020-03-10T03:03:23Z | <patch>
<patch>
diff --git a/nemo/backends/pytorch/actions.py b/nemo/backends/pytorch/actions.py
--- a/nemo/backends/pytorch/actions.py
+++ b/nemo/backends/pytorch/actions.py
@@ -937,26 +937,16 @@ def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defa
if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
dynamic_axes[port_name].append(ind)
- # This is a hack for Jasper to Jarvis export -- need re-design for this
- inputs_to_drop = set()
- outputs_to_drop = set()
- if type(module).__name__ == "JasperEncoder":
- logging.info(
- "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
- "deployment"
- )
- inputs_to_drop.add("length")
- outputs_to_drop.add("encoded_lengths")
-
+ # extract dynamic axes and remove unnecessary inputs/outputs
# for input_ports
for port_name, ntype in module.input_ports.items():
- if port_name in inputs_to_drop:
+ if port_name in module._disabled_deployment_input_ports:
input_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
# for output_ports
for port_name, ntype in module.output_ports.items():
- if port_name in outputs_to_drop:
+ if port_name in module._disabled_deployment_output_ports:
output_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
diff --git a/nemo/collections/asr/jasper.py b/nemo/collections/asr/jasper.py
--- a/nemo/collections/asr/jasper.py
+++ b/nemo/collections/asr/jasper.py
@@ -118,14 +118,14 @@ def output_ports(self):
}
@property
- def disabled_deployment_input_ports(self):
+ def _disabled_deployment_input_ports(self):
return set(["length"])
@property
- def disabled_deployment_output_ports(self):
+ def _disabled_deployment_output_ports(self):
return set(["encoded_lengths"])
- def prepare_for_deployment(self):
+ def _prepare_for_deployment(self):
m_count = 0
for m in self.modules():
if type(m).__name__ == "MaskedConv1d":
diff --git a/nemo/core/neural_factory.py b/nemo/core/neural_factory.py
--- a/nemo/core/neural_factory.py
+++ b/nemo/core/neural_factory.py
@@ -610,7 +610,7 @@ def deployment_export(
input_example: sometimes tracing will require input examples
output_example: Should match inference on input_example
"""
- module.prepare_for_deployment()
+ module._prepare_for_deployment()
return self._trainer.deployment_export(
module=module,
diff --git a/nemo/core/neural_modules.py b/nemo/core/neural_modules.py
--- a/nemo/core/neural_modules.py
+++ b/nemo/core/neural_modules.py
@@ -393,7 +393,7 @@ def output_ports(self) -> Optional[Dict[str, NeuralType]]:
"""
@property
- def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_input_ports(self) -> Optional[Set[str]]:
"""Returns names of input ports that will not be included in an export
Returns:
@@ -402,7 +402,7 @@ def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
return set([])
@property
- def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""Returns names of output ports that will not be included in an export
Returns:
@@ -410,7 +410,7 @@ def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""
return set([])
- def prepare_for_deployment(self) -> None:
+ def _prepare_for_deployment(self) -> None:
"""Patch the module if required to prepare for deployment
"""
</patch>
</s>
</patch> | diff --git a/tests/unit/core/test_deploy_export.py b/tests/unit/core/test_deploy_export.py
--- a/tests/unit/core/test_deploy_export.py
+++ b/tests/unit/core/test_deploy_export.py
@@ -46,9 +46,11 @@
import nemo.collections.nlp.nm.trainables.common.token_classification_nm
from nemo import logging
+TRT_ONNX_DISABLED = False
+
# Check if the required libraries and runtimes are installed.
+# Only initialize GPU after this runner is activated.
try:
- # Only initialize GPU after this runner is activated.
import pycuda.autoinit
# This import causes pycuda to automatically manage CUDA context creation and cleanup.
@@ -63,16 +65,17 @@
)
from .tensorrt_runner import TensorRTRunnerV2
except:
- # Skip tests.
- pytestmark = pytest.mark.skip
+ TRT_ONNX_DISABLED = True
@pytest.mark.usefixtures("neural_factory")
class TestDeployExport(TestCase):
- def setUp(self):
- logging.setLevel(logging.WARNING)
- device = nemo.core.DeviceType.GPU
- self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
+ # def setUp(self):
+ # super().setUp()
+
+ # logging.setLevel(logging.WARNING)
+ # device = nemo.core.DeviceType.GPU
+ # self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
def __test_export_route(self, module, out_name, mode, input_example=None):
out = Path(out_name)
@@ -112,7 +115,13 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
loader_cache = DataLoaderCache(data_loader)
profile_shapes = OrderedDict()
names = list(module.input_ports) + list(module.output_ports)
-
+ names = list(
+ filter(
+ lambda x: x
+ not in (module._disabled_deployment_input_ports | module._disabled_deployment_output_ports),
+ names,
+ )
+ )
if isinstance(input_example, tuple):
si = [tuple(input_example[i].shape) for i in range(len(input_example))]
elif isinstance(input_example, OrderedDict):
@@ -152,7 +161,7 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
input_names = list(input_metadata.keys())
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
+ if input_name in module._disabled_deployment_input_ports:
continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
@@ -209,8 +218,8 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
ort_inputs = ort_session.get_inputs()
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
- input_name = ort_inputs[i].name
+ if input_name in module._disabled_deployment_input_ports:
+ continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
if isinstance(input_example, OrderedDict)
@@ -263,9 +272,10 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
def __test_export_route_all(self, module, out_name, input_example=None):
if input_example is not None:
- self.__test_export_route(
- module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
- )
+ if not TRT_ONNX_DISABLED:
+ self.__test_export_route(
+ module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
+ )
self.__test_export_route(module, out_name + '.onnx', nemo.core.DeploymentFormat.ONNX, input_example)
self.__test_export_route(module, out_name + '.pt', nemo.core.DeploymentFormat.PYTORCH, input_example)
self.__test_export_route(module, out_name + '.ts', nemo.core.DeploymentFormat.TORCHSCRIPT, input_example)
@@ -323,9 +333,7 @@ def test_jasper_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="jasper_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randn(256).cuda()),
+ module=jasper_encoder, out_name="jasper_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
@pytest.mark.unit
@@ -343,7 +351,5 @@ def test_quartz_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="quartz_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randint(20, (16,)).cuda()),
+ module=jasper_encoder, out_name="quartz_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
| 1.0 | ||||
NVIDIA__NeMo-3632 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | NVIDIA/NeMo | 022f0292aecbc98d591d49423d5045235394f793 | "./reinstall.sh crashes due to not being able to uninstall llvmlite\nStarting off of `nemo:1.5.1` co(...TRUNCATED) | 2022-02-09T05:12:31Z | "<patch>\n<patch>\ndiff --git a/nemo_text_processing/text_normalization/__init__.py b/nemo_text_proc(...TRUNCATED) | "diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt b/tests/(...TRUNCATED) | 1.0 | ||||
NVIDIA__NeMo-7582 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | NVIDIA/NeMo | 8a892b86186dbdf61803d75570cb5c58471e9dda | "Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for fiel(...TRUNCATED) | "Seems to be a similar to #7002\nInteresting. The fix is easy but needs to be applied to basically e(...TRUNCATED) | 2023-09-30T01:26:50Z | "<patch>\n<patch>\ndiff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr(...TRUNCATED) | "diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_t(...TRUNCATED) | 1.0 | |||
NVIDIA__NeMo-7616 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | NVIDIA/NeMo | 15db83ec4a65e649d83b61d7a4a58d911586e853 | "Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for fiel(...TRUNCATED) | "Seems to be a similar to #7002\nInteresting. The fix is easy but needs to be applied to basically e(...TRUNCATED) | 2023-10-03T19:14:38Z | "<patch>\n<patch>\ndiff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr(...TRUNCATED) | "diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_t(...TRUNCATED) | 1.0 | |||
slackapi__python-slack-events-api-71 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | slackapi/python-slack-events-api | 0c0ce604b502508622fb14c278a0d64841fa32e3 | "Passing Flask app proxy as server\nHi Guys,\r\n\r\nI have an app factory on my setup and the app ob(...TRUNCATED) | 2020-06-12T06:58:10Z | "<patch>\n<patch>\ndiff --git a/example/current_app/main.py b/example/current_app/main.py\nnew file (...TRUNCATED) | "diff --git a/example/current_app/test_module/__init__.py b/example/current_app/test_module/__init__(...TRUNCATED) | 1.0 | ||||
celery__celery-2598 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | celery/celery | 6592ff64b6b024a4b68abcc53b151888fdf0dee3 | "CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling\nSetting `CELERY_RESULT_SERIALIZER = (...TRUNCATED) | This is biting me as well. Any news?
| 2015-04-29T14:52:17Z | "<patch>\n<patch>\ndiff --git a/celery/backends/amqp.py b/celery/backends/amqp.py\n--- a/celery/back(...TRUNCATED) | "diff --git a/celery/tests/backends/test_amqp.py b/celery/tests/backends/test_amqp.py\n--- a/celery/(...TRUNCATED) | 1.0 | |||
celery__celery-2840 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | celery/celery | 045b52f1450d6d5cc500e0057a4b498250dc5692 | "Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True\nWhen using celery v3.0.24(...TRUNCATED) | "This is deliberate as if a task is killed it may mean that the next invocation will also cause the (...TRUNCATED) | 2015-10-06T05:34:34Z | "<patch>\n<patch>\ndiff --git a/celery/app/defaults.py b/celery/app/defaults.py\n--- a/celery/app/de(...TRUNCATED) | "diff --git a/celery/tests/worker/test_request.py b/celery/tests/worker/test_request.py\n--- a/celer(...TRUNCATED) | 1.0 | |||
NVIDIA__NeMo-473 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | NVIDIA/NeMo | ba4616f1f011d599de87f0cb3315605e715d402a | "Jasper Encoder Export failed\nThe export of Jasper Encoder is failing. I am using the core API [dep(...TRUNCATED) | 2020-03-10T03:03:23Z | "<patch>\n<patch>\ndiff --git a/nemo/backends/pytorch/actions.py b/nemo/backends/pytorch/actions.py\(...TRUNCATED) | "diff --git a/tests/unit/core/test_deploy_export.py b/tests/unit/core/test_deploy_export.py\n--- a/t(...TRUNCATED) | 1.0 | ||||
NVIDIA__NeMo-3632 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | NVIDIA/NeMo | 022f0292aecbc98d591d49423d5045235394f793 | "./reinstall.sh crashes due to not being able to uninstall llvmlite\nStarting off of `nemo:1.5.1` co(...TRUNCATED) | 2022-02-09T05:12:31Z | "<patch>\n<patch>\ndiff --git a/nemo_text_processing/text_normalization/__init__.py b/nemo_text_proc(...TRUNCATED) | "diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt b/tests/(...TRUNCATED) | 1.0 |
End of preview.

Downloads last month: 0