instance_id
stringlengths 13
37
| text
stringlengths 2.59k
1.94M
| repo
stringclasses 35
values | base_commit
stringlengths 40
40
| problem_statement
stringlengths 10
256k
| hints_text
stringlengths 0
908k
| created_at
stringlengths 20
20
| patch
stringlengths 18
101M
| test_patch
stringclasses 1
value | version
stringclasses 1
value | FAIL_TO_PASS
stringclasses 1
value | PASS_TO_PASS
stringclasses 1
value | environment_setup_commit
stringclasses 1
value |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Qiskit__qiskit-4465 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`initialize` and `Statevector` don't play nicely
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit Aer version**: 0.5.1
- **Python version**: 3.7.3
- **Operating system**: OSX
### What is the current behavior?
Using `initialize` in a circuit and then running with `Statevector` results in the error "Cannot apply Instruction: reset"
### Steps to reproduce the problem
```
import qiskit as qk
import qiskit.quantum_info as qi
from numpy import sqrt
n = 2
ket0 = [1/sqrt(2),0,0,1/sqrt(2)]
qc = qk.QuantumCircuit(n)
qc.initialize(ket0,range(n))
ket_qi = qi.Statevector.from_instruction(qc)
```
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [![License](https://img.shields.io/github/license/Qiskit/qiskit-terra.svg?style=popout-square)](https://opensource.org/licenses/Apache-2.0)[![Build Status](https://img.shields.io/travis/com/Qiskit/qiskit-terra/master.svg?style=popout-square)](https://travis-ci.com/Qiskit/qiskit-terra)[![](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)[![](https://img.shields.io/pypi/dm/qiskit-terra.svg?style=popout-square)](https://pypi.org/project/qiskit-terra/)[![Coverage Status](https://coveralls.io/repos/github/Qiskit/qiskit-terra/badge.svg?branch=master)](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = backend_sim.run(assemble(qc)).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_account()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/zt-e4sscbg2-p8NHTezPVkC3r8nV6BIUVw)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## Changelog and Release Notes
113
114 The changelog for a particular release is dynamically generated and gets
115 written to the release page on Github for each release. For example, you can
116 find the page for the `0.9.0` release here:
117
118 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
119
120 The changelog for the current release can be found in the releases tab:
121 ![](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)
122 The changelog provides a quick overview of noteable changes for a given
123 release.
124
125 Additionally, as part of each release detailed release notes are written to
126 document in detail what has changed as part of a release. This includes any
127 documentation on potential breaking changes on upgrade and new features.
128 For example, You can find the release notes for the `0.9.0` release in the
129 Qiskit documentation here:
130
131 https://qiskit.org/documentation/release_notes.html#terra-0-9
132
133 ## License
134
135 [Apache License 2.0](LICENSE.txt)
136
[end of README.md]
[start of qiskit/quantum_info/states/densitymatrix.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 DensityMatrix quantum state class.
17 """
18
19 import warnings
20 from numbers import Number
21 import numpy as np
22
23 from qiskit.circuit.quantumcircuit import QuantumCircuit
24 from qiskit.circuit.instruction import Instruction
25 from qiskit.exceptions import QiskitError
26 from qiskit.quantum_info.states.quantum_state import QuantumState
27 from qiskit.quantum_info.operators.operator import Operator
28 from qiskit.quantum_info.operators.scalar_op import ScalarOp
29 from qiskit.quantum_info.operators.predicates import is_hermitian_matrix
30 from qiskit.quantum_info.operators.predicates import is_positive_semidefinite_matrix
31 from qiskit.quantum_info.operators.channel.quantum_channel import QuantumChannel
32 from qiskit.quantum_info.operators.channel.superop import SuperOp
33 from qiskit.quantum_info.states.statevector import Statevector
34
35
36 class DensityMatrix(QuantumState):
37 """DensityMatrix class"""
38
39 def __init__(self, data, dims=None):
40 """Initialize a density matrix object.
41
42 Args:
43 data (matrix_like or vector_like): a density matrix or
44 statevector. If a vector the density matrix is constructed
45 as the projector of that vector.
46 dims (int or tuple or list): Optional. The subsystem dimension
47 of the state (See additional information).
48
49 Raises:
50 QiskitError: if input data is not valid.
51
52 Additional Information:
53 The ``dims`` kwarg can be None, an integer, or an iterable of
54 integers.
55
56 * ``Iterable`` -- the subsystem dimensions are the values in the list
57 with the total number of subsystems given by the length of the list.
58
59 * ``Int`` or ``None`` -- the leading dimension of the input matrix
60 specifies the total dimension of the density matrix. If it is a
61 power of two the state will be initialized as an N-qubit state.
62 If it is not a power of two the state will have a single
63 d-dimensional subsystem.
64 """
65 if isinstance(data, (list, np.ndarray)):
66 # Finally we check if the input is a raw matrix in either a
67 # python list or numpy array format.
68 self._data = np.asarray(data, dtype=complex)
69 elif hasattr(data, 'to_operator'):
70 # If the data object has a 'to_operator' attribute this is given
71 # higher preference than the 'to_matrix' method for initializing
72 # an Operator object.
73 op = data.to_operator()
74 self._data = op.data
75 if dims is None:
76 dims = op._output_dims
77 elif hasattr(data, 'to_matrix'):
78 # If no 'to_operator' attribute exists we next look for a
79 # 'to_matrix' attribute to a matrix that will be cast into
80 # a complex numpy matrix.
81 self._data = np.asarray(data.to_matrix(), dtype=complex)
82 else:
83 raise QiskitError("Invalid input data format for DensityMatrix")
84 # Convert statevector into a density matrix
85 ndim = self._data.ndim
86 shape = self._data.shape
87 if ndim == 2 and shape[0] == shape[1]:
88 pass # We good
89 elif ndim == 1:
90 self._data = np.outer(self._data, np.conj(self._data))
91 elif ndim == 2 and shape[1] == 1:
92 self._data = np.reshape(self._data, shape[0])
93 shape = self._data.shape
94 else:
95 raise QiskitError(
96 "Invalid DensityMatrix input: not a square matrix.")
97 super().__init__(self._automatic_dims(dims, shape[0]))
98
99 def __eq__(self, other):
100 return super().__eq__(other) and np.allclose(
101 self._data, other._data, rtol=self.rtol, atol=self.atol)
102
103 def __repr__(self):
104 prefix = 'DensityMatrix('
105 pad = len(prefix) * ' '
106 return '{}{},\n{}dims={})'.format(
107 prefix, np.array2string(
108 self._data, separator=', ', prefix=prefix),
109 pad, self._dims)
110
111 @property
112 def data(self):
113 """Return data."""
114 return self._data
115
116 def is_valid(self, atol=None, rtol=None):
117 """Return True if trace 1 and positive semidefinite."""
118 if atol is None:
119 atol = self.atol
120 if rtol is None:
121 rtol = self.rtol
122 # Check trace == 1
123 if not np.allclose(self.trace(), 1, rtol=rtol, atol=atol):
124 return False
125 # Check Hermitian
126 if not is_hermitian_matrix(self.data, rtol=rtol, atol=atol):
127 return False
128 # Check positive semidefinite
129 return is_positive_semidefinite_matrix(self.data, rtol=rtol, atol=atol)
130
131 def to_operator(self):
132 """Convert to Operator"""
133 dims = self.dims()
134 return Operator(self.data, input_dims=dims, output_dims=dims)
135
136 def conjugate(self):
137 """Return the conjugate of the density matrix."""
138 return DensityMatrix(np.conj(self.data), dims=self.dims())
139
140 def trace(self):
141 """Return the trace of the density matrix."""
142 return np.trace(self.data)
143
144 def purity(self):
145 """Return the purity of the quantum state."""
146 # For a valid statevector the purity is always 1, however if we simply
147 # have an arbitrary vector (not correctly normalized) then the
148 # purity is equivalent to the trace squared:
149 # P(|psi>) = Tr[|psi><psi|psi><psi|] = |<psi|psi>|^2
150 return np.trace(np.dot(self.data, self.data))
151
152 def tensor(self, other):
153 """Return the tensor product state self ⊗ other.
154
155 Args:
156 other (DensityMatrix): a quantum state object.
157
158 Returns:
159 DensityMatrix: the tensor product operator self ⊗ other.
160
161 Raises:
162 QiskitError: if other is not a quantum state.
163 """
164 if not isinstance(other, DensityMatrix):
165 other = DensityMatrix(other)
166 dims = other.dims() + self.dims()
167 data = np.kron(self._data, other._data)
168 return DensityMatrix(data, dims)
169
170 def expand(self, other):
171 """Return the tensor product state other ⊗ self.
172
173 Args:
174 other (DensityMatrix): a quantum state object.
175
176 Returns:
177 DensityMatrix: the tensor product state other ⊗ self.
178
179 Raises:
180 QiskitError: if other is not a quantum state.
181 """
182 if not isinstance(other, DensityMatrix):
183 other = DensityMatrix(other)
184 dims = self.dims() + other.dims()
185 data = np.kron(other._data, self._data)
186 return DensityMatrix(data, dims)
187
188 def _add(self, other):
189 """Return the linear combination self + other.
190
191 Args:
192 other (DensityMatrix): a quantum state object.
193
194 Returns:
195 DensityMatrix: the linear combination self + other.
196
197 Raises:
198 QiskitError: if other is not a quantum state, or has
199 incompatible dimensions.
200 """
201 if not isinstance(other, DensityMatrix):
202 other = DensityMatrix(other)
203 if self.dim != other.dim:
204 raise QiskitError("other DensityMatrix has different dimensions.")
205 return DensityMatrix(self.data + other.data, self.dims())
206
207 def _multiply(self, other):
208 """Return the scalar multiplied state other * self.
209
210 Args:
211 other (complex): a complex number.
212
213 Returns:
214 DensityMatrix: the scalar multiplied state other * self.
215
216 Raises:
217 QiskitError: if other is not a valid complex number.
218 """
219 if not isinstance(other, Number):
220 raise QiskitError("other is not a number")
221 return DensityMatrix(other * self.data, self.dims())
222
223 def evolve(self, other, qargs=None):
224 """Evolve a quantum state by an operator.
225
226 Args:
227 other (Operator or QuantumChannel
228 or Instruction or Circuit): The operator to evolve by.
229 qargs (list): a list of QuantumState subsystem positions to apply
230 the operator on.
231
232 Returns:
233 QuantumState: the output quantum state.
234
235 Raises:
236 QiskitError: if the operator dimension does not match the
237 specified QuantumState subsystem dimensions.
238 """
239 if qargs is None:
240 qargs = getattr(other, 'qargs', None)
241
242 # Evolution by a circuit or instruction
243 if isinstance(other, (QuantumCircuit, Instruction)):
244 return self._evolve_instruction(other, qargs=qargs)
245
246 # Evolution by a QuantumChannel
247 if hasattr(other, 'to_quantumchannel'):
248 return other.to_quantumchannel()._evolve(self, qargs=qargs)
249 if isinstance(other, QuantumChannel):
250 return other._evolve(self, qargs=qargs)
251
252 # Unitary evolution by an Operator
253 if not isinstance(other, Operator):
254 other = Operator(other)
255 return self._evolve_operator(other, qargs=qargs)
256
257 def probabilities(self, qargs=None, decimals=None):
258 """Return the subsystem measurement probability vector.
259
260 Measurement probabilities are with respect to measurement in the
261 computation (diagonal) basis.
262
263 Args:
264 qargs (None or list): subsystems to return probabilities for,
265 if None return for all subsystems (Default: None).
266 decimals (None or int): the number of decimal places to round
267 values. If None no rounding is done (Default: None).
268
269 Returns:
270 np.array: The Numpy vector array of probabilities.
271
272 Examples:
273
274 Consider a 2-qubit product state :math:`\\rho=\\rho_1\\otimes\\rho_0`
275 with :math:`\\rho_1=|+\\rangle\\!\\langle+|`,
276 :math:`\\rho_0=|0\\rangle\\!\\langle0|`.
277
278 .. jupyter-execute::
279
280 from qiskit.quantum_info import DensityMatrix
281
282 rho = DensityMatrix.from_label('+0')
283
284 # Probabilities for measuring both qubits
285 probs = rho.probabilities()
286 print('probs: {}'.format(probs))
287
288 # Probabilities for measuring only qubit-0
289 probs_qubit_0 = rho.probabilities([0])
290 print('Qubit-0 probs: {}'.format(probs_qubit_0))
291
292 # Probabilities for measuring only qubit-1
293 probs_qubit_1 = rho.probabilities([1])
294 print('Qubit-1 probs: {}'.format(probs_qubit_1))
295
296 We can also permute the order of qubits in the ``qargs`` list
297 to change the qubit position in the probabilities output
298
299 .. jupyter-execute::
300
301 from qiskit.quantum_info import DensityMatrix
302
303 rho = DensityMatrix.from_label('+0')
304
305 # Probabilities for measuring both qubits
306 probs = rho.probabilities([0, 1])
307 print('probs: {}'.format(probs))
308
309 # Probabilities for measuring both qubits
310 # but swapping qubits 0 and 1 in output
311 probs_swapped = rho.probabilities([1, 0])
312 print('Swapped probs: {}'.format(probs_swapped))
313 """
314 probs = self._subsystem_probabilities(
315 np.abs(self.data.diagonal()), self._dims, qargs=qargs)
316 if decimals is not None:
317 probs = probs.round(decimals=decimals)
318 return probs
319
320 def reset(self, qargs=None):
321 """Reset state or subsystems to the 0-state.
322
323 Args:
324 qargs (list or None): subsystems to reset, if None all
325 subsystems will be reset to their 0-state
326 (Default: None).
327
328 Returns:
329 DensityMatrix: the reset state.
330
331 Additional Information:
332 If all subsystems are reset this will return the ground state
333 on all subsystems. If only a some subsystems are reset this
334 function will perform evolution by the reset
335 :class:`~qiskit.quantum_info.SuperOp` of the reset subsystems.
336 """
337 if qargs is None:
338 # Resetting all qubits does not require sampling or RNG
339 state = np.zeros(2 * (self._dim, ), dtype=complex)
340 state[0, 0] = 1
341 return DensityMatrix(state, dims=self._dims)
342
343 # Reset by evolving by reset SuperOp
344 dims = self.dims(qargs)
345 reset_superop = SuperOp(ScalarOp(dims, coeff=0))
346 reset_superop.data[0] = Operator(ScalarOp(dims)).data.ravel()
347 return self.evolve(reset_superop, qargs=qargs)
348
349 @classmethod
350 def from_label(cls, label):
351 r"""Return a tensor product of Pauli X,Y,Z eigenstates.
352
353 .. list-table:: Single-qubit state labels
354 :header-rows: 1
355
356 * - Label
357 - Statevector
358 * - ``"0"``
359 - :math:`\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}`
360 * - ``"1"``
361 - :math:`\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}`
362 * - ``"+"``
363 - :math:`\frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}`
364 * - ``"-"``
365 - :math:`\frac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}`
366 * - ``"r"``
367 - :math:`\frac{1}{2}\begin{pmatrix} 1 & -i \\ i & 1 \end{pmatrix}`
368 * - ``"l"``
369 - :math:`\frac{1}{2}\begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}`
370
371 Args:
372 label (string): a eigenstate string ket label (see table for
373 allowed values).
374
375 Returns:
376 Statevector: The N-qubit basis state density matrix.
377
378 Raises:
379 QiskitError: if the label contains invalid characters, or the length
380 of the label is larger than an explicitly specified num_qubits.
381 """
382 return DensityMatrix(Statevector.from_label(label))
383
384 @staticmethod
385 def from_int(i, dims):
386 """Return a computational basis state density matrix.
387
388 Args:
389 i (int): the basis state element.
390 dims (int or tuple or list): The subsystem dimensions of the statevector
391 (See additional information).
392
393 Returns:
394 DensityMatrix: The computational basis state :math:`|i\\rangle\\!\\langle i|`.
395
396 Additional Information:
397 The ``dims`` kwarg can be an integer or an iterable of integers.
398
399 * ``Iterable`` -- the subsystem dimensions are the values in the list
400 with the total number of subsystems given by the length of the list.
401
402 * ``Int`` -- the integer specifies the total dimension of the
403 state. If it is a power of two the state will be initialized
404 as an N-qubit state. If it is not a power of two the state
405 will have a single d-dimensional subsystem.
406 """
407 size = np.product(dims)
408 state = np.zeros((size, size), dtype=complex)
409 state[i, i] = 1.0
410 return DensityMatrix(state, dims=dims)
411
412 @classmethod
413 def from_instruction(cls, instruction):
414 """Return the output density matrix of an instruction.
415
416 The statevector is initialized in the state :math:`|{0,\\ldots,0}\\rangle` of
417 the same number of qubits as the input instruction or circuit, evolved
418 by the input instruction, and the output statevector returned.
419
420 Args:
421 instruction (qiskit.circuit.Instruction or QuantumCircuit): instruction or circuit
422
423 Returns:
424 DensityMatrix: the final density matrix.
425
426 Raises:
427 QiskitError: if the instruction contains invalid instructions for
428 density matrix simulation.
429 """
430 # Convert circuit to an instruction
431 if isinstance(instruction, QuantumCircuit):
432 instruction = instruction.to_instruction()
433 # Initialize an the statevector in the all |0> state
434 num_qubits = instruction.num_qubits
435 init = np.zeros((2**num_qubits, 2**num_qubits), dtype=complex)
436 init[0, 0] = 1
437 vec = DensityMatrix(init, dims=num_qubits * (2, ))
438 vec._append_instruction(instruction)
439 return vec
440
441 def to_dict(self, decimals=None):
442 r"""Convert the density matrix to dictionary form.
443
444 This dictionary representation uses a Ket-like notation where the
445 dictionary keys are qudit strings for the subsystem basis vectors.
446 If any subsystem has a dimension greater than 10 comma delimiters are
447 inserted between integers so that subsystems can be distinguished.
448
449 Args:
450 decimals (None or int): the number of decimal places to round
451 values. If None no rounding is done
452 (Default: None).
453
454 Returns:
455 dict: the dictionary form of the DensityMatrix.
456
457 Examples:
458
459 The ket-form of a 2-qubit density matrix
460 :math:`rho = |-\rangle\!\langle -|\otimes |0\rangle\!\langle 0|`
461
462 .. jupyter-execute::
463
464 from qiskit.quantum_info import DensityMatrix
465
466 rho = DensityMatrix.from_label('-0')
467 print(rho.to_dict())
468
469 For non-qubit subsystems the integer range can go from 0 to 9. For
470 example in a qutrit system
471
472 .. jupyter-execute::
473
474 import numpy as np
475 from qiskit.quantum_info import DensityMatrix
476
477 mat = np.zeros((9, 9))
478 mat[0, 0] = 0.25
479 mat[3, 3] = 0.25
480 mat[6, 6] = 0.25
481 mat[-1, -1] = 0.25
482 rho = DensityMatrix(mat, dims=(3, 3))
483 print(rho.to_dict())
484
485 For large subsystem dimensions delimeters are required. The
486 following example is for a 20-dimensional system consisting of
487 a qubit and 10-dimensional qudit.
488
489 .. jupyter-execute::
490
491 import numpy as np
492 from qiskit.quantum_info import DensityMatrix
493
494 mat = np.zeros((2 * 10, 2 * 10))
495 mat[0, 0] = 0.5
496 mat[-1, -1] = 0.5
497 rho = DensityMatrix(mat, dims=(2, 10))
498 print(rho.to_dict())
499 """
500 return self._matrix_to_dict(self.data,
501 self._dims,
502 decimals=decimals,
503 string_labels=True)
504
505 @property
506 def _shape(self):
507 """Return the tensor shape of the matrix operator"""
508 return 2 * tuple(reversed(self.dims()))
509
510 def _evolve_operator(self, other, qargs=None):
511 """Evolve density matrix by an operator"""
512 if qargs is None:
513 # Evolution on full matrix
514 if self._dim != other._input_dim:
515 raise QiskitError(
516 "Operator input dimension is not equal to density matrix dimension."
517 )
518 op_mat = other.data
519 mat = np.dot(op_mat, self.data).dot(op_mat.T.conj())
520 return DensityMatrix(mat, dims=other._output_dims)
521 # Otherwise we are applying an operator only to subsystems
522 # Check dimensions of subsystems match the operator
523 if self.dims(qargs) != other.input_dims():
524 raise QiskitError(
525 "Operator input dimensions are not equal to statevector subsystem dimensions."
526 )
527 # Reshape statevector and operator
528 tensor = np.reshape(self.data, self._shape)
529 # Construct list of tensor indices of statevector to be contracted
530 num_indices = len(self.dims())
531 indices = [num_indices - 1 - qubit for qubit in qargs]
532 # Left multiple by mat
533 mat = np.reshape(other.data, other._shape)
534 tensor = Operator._einsum_matmul(tensor, mat, indices)
535 # Right multiply by mat ** dagger
536 adj = other.adjoint()
537 mat_adj = np.reshape(adj.data, adj._shape)
538 tensor = Operator._einsum_matmul(tensor, mat_adj, indices, num_indices,
539 True)
540 # Replace evolved dimensions
541 new_dims = list(self.dims())
542 for i, qubit in enumerate(qargs):
543 new_dims[qubit] = other._output_dims[i]
544 new_dim = np.product(new_dims)
545 return DensityMatrix(np.reshape(tensor, (new_dim, new_dim)),
546 dims=new_dims)
547
548 def _append_instruction(self, other, qargs=None):
549 """Update the current Statevector by applying an instruction."""
550
551 # Try evolving by a matrix operator (unitary-like evolution)
552 mat = Operator._instruction_to_matrix(other)
553 if mat is not None:
554 self._data = self._evolve_operator(Operator(mat), qargs=qargs).data
555 return
556 # Otherwise try evolving by a Superoperator
557 chan = SuperOp._instruction_to_superop(other)
558 if chan is not None:
559 # Evolve current state by the superoperator
560 self._data = chan._evolve(self, qargs=qargs).data
561 return
562 # If the instruction doesn't have a matrix defined we use its
563 # circuit decomposition definition if it exists, otherwise we
564 # cannot compose this gate and raise an error.
565 if other.definition is None:
566 raise QiskitError('Cannot apply Instruction: {}'.format(
567 other.name))
568 for instr, qregs, cregs in other.definition:
569 if cregs:
570 raise QiskitError(
571 'Cannot apply instruction with classical registers: {}'.
572 format(instr.name))
573 # Get the integer position of the flat register
574 if qargs is None:
575 new_qargs = [tup.index for tup in qregs]
576 else:
577 new_qargs = [qargs[tup.index] for tup in qregs]
578 self._append_instruction(instr, qargs=new_qargs)
579
580 def _evolve_instruction(self, obj, qargs=None):
581 """Return a new statevector by applying an instruction."""
582 if isinstance(obj, QuantumCircuit):
583 obj = obj.to_instruction()
584 vec = DensityMatrix(self.data, dims=self._dims)
585 vec._append_instruction(obj, qargs=qargs)
586 return vec
587
588 def to_statevector(self, atol=None, rtol=None):
589 """Return a statevector from a pure density matrix.
590
591 Args:
592 atol (float): Absolute tolerance for checking operation validity.
593 rtol (float): Relative tolerance for checking operation validity.
594
595 Returns:
596 Statevector: The pure density matrix's corresponding statevector.
597 Corresponds to the eigenvector of the only non-zero eigenvalue.
598
599 Raises:
600 QiskitError: if the state is not pure.
601 """
602 if atol is None:
603 atol = self.atol
604 if rtol is None:
605 rtol = self.rtol
606
607 if not is_hermitian_matrix(self._data, atol=atol, rtol=rtol):
608 raise QiskitError("Not a valid density matrix (non-hermitian).")
609
610 evals, evecs = np.linalg.eig(self._data)
611
612 nonzero_evals = evals[abs(evals) > atol]
613 if len(nonzero_evals) != 1 or not np.isclose(nonzero_evals[0], 1,
614 atol=atol, rtol=rtol):
615 raise QiskitError("Density matrix is not a pure state")
616
617 psi = evecs[:, np.argmax(evals)] # eigenvectors returned in columns.
618 return Statevector(psi)
619
620 def to_counts(self):
621 """Returns the density matrix as a counts dict of probabilities.
622
623 DEPRECATED: use :meth:`probabilities_dict` instead.
624
625 Returns:
626 dict: Counts of probabilities.
627 """
628 warnings.warn(
629 'The `Statevector.to_counts` method is deprecated as of 0.13.0,'
630 ' and will be removed no earlier than 3 months after that '
631 'release date. You should use the `Statevector.probabilities_dict`'
632 ' method instead.', DeprecationWarning, stacklevel=2)
633 return self.probabilities_dict()
634
[end of qiskit/quantum_info/states/densitymatrix.py]
[start of qiskit/quantum_info/states/statevector.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Statevector quantum state class.
17 """
18
19 import re
20 import warnings
21 from numbers import Number
22
23 import numpy as np
24
25 from qiskit.circuit.quantumcircuit import QuantumCircuit
26 from qiskit.circuit.instruction import Instruction
27 from qiskit.exceptions import QiskitError
28 from qiskit.quantum_info.states.quantum_state import QuantumState
29 from qiskit.quantum_info.operators.operator import Operator
30 from qiskit.quantum_info.operators.predicates import matrix_equal
31
32
33 class Statevector(QuantumState):
34 """Statevector class"""
35
36 def __init__(self, data, dims=None):
37 """Initialize a statevector object.
38
39 Args:
40 data (vector_like): a complex statevector.
41 dims (int or tuple or list): Optional. The subsystem dimension of
42 the state (See additional information).
43
44 Raises:
45 QiskitError: if input data is not valid.
46
47 Additional Information:
48 The ``dims`` kwarg can be None, an integer, or an iterable of
49 integers.
50
51 * ``Iterable`` -- the subsystem dimensions are the values in the list
52 with the total number of subsystems given by the length of the list.
53
54 * ``Int`` or ``None`` -- the length of the input vector
55 specifies the total dimension of the density matrix. If it is a
56 power of two the state will be initialized as an N-qubit state.
57 If it is not a power of two the state will have a single
58 d-dimensional subsystem.
59 """
60 if isinstance(data, (list, np.ndarray)):
61 # Finally we check if the input is a raw vector in either a
62 # python list or numpy array format.
63 self._data = np.asarray(data, dtype=complex)
64 elif isinstance(data, Statevector):
65 self._data = data._data
66 if dims is None:
67 dims = data._dims
68 elif isinstance(data, Operator):
69 # We allow conversion of column-vector operators to Statevectors
70 input_dim, output_dim = data.dim
71 if input_dim != 1:
72 raise QiskitError("Input Operator is not a column-vector.")
73 self._data = np.ravel(data.data)
74 else:
75 raise QiskitError("Invalid input data format for Statevector")
76 # Check that the input is a numpy vector or column-vector numpy
77 # matrix. If it is a column-vector matrix reshape to a vector.
78 ndim = self._data.ndim
79 shape = self._data.shape
80 if ndim != 1:
81 if ndim == 2 and shape[1] == 1:
82 self._data = np.reshape(self._data, shape[0])
83 elif ndim != 2 or shape[1] != 1:
84 raise QiskitError("Invalid input: not a vector or column-vector.")
85 super().__init__(self._automatic_dims(dims, shape[0]))
86
87 def __eq__(self, other):
88 return super().__eq__(other) and np.allclose(
89 self._data, other._data, rtol=self.rtol, atol=self.atol)
90
91 def __repr__(self):
92 prefix = 'Statevector('
93 pad = len(prefix) * ' '
94 return '{}{},\n{}dims={})'.format(
95 prefix, np.array2string(
96 self.data, separator=', ', prefix=prefix),
97 pad, self._dims)
98
99 @property
100 def data(self):
101 """Return data."""
102 return self._data
103
104 def is_valid(self, atol=None, rtol=None):
105 """Return True if a Statevector has norm 1."""
106 if atol is None:
107 atol = self.atol
108 if rtol is None:
109 rtol = self.rtol
110 norm = np.linalg.norm(self.data)
111 return np.allclose(norm, 1, rtol=rtol, atol=atol)
112
113 def to_operator(self):
114 """Convert state to a rank-1 projector operator"""
115 mat = np.outer(self.data, np.conj(self.data))
116 return Operator(mat, input_dims=self.dims(), output_dims=self.dims())
117
118 def conjugate(self):
119 """Return the conjugate of the operator."""
120 return Statevector(np.conj(self.data), dims=self.dims())
121
122 def trace(self):
123 """Return the trace of the quantum state as a density matrix."""
124 return np.sum(np.abs(self.data) ** 2)
125
126 def purity(self):
127 """Return the purity of the quantum state."""
128 # For a valid statevector the purity is always 1, however if we simply
129 # have an arbitrary vector (not correctly normalized) then the
130 # purity is equivalent to the trace squared:
131 # P(|psi>) = Tr[|psi><psi|psi><psi|] = |<psi|psi>|^2
132 return self.trace() ** 2
133
134 def tensor(self, other):
135 """Return the tensor product state self ⊗ other.
136
137 Args:
138 other (Statevector): a quantum state object.
139
140 Returns:
141 Statevector: the tensor product operator self ⊗ other.
142
143 Raises:
144 QiskitError: if other is not a quantum state.
145 """
146 if not isinstance(other, Statevector):
147 other = Statevector(other)
148 dims = other.dims() + self.dims()
149 data = np.kron(self._data, other._data)
150 return Statevector(data, dims)
151
152 def expand(self, other):
153 """Return the tensor product state other ⊗ self.
154
155 Args:
156 other (Statevector): a quantum state object.
157
158 Returns:
159 Statevector: the tensor product state other ⊗ self.
160
161 Raises:
162 QiskitError: if other is not a quantum state.
163 """
164 if not isinstance(other, Statevector):
165 other = Statevector(other)
166 dims = self.dims() + other.dims()
167 data = np.kron(other._data, self._data)
168 return Statevector(data, dims)
169
170 def _add(self, other):
171 """Return the linear combination self + other.
172
173 Args:
174 other (Statevector): a quantum state object.
175
176 Returns:
177 Statevector: the linear combination self + other.
178
179 Raises:
180 QiskitError: if other is not a quantum state, or has
181 incompatible dimensions.
182 """
183 if not isinstance(other, Statevector):
184 other = Statevector(other)
185 if self.dim != other.dim:
186 raise QiskitError("other Statevector has different dimensions.")
187 return Statevector(self.data + other.data, self.dims())
188
189 def _multiply(self, other):
190 """Return the scalar multiplied state self * other.
191
192 Args:
193 other (complex): a complex number.
194
195 Returns:
196 Statevector: the scalar multiplied state other * self.
197
198 Raises:
199 QiskitError: if other is not a valid complex number.
200 """
201 if not isinstance(other, Number):
202 raise QiskitError("other is not a number")
203 return Statevector(other * self.data, self.dims())
204
205 def evolve(self, other, qargs=None):
206 """Evolve a quantum state by the operator.
207
208 Args:
209 other (Operator): The operator to evolve by.
210 qargs (list): a list of Statevector subsystem positions to apply
211 the operator on.
212
213 Returns:
214 Statevector: the output quantum state.
215
216 Raises:
217 QiskitError: if the operator dimension does not match the
218 specified Statevector subsystem dimensions.
219 """
220 if qargs is None:
221 qargs = getattr(other, 'qargs', None)
222
223 # Evolution by a circuit or instruction
224 if isinstance(other, (QuantumCircuit, Instruction)):
225 return self._evolve_instruction(other, qargs=qargs)
226 # Evolution by an Operator
227 if not isinstance(other, Operator):
228 other = Operator(other)
229 if qargs is None:
230 # Evolution on full statevector
231 if self._dim != other._input_dim:
232 raise QiskitError(
233 "Operator input dimension is not equal to statevector dimension."
234 )
235 return Statevector(np.dot(other.data, self.data), dims=other.output_dims())
236 # Otherwise we are applying an operator only to subsystems
237 # Check dimensions of subsystems match the operator
238 if self.dims(qargs) != other.input_dims():
239 raise QiskitError(
240 "Operator input dimensions are not equal to statevector subsystem dimensions."
241 )
242 # Reshape statevector and operator
243 tensor = np.reshape(self.data, self._shape)
244 mat = np.reshape(other.data, other._shape)
245 # Construct list of tensor indices of statevector to be contracted
246 num_indices = len(self.dims())
247 indices = [num_indices - 1 - qubit for qubit in qargs]
248 tensor = Operator._einsum_matmul(tensor, mat, indices)
249 new_dims = list(self.dims())
250 for i, qubit in enumerate(qargs):
251 new_dims[qubit] = other._output_dims[i]
252 # Replace evolved dimensions
253 return Statevector(np.reshape(tensor, np.product(new_dims)), dims=new_dims)
254
255 def equiv(self, other, rtol=None, atol=None):
256 """Return True if statevectors are equivalent up to global phase.
257
258 Args:
259 other (Statevector): a statevector object.
260 rtol (float): relative tolerance value for comparison.
261 atol (float): absolute tolerance value for comparison.
262
263 Returns:
264 bool: True if statevectors are equivalent up to global phase.
265 """
266 if not isinstance(other, Statevector):
267 try:
268 other = Statevector(other)
269 except QiskitError:
270 return False
271 if self.dim != other.dim:
272 return False
273 if atol is None:
274 atol = self.atol
275 if rtol is None:
276 rtol = self.rtol
277 return matrix_equal(self.data, other.data, ignore_phase=True,
278 rtol=rtol, atol=atol)
279
280 def probabilities(self, qargs=None, decimals=None):
281 """Return the subsystem measurement probability vector.
282
283 Measurement probabilities are with respect to measurement in the
284 computation (diagonal) basis.
285
286 Args:
287 qargs (None or list): subsystems to return probabilities for,
288 if None return for all subsystems (Default: None).
289 decimals (None or int): the number of decimal places to round
290 values. If None no rounding is done (Default: None).
291
292 Returns:
293 np.array: The Numpy vector array of probabilities.
294
295 Examples:
296
297 Consider a 2-qubit product state
298 :math:`|\\psi\\rangle=|+\\rangle\\otimes|0\\rangle`.
299
300 .. jupyter-execute::
301
302 from qiskit.quantum_info import Statevector
303
304 psi = Statevector.from_label('+0')
305
306 # Probabilities for measuring both qubits
307 probs = psi.probabilities()
308 print('probs: {}'.format(probs))
309
310 # Probabilities for measuring only qubit-0
311 probs_qubit_0 = psi.probabilities([0])
312 print('Qubit-0 probs: {}'.format(probs_qubit_0))
313
314 # Probabilities for measuring only qubit-1
315 probs_qubit_1 = psi.probabilities([1])
316 print('Qubit-1 probs: {}'.format(probs_qubit_1))
317
318 We can also permute the order of qubits in the ``qargs`` list
319 to change the qubit position in the probabilities output
320
321 .. jupyter-execute::
322
323 from qiskit.quantum_info import Statevector
324
325 psi = Statevector.from_label('+0')
326
327 # Probabilities for measuring both qubits
328 probs = psi.probabilities([0, 1])
329 print('probs: {}'.format(probs))
330
331 # Probabilities for measuring both qubits
332 # but swapping qubits 0 and 1 in output
333 probs_swapped = psi.probabilities([1, 0])
334 print('Swapped probs: {}'.format(probs_swapped))
335 """
336 probs = self._subsystem_probabilities(
337 np.abs(self.data) ** 2, self._dims, qargs=qargs)
338 if decimals is not None:
339 probs = probs.round(decimals=decimals)
340 return probs
341
342 def reset(self, qargs=None):
343 """Reset state or subsystems to the 0-state.
344
345 Args:
346 qargs (list or None): subsystems to reset, if None all
347 subsystems will be reset to their 0-state
348 (Default: None).
349
350 Returns:
351 Statevector: the reset state.
352
353 Additional Information:
354 If all subsystems are reset this will return the ground state
355 on all subsystems. If only a some subsystems are reset this
356 function will perform a measurement on those subsystems and
357 evolve the subsystems so that the collapsed post-measurement
358 states are rotated to the 0-state. The RNG seed for this
359 sampling can be set using the :meth:`seed` method.
360 """
361 if qargs is None:
362 # Resetting all qubits does not require sampling or RNG
363 state = np.zeros(self._dim, dtype=complex)
364 state[0] = 1
365 return Statevector(state, dims=self._dims)
366
367 # Sample a single measurement outcome
368 dims = self.dims(qargs)
369 probs = self.probabilities(qargs)
370 sample = self._rng.choice(len(probs), p=probs, size=1)
371
372 # Convert to projector for state update
373 proj = np.zeros(len(probs), dtype=complex)
374 proj[sample] = 1 / np.sqrt(probs[sample])
375
376 # Rotate outcome to 0
377 reset = np.eye(len(probs))
378 reset[0, 0] = 0
379 reset[sample, sample] = 0
380 reset[0, sample] = 1
381
382 # compose with reset projection
383 reset = np.dot(reset, np.diag(proj))
384 return self.evolve(
385 Operator(reset, input_dims=dims, output_dims=dims),
386 qargs=qargs)
387
388 def to_counts(self):
389 """Returns the statevector as a counts dict
390 of probabilities.
391
392 DEPRECATED: use :meth:`probabilities_dict` instead.
393
394 Returns:
395 dict: Counts of probabilities.
396 """
397 warnings.warn(
398 'The `Statevector.to_counts` method is deprecated as of 0.13.0,'
399 ' and will be removed no earlier than 3 months after that '
400 'release date. You should use the `Statevector.probabilities_dict`'
401 ' method instead.', DeprecationWarning, stacklevel=2)
402 return self.probabilities_dict()
403
404 @classmethod
405 def from_label(cls, label):
406 """Return a tensor product of Pauli X,Y,Z eigenstates.
407
408 .. list-table:: Single-qubit state labels
409 :header-rows: 1
410
411 * - Label
412 - Statevector
413 * - ``"0"``
414 - :math:`[1, 0]`
415 * - ``"1"``
416 - :math:`[0, 1]`
417 * - ``"+"``
418 - :math:`[1 / \\sqrt{2}, 1 / \\sqrt{2}]`
419 * - ``"-"``
420 - :math:`[1 / \\sqrt{2}, -1 / \\sqrt{2}]`
421 * - ``"r"``
422 - :math:`[1 / \\sqrt{2}, i / \\sqrt{2}]`
423 * - ``"l"``
424 - :math:`[1 / \\sqrt{2}, -i / \\sqrt{2}]`
425
426 Args:
427 label (string): a eigenstate string ket label (see table for
428 allowed values).
429
430 Returns:
431 Statevector: The N-qubit basis state density matrix.
432
433 Raises:
434 QiskitError: if the label contains invalid characters, or the
435 length of the label is larger than an explicitly
436 specified num_qubits.
437 """
438 # Check label is valid
439 if re.match(r'^[01rl\-+]+$', label) is None:
440 raise QiskitError('Label contains invalid characters.')
441 # We can prepare Z-eigenstates by converting the computational
442 # basis bit-string to an integer and preparing that unit vector
443 # However, for X-basis states, we will prepare a Z-eigenstate first
444 # then apply Hadamard gates to rotate 0 and 1s to + and -.
445 z_label = label
446 xy_states = False
447 if re.match('^[01]+$', label) is None:
448 # We have X or Y eigenstates so replace +,r with 0 and
449 # -,l with 1 and prepare the corresponding Z state
450 xy_states = True
451 z_label = z_label.replace('+', '0')
452 z_label = z_label.replace('r', '0')
453 z_label = z_label.replace('-', '1')
454 z_label = z_label.replace('l', '1')
455 # Initialize Z eigenstate vector
456 num_qubits = len(label)
457 data = np.zeros(1 << num_qubits, dtype=complex)
458 pos = int(z_label, 2)
459 data[pos] = 1
460 state = Statevector(data)
461 if xy_states:
462 # Apply hadamards to all qubits in X eigenstates
463 x_mat = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
464 # Apply S.H to qubits in Y eigenstates
465 y_mat = np.dot(np.diag([1, 1j]), x_mat)
466 for qubit, char in enumerate(reversed(label)):
467 if char in ['+', '-']:
468 state = state.evolve(x_mat, qargs=[qubit])
469 elif char in ['r', 'l']:
470 state = state.evolve(y_mat, qargs=[qubit])
471 return state
472
473 @staticmethod
474 def from_int(i, dims):
475 """Return a computational basis statevector.
476
477 Args:
478 i (int): the basis state element.
479 dims (int or tuple or list): The subsystem dimensions of the statevector
480 (See additional information).
481
482 Returns:
483 Statevector: The computational basis state :math:`|i\\rangle`.
484
485 Additional Information:
486 The ``dims`` kwarg can be an integer or an iterable of integers.
487
488 * ``Iterable`` -- the subsystem dimensions are the values in the list
489 with the total number of subsystems given by the length of the list.
490
491 * ``Int`` -- the integer specifies the total dimension of the
492 state. If it is a power of two the state will be initialized
493 as an N-qubit state. If it is not a power of two the state
494 will have a single d-dimensional subsystem.
495 """
496 size = np.product(dims)
497 state = np.zeros(size, dtype=complex)
498 state[i] = 1.0
499 return Statevector(state, dims=dims)
500
501 @classmethod
502 def from_instruction(cls, instruction):
503 """Return the output statevector of an instruction.
504
505 The statevector is initialized in the state :math:`|{0,\\ldots,0}\\rangle` of the
506 same number of qubits as the input instruction or circuit, evolved
507 by the input instruction, and the output statevector returned.
508
509 Args:
510 instruction (qiskit.circuit.Instruction or QuantumCircuit): instruction or circuit
511
512 Returns:
513 Statevector: The final statevector.
514
515 Raises:
516 QiskitError: if the instruction contains invalid instructions for
517 the statevector simulation.
518 """
519 # Convert circuit to an instruction
520 if isinstance(instruction, QuantumCircuit):
521 instruction = instruction.to_instruction()
522 # Initialize an the statevector in the all |0> state
523 init = np.zeros(2 ** instruction.num_qubits, dtype=complex)
524 init[0] = 1.0
525 vec = Statevector(init, dims=instruction.num_qubits * (2,))
526 vec._append_instruction(instruction)
527 return vec
528
529 def to_dict(self, decimals=None):
530 r"""Convert the statevector to dictionary form.
531
532 This dictionary representation uses a Ket-like notation where the
533 dictionary keys are qudit strings for the subsystem basis vectors.
534 If any subsystem has a dimension greater than 10 comma delimiters are
535 inserted between integers so that subsystems can be distinguished.
536
537 Args:
538 decimals (None or int): the number of decimal places to round
539 values. If None no rounding is done
540 (Default: None).
541
542 Returns:
543 dict: the dictionary form of the Statevector.
544
545 Example:
546
547 The ket-form of a 2-qubit statevector
548 :math:`|\psi\rangle = |-\rangle\otimes |0\rangle`
549
550 .. jupyter-execute::
551
552 from qiskit.quantum_info import Statevector
553
554 psi = Statevector.from_label('-0')
555 print(psi.to_dict())
556
557 For non-qubit subsystems the integer range can go from 0 to 9. For
558 example in a qutrit system
559
560 .. jupyter-execute::
561
562 import numpy as np
563 from qiskit.quantum_info import Statevector
564
565 vec = np.zeros(9)
566 vec[0] = 1 / np.sqrt(2)
567 vec[-1] = 1 / np.sqrt(2)
568 psi = Statevector(vec, dims=(3, 3))
569 print(psi.to_dict())
570
571 For large subsystem dimensions delimeters are required. The
572 following example is for a 20-dimensional system consisting of
573 a qubit and 10-dimensional qudit.
574
575 .. jupyter-execute::
576
577 import numpy as np
578 from qiskit.quantum_info import Statevector
579
580 vec = np.zeros(2 * 10)
581 vec[0] = 1 / np.sqrt(2)
582 vec[-1] = 1 / np.sqrt(2)
583 psi = Statevector(vec, dims=(2, 10))
584 print(psi.to_dict())
585 """
586 return self._vector_to_dict(self.data,
587 self._dims,
588 decimals=decimals,
589 string_labels=True)
590
591 @property
592 def _shape(self):
593 """Return the tensor shape of the matrix operator"""
594 return tuple(reversed(self.dims()))
595
596 def _append_instruction(self, obj, qargs=None):
597 """Update the current Statevector by applying an instruction."""
598 mat = Operator._instruction_to_matrix(obj)
599 if mat is not None:
600 # Perform the composition and inplace update the current state
601 # of the operator
602 state = self.evolve(mat, qargs=qargs)
603 self._data = state.data
604 else:
605 # If the instruction doesn't have a matrix defined we use its
606 # circuit decomposition definition if it exists, otherwise we
607 # cannot compose this gate and raise an error.
608 if obj.definition is None:
609 raise QiskitError('Cannot apply Instruction: {}'.format(obj.name))
610 for instr, qregs, cregs in obj.definition:
611 if cregs:
612 raise QiskitError(
613 'Cannot apply instruction with classical registers: {}'.format(
614 instr.name))
615 # Get the integer position of the flat register
616 if qargs is None:
617 new_qargs = [tup.index for tup in qregs]
618 else:
619 new_qargs = [qargs[tup.index] for tup in qregs]
620 self._append_instruction(instr, qargs=new_qargs)
621
622 def _evolve_instruction(self, obj, qargs=None):
623 """Return a new statevector by applying an instruction."""
624 if isinstance(obj, QuantumCircuit):
625 obj = obj.to_instruction()
626 vec = Statevector(self.data, dims=self.dims())
627 vec._append_instruction(obj, qargs=qargs)
628 return vec
629
[end of qiskit/quantum_info/states/statevector.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| Qiskit/qiskit | aae6eab589915d948f89d1b131019282560df61a | `initialize` and `Statevector` don't play nicely
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit Aer version**: 0.5.1
- **Python version**: 3.7.3
- **Operating system**: OSX
### What is the current behavior?
Using `initialize` in a circuit and then running with `Statevector` results in the error "Cannot apply Instruction: reset"
### Steps to reproduce the problem
```
import qiskit as qk
import qiskit.quantum_info as qi
from numpy import sqrt
n = 2
ket0 = [1/sqrt(2),0,0,1/sqrt(2)]
qc = qk.QuantumCircuit(n)
qc.initialize(ket0,range(n))
ket_qi = qi.Statevector.from_instruction(qc)
```
| Funny cause the circuit has no resets:
```
┌───┐┌───────────────┐┌───┐»
q_0: ─|0>──────────────────────────────┤ X ├┤ U3(-pi/2,0,0) ├┤ X ├»
┌──────────────┐┌───────────┐└─┬─┘└───────────────┘└─┬─┘»
q_1: ─|0>─┤ U3(pi/2,0,0) ├┤ U3(0,0,0) ├──■─────────────────────■──»
└──────────────┘└───────────┘ »
« ┌──────────────┐┌───┐┌───────────┐┌───┐┌───────────┐
«q_0: ┤ U3(pi/2,0,0) ├┤ X ├┤ U3(0,0,0) ├┤ X ├┤ U3(0,0,0) ├
« └──────────────┘└─┬─┘└───────────┘└─┬─┘└───────────┘
«q_1: ──────────────────■─────────────────■───────────────
«
```
That circuit has a lot of things I don't like, but at least it has no resets.
@nonhermitian That circuit *does* have resets, notice the two |0> |0> at the start? The definition of initialize has reset instructions on all qubits that are removed by the transpiler if it is the first instruction in a circuit. This is so you can "initialize" subsets of qubits at any stage of a circuit.
I'm transferring this to Qiskit Terra. In the future please post issues related to the Quantum info module in Terra, not Aer.
That is not just the initial state of the qubits? That is what used to be there. It is hard to tell the difference here. So that means that initialize is no longer a unitary operation like it was before? I guess I missed that as well.
A reset method was added to the Statevector class last release, so the simulation could be updated to work with initialize via adding support of reset to the the from_instruction method.
@nonhermitian I think this was changed recently in the draw function. Now circuits are drawn without any initial states, so any time you see them they refer to reset instructions.
Yeah, it is just that the original text drawing is very close to the reset instructions so it is hard for the user to understand. | 2020-05-15T16:51:32Z | <patch>
diff --git a/qiskit/quantum_info/states/densitymatrix.py b/qiskit/quantum_info/states/densitymatrix.py
--- a/qiskit/quantum_info/states/densitymatrix.py
+++ b/qiskit/quantum_info/states/densitymatrix.py
@@ -547,12 +547,22 @@ def _evolve_operator(self, other, qargs=None):
def _append_instruction(self, other, qargs=None):
"""Update the current Statevector by applying an instruction."""
+ from qiskit.circuit.reset import Reset
+ from qiskit.circuit.barrier import Barrier
# Try evolving by a matrix operator (unitary-like evolution)
mat = Operator._instruction_to_matrix(other)
if mat is not None:
self._data = self._evolve_operator(Operator(mat), qargs=qargs).data
return
+
+ # Special instruction types
+ if isinstance(other, Reset):
+ self._data = self.reset(qargs)._data
+ return
+ if isinstance(other, Barrier):
+ return
+
# Otherwise try evolving by a Superoperator
chan = SuperOp._instruction_to_superop(other)
if chan is not None:
diff --git a/qiskit/quantum_info/states/statevector.py b/qiskit/quantum_info/states/statevector.py
--- a/qiskit/quantum_info/states/statevector.py
+++ b/qiskit/quantum_info/states/statevector.py
@@ -595,29 +595,40 @@ def _shape(self):
def _append_instruction(self, obj, qargs=None):
"""Update the current Statevector by applying an instruction."""
+ from qiskit.circuit.reset import Reset
+ from qiskit.circuit.barrier import Barrier
+
mat = Operator._instruction_to_matrix(obj)
if mat is not None:
# Perform the composition and inplace update the current state
# of the operator
- state = self.evolve(mat, qargs=qargs)
- self._data = state.data
- else:
- # If the instruction doesn't have a matrix defined we use its
- # circuit decomposition definition if it exists, otherwise we
- # cannot compose this gate and raise an error.
- if obj.definition is None:
- raise QiskitError('Cannot apply Instruction: {}'.format(obj.name))
- for instr, qregs, cregs in obj.definition:
- if cregs:
- raise QiskitError(
- 'Cannot apply instruction with classical registers: {}'.format(
- instr.name))
- # Get the integer position of the flat register
- if qargs is None:
- new_qargs = [tup.index for tup in qregs]
- else:
- new_qargs = [qargs[tup.index] for tup in qregs]
- self._append_instruction(instr, qargs=new_qargs)
+ self._data = self.evolve(mat, qargs=qargs).data
+ return
+
+ # Special instruction types
+ if isinstance(obj, Reset):
+ self._data = self.reset(qargs)._data
+ return
+ if isinstance(obj, Barrier):
+ return
+
+ # If the instruction doesn't have a matrix defined we use its
+ # circuit decomposition definition if it exists, otherwise we
+ # cannot compose this gate and raise an error.
+ if obj.definition is None:
+ raise QiskitError('Cannot apply Instruction: {}'.format(obj.name))
+
+ for instr, qregs, cregs in obj.definition:
+ if cregs:
+ raise QiskitError(
+ 'Cannot apply instruction with classical registers: {}'.format(
+ instr.name))
+ # Get the integer position of the flat register
+ if qargs is None:
+ new_qargs = [tup.index for tup in qregs]
+ else:
+ new_qargs = [qargs[tup.index] for tup in qregs]
+ self._append_instruction(instr, qargs=new_qargs)
def _evolve_instruction(self, obj, qargs=None):
"""Return a new statevector by applying an instruction."""
</patch> | [] | [] | |||
ipython__ipython-10213 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remove usage of backports.shutil_get_terminal_size
This is for pre-3.3 Python.
Pretty easy it should only require deleting lines.
Maybe a few need to be dedented.
</issue>
<code>
[start of README.rst]
1 .. image:: https://codecov.io/github/ipython/ipython/coverage.svg?branch=master
2 :target: https://codecov.io/github/ipython/ipython?branch=master
3
4 .. image:: https://img.shields.io/pypi/dm/IPython.svg
5 :target: https://pypi.python.org/pypi/ipython
6
7 .. image:: https://img.shields.io/pypi/v/IPython.svg
8 :target: https://pypi.python.org/pypi/ipython
9
10 .. image:: https://img.shields.io/travis/ipython/ipython.svg
11 :target: https://travis-ci.org/ipython/ipython
12
13
14 ===========================================
15 IPython: Productive Interactive Computing
16 ===========================================
17
18 Overview
19 ========
20
21 Welcome to IPython. Our full documentation is available on `ipython.readthedocs.io
22 <https://ipython.readthedocs.io/en/stable/>`_ and contains information on how to install, use and
23 contribute to the project.
24
25 Officially, IPython requires Python version 3.3 and above.
26 IPython 5.x is the last IPython version to support Python 2.7.
27
28 The Notebook, Qt console and a number of other pieces are now parts of *Jupyter*.
29 See the `Jupyter installation docs <http://jupyter.readthedocs.io/en/latest/install.html>`__
30 if you want to use these.
31
32
33
34
35 Development and Instant running
36 ===============================
37
38 You can find the latest version of the development documentation on `readthedocs
39 <http://ipython.readthedocs.io/en/latest/>`_.
40
41 You can run IPython from this directory without even installing it system-wide
42 by typing at the terminal::
43
44 $ python -m IPython
45
46 Or see the `development installation docs
47 <http://ipython.readthedocs.io/en/latest/install/install.html#installing-the-development-version>`_
48 for the latest revision on read the docs.
49
50 Documentation and installation instructions for older version of IPython can be
51 found on the `IPython website <http://ipython.org/documentation.html>`_
52
53
54
55 IPython requires Python version 3 or above
56 ==========================================
57
58 Starting with version 6.0, IPython does not support Python 2.7, 3.0, 3.1, or
59 3.2.
60
61 For a version compatible with Python 2.7, please install the 5.x LTS Long Term
62 Support version.
63
64 If you are encountering this error message you are likely trying to install or
65 use IPython from source. You need to checkout the remote 5.x branch. If you are
66 using git the following should work:
67
68 $ git fetch origin
69 $ git checkout -b origin/5.x
70
71 If you encounter this error message with a regular install of IPython, then you
72 likely need to update your package manager, for example if you are using `pip`
73 check the version of pip with
74
75 $ pip --version
76
77 You will need to update pip to the version 8.2 or greater. If you are not using
78 pip, please inquiry with the maintainers of the package for your package
79 manager.
80
81 For more information see one of our blog posts:
82
83 http://blog.jupyter.org/2016/07/08/ipython-5-0-released/
84
85 As well as the following Pull-Request for discussion:
86
87 https://github.com/ipython/ipython/pull/9900
88
[end of README.rst]
[start of IPython/utils/terminal.py]
1 # encoding: utf-8
2 """
3 Utilities for working with terminals.
4
5 Authors:
6
7 * Brian E. Granger
8 * Fernando Perez
9 * Alexander Belchenko (e-mail: bialix AT ukr.net)
10 """
11
12 # Copyright (c) IPython Development Team.
13 # Distributed under the terms of the Modified BSD License.
14
15 import os
16 import sys
17 import warnings
18 try:
19 from shutil import get_terminal_size as _get_terminal_size
20 except ImportError:
21 # use backport on Python 2
22 from backports.shutil_get_terminal_size import get_terminal_size as _get_terminal_size
23
24 from . import py3compat
25
26 #-----------------------------------------------------------------------------
27 # Code
28 #-----------------------------------------------------------------------------
29
30 # This variable is part of the expected API of the module:
31 ignore_termtitle = True
32
33
34
35 if os.name == 'posix':
36 def _term_clear():
37 os.system('clear')
38 elif sys.platform == 'win32':
39 def _term_clear():
40 os.system('cls')
41 else:
42 def _term_clear():
43 pass
44
45
46
47 def toggle_set_term_title(val):
48 """Control whether set_term_title is active or not.
49
50 set_term_title() allows writing to the console titlebar. In embedded
51 widgets this can cause problems, so this call can be used to toggle it on
52 or off as needed.
53
54 The default state of the module is for the function to be disabled.
55
56 Parameters
57 ----------
58 val : bool
59 If True, set_term_title() actually writes to the terminal (using the
60 appropriate platform-specific module). If False, it is a no-op.
61 """
62 global ignore_termtitle
63 ignore_termtitle = not(val)
64
65
66 def _set_term_title(*args,**kw):
67 """Dummy no-op."""
68 pass
69
70
71 def _set_term_title_xterm(title):
72 """ Change virtual terminal title in xterm-workalikes """
73 sys.stdout.write('\033]0;%s\007' % title)
74
75 if os.name == 'posix':
76 TERM = os.environ.get('TERM','')
77 if TERM.startswith('xterm'):
78 _set_term_title = _set_term_title_xterm
79 elif sys.platform == 'win32':
80 try:
81 import ctypes
82
83 SetConsoleTitleW = ctypes.windll.kernel32.SetConsoleTitleW
84 SetConsoleTitleW.argtypes = [ctypes.c_wchar_p]
85
86 def _set_term_title(title):
87 """Set terminal title using ctypes to access the Win32 APIs."""
88 SetConsoleTitleW(title)
89 except ImportError:
90 def _set_term_title(title):
91 """Set terminal title using the 'title' command."""
92 global ignore_termtitle
93
94 try:
95 # Cannot be on network share when issuing system commands
96 curr = os.getcwd()
97 os.chdir("C:")
98 ret = os.system("title " + title)
99 finally:
100 os.chdir(curr)
101 if ret:
102 # non-zero return code signals error, don't try again
103 ignore_termtitle = True
104
105
106 def set_term_title(title):
107 """Set terminal title using the necessary platform-dependent calls."""
108 if ignore_termtitle:
109 return
110 _set_term_title(title)
111
112
113 def freeze_term_title():
114 warnings.warn("This function is deprecated, use toggle_set_term_title()")
115 global ignore_termtitle
116 ignore_termtitle = True
117
118
119 def get_terminal_size(defaultx=80, defaulty=25):
120 return _get_terminal_size((defaultx, defaulty))
121
[end of IPython/utils/terminal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| ipython/ipython | 78ec96d7ca0147f0655d5260f2ab0c61d94e4279 | remove usage of backports.shutil_get_terminal_size
This is for pre-3.3 Python.
Pretty easy it should only require deleting lines.
Maybe a few need to be dedented.
| 2017-01-28T05:22:06Z | <patch>
diff --git a/IPython/utils/terminal.py b/IPython/utils/terminal.py
--- a/IPython/utils/terminal.py
+++ b/IPython/utils/terminal.py
@@ -15,11 +15,7 @@
import os
import sys
import warnings
-try:
- from shutil import get_terminal_size as _get_terminal_size
-except ImportError:
- # use backport on Python 2
- from backports.shutil_get_terminal_size import get_terminal_size as _get_terminal_size
+from shutil import get_terminal_size as _get_terminal_size
from . import py3compat
</patch> | [] | [] | ||||
Qiskit__qiskit-1295 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
credentials failed for qiskit ver 0.6.1
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit Terra version**: 0.6.1
- **Python version**: 3.7.0
- **Operating system**:MAC OSX 10.13.6
### What is the current behavior?
After I acquired fresh token from https://quantumexperience.ng.bluemix.net/qx/account/advanced
IBMQ.load_accounts() fails.
### Steps to reproduce the problem
```
from qiskit import IBMQ
myToken='b6abe11442c9a...'
IBMQ.save_account(myToken)
IBMQ.load_accounts()
```
Results with
```
Traceback (most recent call last):
File "/anaconda3/lib/python3.7/site-packages/qiskit/backends/ibmq/ibmqsingleprovider.py", line 71, in _authenticate
credentials.verify)
File "/anaconda3/lib/python3.7/site-packages/IBMQuantumExperience/IBMQuantumExperience.py", line 478, in __init__
self.req = _Request(token, config=config, verify=verify)
File "/anaconda3/lib/python3.7/site-packages/IBMQuantumExperience/IBMQuantumExperience.py", line 253, in __init__
ntlm_credentials=self.ntlm_credentials)
File "/anaconda3/lib/python3.7/site-packages/IBMQuantumExperience/IBMQuantumExperience.py", line 95, in __init__
self.obtain_token(config=self.config)
File "/anaconda3/lib/python3.7/site-packages/IBMQuantumExperience/IBMQuantumExperience.py", line 159, in obtain_token
raise CredentialsError('error during login: %s' % error_message)
IBMQuantumExperience.IBMQuantumExperience.CredentialsError: error during login: Wrong user or password, check your credentials.
```
### What is the expected behavior?
Would be better if IBMQ.load_accounts() accepted me. All worked well w/ ver 0.5.
### Suggested solutions
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [![PyPI](https://img.shields.io/pypi/v/qiskit.svg)](https://pypi.python.org/pypi/qiskit)
4 [![Build Status](https://travis-ci.org/Qiskit/qiskit-terra.svg?branch=master)](https://travis-ci.org/Qiskit/qiskit-terra)
5 [![Build Status IBM Q](https://travis-matrix-badges.herokuapp.com/repos/Qiskit/qiskit-terra/branches/master/8)](https://travis-ci.org/Qiskit/qiskit-terra)
6
7 **Qiskit** is a software development kit for
8 developing quantum computing applications and working with NISQ (Noisy-Intermediate Scale Quantum) computers.
9
10 Qiskit is made up elements that each work together to enable quantum computing. This element is **Terra**
11 and is the foundation on which the rest of Qiskit is built (see this [post](https://medium.com/qiskit/qiskit-and-its-fundamental-elements-bcd7ead80492) for an overview).
12
13
14 ## Installation
15
16
17 We encourage installing Qiskit via the PIP tool (a python package manager):
18
19 ```bash
20 pip install qiskit
21 ```
22
23 PIP will handle all dependencies automatically for us and you will always install the latest (and well-tested) version.
24
25 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using Qiskit. In
26 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended
27 for interacting with the tutorials.
28 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads)
29 python distribution, as it comes with all of these dependencies pre-installed.
30
31 See [installing](doc/install.rst) Qiskit for detailed instructions, how to build from source and using environments.
32
33
34 ## Creating your first quantum program
35
36 Now that Qiskit is installed, it's time to begin working with Terra.
37
38 We are ready to try out a quantum circuit example, which is simulated locally using
39 the Qiskt Aer element. This is a simple example that makes an entangled state.
40
41 ```
42 $ python
43 ```
44
45 ```python
46 >>> from qiskit import *
47 >>> q = QuantumRegister(2)
48 >>> c = ClassicalRegister(2)
49 >>> qc = QuantumCircuit(q, c)
50 >>> qc.h(q[0])
51 >>> qc.cx(q[0], q[1])
52 >>> qc.measure(q, c)
53 >>> backend_sim = Aer.get_backend('qasm_simulator')
54 >>> result = execute(qc, backend_sim).result()
55 >>> print(result.get_counts(qc))
56 ```
57
58 In this case, the output will be:
59
60 ```python
61 {'counts': {'00': 513, '11': 511}}
62 ```
63
64 A script is available [here](examples/python/hello_quantum.py), where we also show how to
65 run the same program on a real quantum computer via IBMQ.
66
67 ### Executing your code on a real quantum chip
68
69 You can also use Qiskit to execute your code on a
70 **real quantum chip**.
71 In order to do so, you need to configure Qiskit for using the credentials in
72 your IBM Q account:
73
74 #### Configure your IBMQ credentials
75
76 1. Create an _[IBM Q](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
77
78 2. Get an API token from the IBM Q website under _My Account > Advanced > API Token_.
79
80 3. Take your token from step 2, here called `MY_API_TOKEN`, and run:
81
82 ```python
83 >>> from qiskit import IBMQ
84 >>> IBMQ.save_account('MY_API_TOKEN')
85 ```
86
87 4. If you have access to the IBM Q Network features, you also need to pass the
88 url listed on your IBM Q account page to `save_account`.
89
90 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
91 Once they are stored, at any point in the future you can load and use them
92 in your program simply via:
93
94 ```python
95 >>> from qiskit import IBMQ
96 >>> IBMQ.load_accounts()
97 ```
98
99 For those who do not want to save there credentials to disk please use
100
101 ```python
102 >>> from qiskit import IBMQ
103 >>> IBMQ.enable_account('MY_API_TOKEN')
104 ```
105
106 and the token will only be active for the session. For examples using Terra with real
107 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
108 the levels.
109
110 ## Contribution guidelines
111
112 If you'd like to contribute to Qiskit, please take a look at our
113 [contribution guidelines](.github/CONTRIBUTING.rst). This project adheres to Qiskit's [code of conduct](.github/CODE_OF_CONDUCT.rst). By participating, you are expect to uphold to this code.
114
115 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs.
116 Please use our [slack](https://qiskit.slack.com) for discussion. To join our Slack community use the [link](https://join.slack.com/t/qiskit/shared_invite/enQtNDc2NjUzMjE4Mzc0LTMwZmE0YTM4ZThiNGJmODkzN2Y2NTNlMDIwYWNjYzA2ZmM1YTRlZGQ3OGM0NjcwMjZkZGE0MTA4MGQ1ZTVmYzk). To ask questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/qiskit).
117
118
119
120 ### Next Steps
121
122 Now you're set up and ready to check out some of the other examples from our
123 [Qiskit Tutorial](https://github.com/Qiskit/qiskit-tutorial) repository.
124
125
126 ## Authors
127
128 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
129 to the project at different levels.
130
131 ## License
132
133 [Apache License 2.0](LICENSE.txt)
[end of README.md]
[start of qiskit/backends/ibmq/credentials/_configrc.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Utilities for reading and writing credentials from and to configuration files.
10 """
11
12 import os
13 from ast import literal_eval
14 from collections import OrderedDict
15 from configparser import ConfigParser, ParsingError
16
17 from qiskit import QISKitError
18 from .credentials import Credentials
19
20 DEFAULT_QISKITRC_FILE = os.path.join(os.path.expanduser("~"),
21 '.qiskit', 'qiskitrc')
22
23
24 def read_credentials_from_qiskitrc(filename=None):
25 """
26 Read a configuration file and return a dict with its sections.
27
28 Args:
29 filename (str): full path to the qiskitrc file. If `None`, the default
30 location is used (`HOME/.qiskit/qiskitrc`).
31
32 Returns:
33 dict: dictionary with the contents of the configuration file, with
34 the form::
35
36 {credential_unique_id: Credentials}
37
38 Raises:
39 QISKitError: if the file was not parseable. Please note that this
40 exception is not raised if the file does not exist (instead, an
41 empty dict is returned).
42 """
43 filename = filename or DEFAULT_QISKITRC_FILE
44 config_parser = ConfigParser()
45 try:
46 config_parser.read(filename)
47 except ParsingError as ex:
48 raise QISKitError(str(ex))
49
50 # Build the credentials dictionary.
51 credentials_dict = OrderedDict()
52 for name in config_parser.sections():
53 single_credentials = dict(config_parser.items(name))
54 # Individually convert keys to their right types.
55 # TODO: consider generalizing, moving to json configuration or a more
56 # robust alternative.
57 if 'proxies' in single_credentials.keys():
58 single_credentials['proxies'] = literal_eval(
59 single_credentials['proxies'])
60 if 'verify' in single_credentials.keys():
61 single_credentials['verify'] = bool(single_credentials['verify'])
62 new_credentials = Credentials(**single_credentials)
63 credentials_dict[new_credentials.unique_id()] = new_credentials
64
65 return credentials_dict
66
67
68 def write_qiskit_rc(credentials, filename=None):
69 """
70 Write credentials to the configuration file.
71
72 Args:
73 credentials (dict): dictionary with the credentials, with the form::
74
75 {credentials_unique_id: Credentials}
76
77 filename (str): full path to the qiskitrc file. If `None`, the default
78 location is used (`HOME/.qiskit/qiskitrc`).
79 """
80 def _credentials_object_to_dict(obj):
81 return {key: getattr(obj, key)
82 for key in ['token', 'url', 'proxies', 'verify']
83 if getattr(obj, key)}
84
85 def _section_name(credentials_):
86 """Return a string suitable for use as a unique section name."""
87 base_name = 'ibmq'
88 if credentials_.is_ibmq():
89 base_name = '{}_{}_{}_{}'.format(base_name, *credentials_.unique_id())
90 return base_name
91
92 filename = filename or DEFAULT_QISKITRC_FILE
93 # Create the directories and the file if not found.
94 os.makedirs(os.path.dirname(filename), exist_ok=True)
95
96 unrolled_credentials = {
97 _section_name(credentials_object): _credentials_object_to_dict(credentials_object)
98 for _, credentials_object in credentials.items()
99 }
100
101 # Write the configuration file.
102 with open(filename, 'w') as config_file:
103 config_parser = ConfigParser()
104 config_parser.read_dict(unrolled_credentials)
105 config_parser.write(config_file)
106
107
108 def store_credentials(credentials, overwrite=False, filename=None):
109 """
110 Store the credentials for a single account in the configuration file.
111
112 Args:
113 credentials (Credentials): credentials instance.
114 overwrite (bool): overwrite existing credentials.
115 filename (str): full path to the qiskitrc file. If `None`, the default
116 location is used (`HOME/.qiskit/qiskitrc`).
117
118 Raises:
119 QISKitError: If credentials already exists and overwrite=False; or if
120 the account_name could not be assigned.
121 """
122 # Read the current providers stored in the configuration file.
123 filename = filename or DEFAULT_QISKITRC_FILE
124 stored_credentials = read_credentials_from_qiskitrc(filename)
125
126 if credentials.unique_id() in stored_credentials and not overwrite:
127 raise QISKitError('Credentials already present and overwrite=False')
128
129 # Append and write the credentials to file.
130 stored_credentials[credentials.unique_id()] = credentials
131 write_qiskit_rc(stored_credentials, filename)
132
133
134 def remove_credentials(credentials, filename=None):
135 """Remove credentials from qiskitrc.
136
137 Args:
138 credentials (Credentials): credentials.
139 filename (str): full path to the qiskitrc file. If `None`, the default
140 location is used (`HOME/.qiskit/qiskitrc`).
141
142 Raises:
143 QISKitError: If there is no account with that name on the configuration
144 file.
145 """
146 # Set the name of the Provider from the class.
147 stored_credentials = read_credentials_from_qiskitrc(filename)
148
149 try:
150 del stored_credentials[credentials.unique_id()]
151 except KeyError:
152 raise QISKitError('The account "%s" does not exist in the '
153 'configuration file')
154 write_qiskit_rc(stored_credentials, filename)
155
[end of qiskit/backends/ibmq/credentials/_configrc.py]
[start of qiskit/backends/ibmq/credentials/credentials.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Model for representing IBM Q credentials."""
9
10 import re
11
12 # Regex that matches a IBMQ URL with hub information
13 REGEX_IBMQ_HUBS = (
14 '(?P<prefix>http[s]://.+/api)'
15 '/Hubs/(?P<hub>[^/]+)/Groups/(?P<group>[^/]+)/Projects/(?P<project>[^/]+)'
16 )
17 # Template for creating an IBMQ URL with hub information
18 TEMPLATE_IBMQ_HUBS = '{prefix}/Hubs/{hub}/Groups/{group}/Projects/{project}'
19
20
21 class Credentials(object):
22 """IBM Q account credentials.
23
24 Note that, by convention, two credentials that have the same hub, group
25 and token (regardless of other attributes) are considered equivalent.
26 The `unique_id()` returns the unique identifier.
27 """
28
29 def __init__(self, token, url, hub=None, group=None, project=None,
30 proxies=None, verify=True):
31 """
32 Args:
33 token (str): Quantum Experience or IBMQ API token.
34 url (str): URL for Quantum Experience or IBMQ.
35 hub (str): the hub used for IBMQ.
36 group (str): the group used for IBMQ.
37 project (str): the project used for IBMQ.
38 proxies (dict): proxy configuration for the API.
39 verify (bool): if False, ignores SSL certificates errors
40
41 Note:
42 `hub`, `group` and `project` are stored as attributes for
43 convenience, but their usage in the API is deprecated. The new-style
44 URLs (that includes these parameters as part of the url string, and
45 is automatically set during instantiation) should be used when
46 communicating with the API.
47 """
48 self.token = token
49 self.url, self.hub, self.group, self.project = _unify_ibmq_url(
50 url, hub, group, project)
51 self.proxies = proxies or {}
52 self.verify = verify
53
54 def is_ibmq(self):
55 """Return whether the credentials represent a IBMQ account."""
56 return all([self.hub, self.group, self.project])
57
58 def __eq__(self, other):
59 return self.__dict__ == other.__dict__
60
61 def unique_id(self):
62 """Return a value that uniquely identifies these credentials.
63
64 By convention, we assume (hub, group, project) is always unique.
65 """
66 return self.hub, self.group, self.project
67
68
69 def _unify_ibmq_url(url, hub=None, group=None, project=None):
70 """Return a new-style set of credential values (url and hub parameters).
71
72 Args:
73 url (str): URL for Quantum Experience or IBM Q.
74 hub (str): the hub used for IBM Q.
75 group (str): the group used for IBM Q.
76 project (str): the project used for IBM Q.
77
78 Returns:
79 tuple[url, hub, group, token]:
80 * url (str): new-style Quantum Experience or IBM Q URL (the hub,
81 group and project included in the URL.
82 * hub (str): the hub used for IBM Q.
83 * group (str): the group used for IBM Q.
84 * project (str): the project used for IBM Q.
85 """
86 # Check if the URL is "new style", and retrieve embedded parameters from it.
87 regex_match = re.match(REGEX_IBMQ_HUBS, url, re.IGNORECASE)
88 if regex_match:
89 _, hub, group, project = regex_match.groups()
90 else:
91 if hub and group and project:
92 # Assume it is an IBMQ URL, and update the url.
93 url = TEMPLATE_IBMQ_HUBS.format(prefix=url, hub=hub, group=group,
94 project=project)
95 else:
96 # Cleanup the hub, group and project, without modifying the url.
97 hub = group = project = None
98
99 return url, hub, group, project
100
[end of qiskit/backends/ibmq/credentials/credentials.py]
[start of qiskit/backends/ibmq/ibmqprovider.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Provider for remote IBMQ backends with admin features."""
9
10 import warnings
11 from collections import OrderedDict
12
13 from qiskit.backends import BaseProvider
14
15 from .credentials._configrc import remove_credentials
16 from .credentials import (Credentials,
17 read_credentials_from_qiskitrc, store_credentials, discover_credentials)
18 from .ibmqaccounterror import IBMQAccountError
19 from .ibmqsingleprovider import IBMQSingleProvider
20
21 QE_URL = 'https://quantumexperience.ng.bluemix.net/api'
22
23
24 class IBMQProvider(BaseProvider):
25 """Provider for remote IBMQ backends with admin features.
26
27 This class is the entry point for handling backends from IBMQ, allowing
28 using different accounts.
29 """
30 def __init__(self):
31 super().__init__()
32
33 # dict[credentials_unique_id: IBMQSingleProvider]
34 # This attribute stores a reference to the different accounts. The
35 # keys are tuples (hub, group, project), as the convention is that
36 # that tuple uniquely identifies a set of credentials.
37 self._accounts = OrderedDict()
38
39 def backends(self, name=None, filters=None, **kwargs):
40 """Return all backends accessible via IBMQ provider, subject to optional filtering.
41
42 Args:
43 name (str): backend name to filter by
44 filters (callable): more complex filters, such as lambda functions
45 e.g. IBMQ.backends(filters=lambda b: b.configuration['n_qubits'] > 5)
46 kwargs: simple filters specifying a true/false criteria in the
47 backend configuration or backend status or provider credentials
48 e.g. IBMQ.backends(n_qubits=5, operational=True, hub='internal')
49
50 Returns:
51 list[IBMQBackend]: list of backends available that match the filter
52
53 Raises:
54 IBMQAccountError: if no account matched the filter.
55 """
56 # pylint: disable=arguments-differ
57
58 # Special handling of the credentials filters: match and prune from kwargs
59 credentials_filter = {}
60 for key in ['token', 'url', 'hub', 'group', 'project', 'proxies', 'verify']:
61 if key in kwargs:
62 credentials_filter[key] = kwargs.pop(key)
63 providers = [provider for provider in self._accounts.values() if
64 self._credentials_match_filter(provider.credentials,
65 credentials_filter)]
66
67 # Special handling of the `name` parameter, to support alias resolution.
68 if name:
69 aliases = self.aliased_backend_names()
70 aliases.update(self.deprecated_backend_names())
71 name = aliases.get(name, name)
72
73 # Aggregate the list of filtered backends.
74 backends = []
75 for provider in providers:
76 backends = backends + provider.backends(
77 name=name, filters=filters, **kwargs)
78
79 return backends
80
81 @staticmethod
82 def deprecated_backend_names():
83 """Returns deprecated backend names."""
84 return {
85 'ibmqx_qasm_simulator': 'ibmq_qasm_simulator',
86 'ibmqx_hpc_qasm_simulator': 'ibmq_qasm_simulator',
87 'real': 'ibmqx1'
88 }
89
90 @staticmethod
91 def aliased_backend_names():
92 """Returns aliased backend names."""
93 return {
94 'ibmq_5_yorktown': 'ibmqx2',
95 'ibmq_5_tenerife': 'ibmqx4',
96 'ibmq_16_rueschlikon': 'ibmqx5',
97 'ibmq_20_austin': 'QS1_1'
98 }
99
100 def enable_account(self, token, url=QE_URL, **kwargs):
101 """Authenticate a new IBMQ account and add for use during this session.
102
103 Login into Quantum Experience or IBMQ using the provided credentials,
104 adding the account to the current session. The account is not stored
105 in disk.
106
107 Args:
108 token (str): Quantum Experience or IBM Q API token.
109 url (str): URL for Quantum Experience or IBM Q (for IBM Q,
110 including the hub, group and project in the URL).
111 **kwargs (dict):
112 * proxies (dict): Proxy configuration for the API.
113 * verify (bool): If False, ignores SSL certificates errors
114 """
115 credentials = Credentials(token, url, **kwargs)
116
117 self._append_account(credentials)
118
119 def save_account(self, token, url=QE_URL, **kwargs):
120 """Save the account to disk for future use.
121
122 Login into Quantum Experience or IBMQ using the provided credentials,
123 adding the account to the current session. The account is stored in
124 disk for future use.
125
126 Args:
127 token (str): Quantum Experience or IBM Q API token.
128 url (str): URL for Quantum Experience or IBM Q (for IBM Q,
129 including the hub, group and project in the URL).
130 **kwargs (dict):
131 * proxies (dict): Proxy configuration for the API.
132 * verify (bool): If False, ignores SSL certificates errors
133 """
134 credentials = Credentials(token, url, **kwargs)
135
136 # Check if duplicated credentials are already stored. By convention,
137 # we assume (hub, group, project) is always unique.
138 stored_credentials = read_credentials_from_qiskitrc()
139
140 if credentials.unique_id() in stored_credentials.keys():
141 warnings.warn('Credentials are already stored.')
142 else:
143 store_credentials(credentials)
144
145 def active_accounts(self):
146 """List all accounts currently in the session.
147
148 Returns:
149 list[dict]: a list with information about the accounts currently
150 in the session.
151 """
152 information = []
153 for provider in self._accounts.values():
154 information.append({
155 'token': provider.credentials.token,
156 'url': provider.credentials.url,
157 })
158
159 return information
160
161 def stored_accounts(self):
162 """List all accounts stored to disk.
163
164 Returns:
165 list[dict]: a list with information about the accounts stored
166 on disk.
167 """
168 information = []
169 stored_creds = read_credentials_from_qiskitrc()
170 for creds in stored_creds:
171 information.append({
172 'token': stored_creds[creds].token,
173 'url': stored_creds[creds].url
174 })
175
176 return information
177
178 def load_accounts(self, **kwargs):
179 """Load IBMQ accounts found in the system into current session,
180 subject to optional filtering.
181
182 Automatically load the accounts found in the system. This method
183 looks for credentials in the following locations, in order, and
184 returns as soon as credentials are found:
185
186 1. in the `Qconfig.py` file in the current working directory.
187 2. in the environment variables.
188 3. in the `qiskitrc` configuration file
189
190 Raises:
191 IBMQAccountError: if no credentials are found.
192 """
193 for credentials in discover_credentials().values():
194 if self._credentials_match_filter(credentials, kwargs):
195 self._append_account(credentials)
196
197 if not self._accounts:
198 raise IBMQAccountError('No IBMQ credentials found on disk.')
199
200 def disable_accounts(self, **kwargs):
201 """Disable accounts in the current session, subject to optional filtering.
202
203 The filter kwargs can be `token`, `url`, `hub`, `group`, `project`.
204 If no filter is passed, all accounts in the current session will be disabled.
205
206 Raises:
207 IBMQAccountError: if no account matched the filter.
208 """
209 disabled = False
210
211 # Try to remove from session.
212 current_creds = self._accounts.copy()
213 for creds in current_creds:
214 credentials = Credentials(current_creds[creds].credentials.token,
215 current_creds[creds].credentials.url)
216 if self._credentials_match_filter(credentials, kwargs):
217 del self._accounts[credentials.unique_id()]
218 disabled = True
219
220 if not disabled:
221 raise IBMQAccountError('No matching account to disable in current session.')
222
223 def delete_accounts(self, **kwargs):
224 """Delete saved accounts from disk, subject to optional filtering.
225
226 The filter kwargs can be `token`, `url`, `hub`, `group`, `project`.
227 If no filter is passed, all accounts will be deleted from disk.
228
229 Raises:
230 IBMQAccountError: if no account matched the filter.
231 """
232 deleted = False
233
234 # Try to delete from disk.
235 stored_creds = read_credentials_from_qiskitrc()
236 for creds in stored_creds:
237 credentials = Credentials(stored_creds[creds].token,
238 stored_creds[creds].url)
239 if self._credentials_match_filter(credentials, kwargs):
240 remove_credentials(credentials)
241 deleted = True
242
243 if not deleted:
244 raise IBMQAccountError('No matching account to delete from disk.')
245
246 def _append_account(self, credentials):
247 """Append an account with the specified credentials to the session.
248
249 Args:
250 credentials (Credentials): set of credentials.
251
252 Returns:
253 IBMQSingleProvider: new single-account provider.
254 """
255 # Check if duplicated credentials are already in use. By convention,
256 # we assume (hub, group, project) is always unique.
257 if credentials.unique_id() in self._accounts.keys():
258 warnings.warn('Credentials are already in use.')
259
260 single_provider = IBMQSingleProvider(credentials, self)
261 self._accounts[credentials.unique_id()] = single_provider
262
263 return single_provider
264
265 def _credentials_match_filter(self, credentials, filter_dict):
266 """Return True if the credentials match a filter.
267
268 These filters apply on properties of a Credentials object:
269 token, url, hub, group, project, proxies, verify
270 Any other filter has no effect.
271
272 Args:
273 credentials (Credentials): IBMQ credentials object
274 filter_dict (dict): dictionary of filter conditions
275
276 Returns:
277 bool: True if the credentials meet all the filter conditions
278 """
279 return all(getattr(credentials, key_, None) == value_ for
280 key_, value_ in filter_dict.items())
281
[end of qiskit/backends/ibmq/ibmqprovider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| Qiskit/qiskit | 77dc51b93e7312bbff8f5acf7d8242232bd6624f | credentials failed for qiskit ver 0.6.1
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit Terra version**: 0.6.1
- **Python version**: 3.7.0
- **Operating system**:MAC OSX 10.13.6
### What is the current behavior?
After I acquired fresh token from https://quantumexperience.ng.bluemix.net/qx/account/advanced
IBMQ.load_accounts() fails.
### Steps to reproduce the problem
```
from qiskit import IBMQ
myToken='b6abe11442c9a...'
IBMQ.save_account(myToken)
IBMQ.load_accounts()
```
Results with
```
Traceback (most recent call last):
File "/anaconda3/lib/python3.7/site-packages/qiskit/backends/ibmq/ibmqsingleprovider.py", line 71, in _authenticate
credentials.verify)
File "/anaconda3/lib/python3.7/site-packages/IBMQuantumExperience/IBMQuantumExperience.py", line 478, in __init__
self.req = _Request(token, config=config, verify=verify)
File "/anaconda3/lib/python3.7/site-packages/IBMQuantumExperience/IBMQuantumExperience.py", line 253, in __init__
ntlm_credentials=self.ntlm_credentials)
File "/anaconda3/lib/python3.7/site-packages/IBMQuantumExperience/IBMQuantumExperience.py", line 95, in __init__
self.obtain_token(config=self.config)
File "/anaconda3/lib/python3.7/site-packages/IBMQuantumExperience/IBMQuantumExperience.py", line 159, in obtain_token
raise CredentialsError('error during login: %s' % error_message)
IBMQuantumExperience.IBMQuantumExperience.CredentialsError: error during login: Wrong user or password, check your credentials.
```
### What is the expected behavior?
Would be better if IBMQ.load_accounts() accepted me. All worked well w/ ver 0.5.
### Suggested solutions
| Can you try enable_account or regenerating the token. Your code should work. If you type `IBMQ.stored_accounts()` do you see the account.
@pacomf I can confirm this has happened to me today as well.
I cant reproduce the bug, i regenerate my APIToken and it works fine using qiskit terra... is it still happening? Can you send me more details?
It happened for about 5 hours on the weekend. However, @nonhermitian could run at the same time and then it started working again.
Mmmm, maybe an issue with the API... we will investigate it
I will add that, when it happened to me, I could log into some accounts and not others.
Hi @jaygambetta,
your tip helped. IBMQ.stored_accounts() has returned some old token, not the new one.
Looks like IBMQ.save_account(myToken) is unable to replace token if it exist - I leave it to you to decide if it is a bug or a feature.
My hack around it is to execute first: IBMQ.delete_accounts()
to clear my old token. So this sequence always works:
`
IBMQ.delete_accounts()
myToken='b6abe11442c9a...'
IBMQ.save_account(myToken)
IBMQ.load_accounts()
`
I can move on, closing thus ticket.
Thanks for help
Jan
Let's leave this open and investigate whether there's a bug with `IBMQ.save_account()` re-writing old tokens.
@diego-plan9 can you please have a look?
Yes - thanks @balewski for the information, which is spot on - currently, `IBMQ.save_account()` will just print a warning and do nothing else if old credentials are present:
https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/backends/ibmq/ibmqprovider.py#L140-L143
> Looks like IBMQ.save_account(myToken) is unable to replace token if it exist - I leave it to you to decide if it is a bug or a feature.
Actually ... I can't decide if it is a bug or a feature either! :thinking: In the original draft implementation, the `.save_account()` (`.add_account()` by that time) method was [raising an Exception](https://github.com/Qiskit/qiskit-terra/blob/746245e29c5cadc44dc37851b19a4150b4e86cd8/qiskit/backends/ibmq/ibmqprovider.py#L111) in the case of trying to store a duplicate account. This was later changed to a warning, I'm unsure if by design and as a hard requisite from Jupyter-users needs, or also related to the slight tuning of the method functionality (ie. not authenticating during the call, just storing in disk). So I'm actually de-assigning myself, as probably the rest of the team has a more fresh view of the design decisions related to #1000.
I think we have several options:
* consider that not overwriting and raising a warning is indeed the desired behavior: the main drawback is that the warning might be easy to miss (and was probably the source of confusion in this issue).
* tune the method a bit in order to accept an `overwrite=True` optional parameter or a similar approach: the `credentials` module already has the needed parts in place, the main drawback would be that we touch a bit the public API.
* be a bit more restrictive and promote the warning back to an exception: it might affect users running the method twice and already used to not raising a warning (ie. maybe notebook users).
One way or the other, I think we need to make sure that the flow for updating an existing stored token is a bit smoother than the delete-save workaround proposed by @balewski, as it seems a relatively common use case.
From external user perspective:
It happens rather often that the ibmq_16_melbourne or even sometimes ibmqx4 does not accept the job, throws some 'general error', despite your web-page says both hardwares are operational.
Then, it is natural to (wrongly?) guess perhaps my token is invalid.
Then, I'd ask for a new token and try to use it - hoping it will help.
For such train of though the natural solution is assume 'user knows what he wants'. If user wants to replace the token by calling save_account(), just replace it. You can issue a warning that there was an old token (still valid), but why not just replace token each time user calls IBMQ.save_account(myToken) ?
Would this have any negative effect on your end?
Thanks
Jan
I think save_account should not raise an exception. Overwriting is not bad behavior. Similar to overwriting a key in a dict or something. Should just work.
@ajavadia is there an update.
Hi,
there is some inconsistency between the devices status you show here:
https://quantumexperience.ng.bluemix.net/qx/account/advanced
and actual avaliability.
At this moment, both ibmqx4 and ibmq_16_melbourne are reported to work.
However,. when I try to submit my circuit using Qiskit ver: 0.6.1 I get the error below for either.
Got a 400 code response to https://quantumexperience.ng.bluemix.net/api/Jobs?access_token=VCgYWnMUUBaYeT5gSmGO14cX93Foo4rccsLUVvIjf3bwYEZNjxlDcRmPArS2wZ25: {"error":{"status":400,"message":"Generic error","code":"GENERIC_ERROR"}}
Note, my token is correct, because I can submit the circuit to your simulator
'backend': 'ibmq_qasm_simulator',
'jobId2': '1814808',
'startTime': '2018-11-09 17:53:28'}
Can you have a look ?
Thanks
Jan | 2018-11-19T08:27:15Z | <patch>
diff --git a/qiskit/backends/ibmq/credentials/_configrc.py b/qiskit/backends/ibmq/credentials/_configrc.py
--- a/qiskit/backends/ibmq/credentials/_configrc.py
+++ b/qiskit/backends/ibmq/credentials/_configrc.py
@@ -9,6 +9,7 @@
Utilities for reading and writing credentials from and to configuration files.
"""
+import warnings
import os
from ast import literal_eval
from collections import OrderedDict
@@ -116,15 +117,17 @@ def store_credentials(credentials, overwrite=False, filename=None):
location is used (`HOME/.qiskit/qiskitrc`).
Raises:
- QISKitError: If credentials already exists and overwrite=False; or if
- the account_name could not be assigned.
+ QISKitError: if the account_name could not be assigned.
"""
# Read the current providers stored in the configuration file.
filename = filename or DEFAULT_QISKITRC_FILE
stored_credentials = read_credentials_from_qiskitrc(filename)
+ # Check if duplicated credentials are already stored. By convention,
+ # we assume (hub, group, project) is always unique.
if credentials.unique_id() in stored_credentials and not overwrite:
- raise QISKitError('Credentials already present and overwrite=False')
+ warnings.warn('Credentials already present. Set overwrite=True to overwrite.')
+ return
# Append and write the credentials to file.
stored_credentials[credentials.unique_id()] = credentials
diff --git a/qiskit/backends/ibmq/credentials/credentials.py b/qiskit/backends/ibmq/credentials/credentials.py
--- a/qiskit/backends/ibmq/credentials/credentials.py
+++ b/qiskit/backends/ibmq/credentials/credentials.py
@@ -22,7 +22,7 @@ class Credentials(object):
"""IBM Q account credentials.
Note that, by convention, two credentials that have the same hub, group
- and token (regardless of other attributes) are considered equivalent.
+ and project (regardless of other attributes) are considered equivalent.
The `unique_id()` returns the unique identifier.
"""
diff --git a/qiskit/backends/ibmq/ibmqprovider.py b/qiskit/backends/ibmq/ibmqprovider.py
--- a/qiskit/backends/ibmq/ibmqprovider.py
+++ b/qiskit/backends/ibmq/ibmqprovider.py
@@ -116,7 +116,7 @@ def enable_account(self, token, url=QE_URL, **kwargs):
self._append_account(credentials)
- def save_account(self, token, url=QE_URL, **kwargs):
+ def save_account(self, token, url=QE_URL, overwrite=False, **kwargs):
"""Save the account to disk for future use.
Login into Quantum Experience or IBMQ using the provided credentials,
@@ -127,20 +127,13 @@ def save_account(self, token, url=QE_URL, **kwargs):
token (str): Quantum Experience or IBM Q API token.
url (str): URL for Quantum Experience or IBM Q (for IBM Q,
including the hub, group and project in the URL).
+ overwrite (bool): overwrite existing credentials.
**kwargs (dict):
* proxies (dict): Proxy configuration for the API.
* verify (bool): If False, ignores SSL certificates errors
"""
credentials = Credentials(token, url, **kwargs)
-
- # Check if duplicated credentials are already stored. By convention,
- # we assume (hub, group, project) is always unique.
- stored_credentials = read_credentials_from_qiskitrc()
-
- if credentials.unique_id() in stored_credentials.keys():
- warnings.warn('Credentials are already stored.')
- else:
- store_credentials(credentials)
+ store_credentials(credentials, overwrite=overwrite)
def active_accounts(self):
"""List all accounts currently in the session.
</patch> | [] | [] | |||
docker__compose-6410 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade `events` to use the new API fields
In API version 1.22 the events structure was updated to include labels and better names for fields.
We should update to the new field names, and use labels directly from the event, instead of having to query for them with inspect.
Ref: https://github.com/docker/docker/pull/18888
"container destroyed" event is not published
When using "docker-compose events" container destroyed events are not shown, even though they are generated by docker itself.
**Output of `docker-compose version`**
```
docker-compose version 1.23.1, build b02f1306
docker-py version: 3.5.0
CPython version: 3.6.6
OpenSSL version: OpenSSL 1.1.0h 27 Mar 2018
```
**Output of `docker version`**
```
Client: Docker Engine - Community
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:47:43 2018
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:55:00 2018
OS/Arch: linux/amd64
Experimental: false
```
**Output of `docker-compose config`**
(Make sure to add the relevant `-f` and other flags)
```
services:
hello:
image: hello-world
version: '3.0'
```
## Steps to reproduce the issue
Compare:
1. docker-compose events &
2. docker-compose up
3. docker-compose down
with
1. docker events --filter=type=container &
2. docker-compose up
3. docker-compose down
### Observed result
```
2018-11-22 15:14:42.067545 container create 9767377cf6a576b73632c8ab7defb58c0dac51ec9212f09d862b5c288c59a8f7 (image=hello-world, name=docker-composer_hello_1_e3895d8ea937)
2018-11-22 15:14:42.084831 container attach 9767377cf6a576b73632c8ab7defb58c0dac51ec9212f09d862b5c288c59a8f7 (image=hello-world, name=docker-composer_hello_1_e3895d8ea937)
2018-11-22 15:14:42.654275 container start 9767377cf6a576b73632c8ab7defb58c0dac51ec9212f09d862b5c288c59a8f7 (image=hello-world, name=docker-composer_hello_1_e3895d8ea937)
2018-11-22 15:14:42.774647 container die 9767377cf6a576b73632c8ab7defb58c0dac51ec9212f09d862b5c288c59a8f7 (image=hello-world, name=docker-composer_hello_1_e3895d8ea937)
```
### Expected result
```
2018-11-22T15:15:18.112918200+01:00 container create 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
2018-11-22T15:15:18.127280300+01:00 container attach 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
2018-11-22T15:15:18.683667800+01:00 container start 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
2018-11-22T15:15:18.821344800+01:00 container die 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, exitCode=0, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
2018-11-22T15:15:22.065739100+01:00 container destroy 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
```
</issue>
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 ![Docker Compose](logo.png?raw=true "Docker Compose Logo")
4
5 Compose is a tool for defining and running multi-container Docker applications.
6 With Compose, you use a Compose file to configure your application's services.
7 Then, using a single command, you create and start all the services
8 from your configuration. To learn more about all the features of Compose
9 see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
10
11 Compose is great for development, testing, and staging environments, as well as
12 CI workflows. You can learn more about each case in
13 [Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases).
14
15 Using Compose is basically a three-step process.
16
17 1. Define your app's environment with a `Dockerfile` so it can be
18 reproduced anywhere.
19 2. Define the services that make up your app in `docker-compose.yml` so
20 they can be run together in an isolated environment.
21 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
22
23 A `docker-compose.yml` looks like this:
24
25 version: '2'
26
27 services:
28 web:
29 build: .
30 ports:
31 - "5000:5000"
32 volumes:
33 - .:/code
34 redis:
35 image: redis
36
37 For more information about the Compose file, see the
38 [Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md).
39
40 Compose has commands for managing the whole lifecycle of your application:
41
42 * Start, stop and rebuild services
43 * View the status of running services
44 * Stream the log output of running services
45 * Run a one-off command on a service
46
47 Installation and documentation
48 ------------------------------
49
50 - Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
51 - Code repository for Compose is on [GitHub](https://github.com/docker/compose).
52 - If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new/choose). Thank you!
53
54 Contributing
55 ------------
56
57 [![Build Status](https://jenkins.dockerproject.org/buildStatus/icon?job=docker/compose/master)](https://jenkins.dockerproject.org/job/docker/job/compose/job/master/)
58
59 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
60
61 Releasing
62 ---------
63
64 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md).
65
[end of README.md]
[start of compose/cli/log_printer.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import sys
5 from collections import namedtuple
6 from itertools import cycle
7 from threading import Thread
8
9 from docker.errors import APIError
10 from six.moves import _thread as thread
11 from six.moves.queue import Empty
12 from six.moves.queue import Queue
13
14 from . import colors
15 from compose import utils
16 from compose.cli.signals import ShutdownException
17 from compose.utils import split_buffer
18
19
20 class LogPresenter(object):
21
22 def __init__(self, prefix_width, color_func):
23 self.prefix_width = prefix_width
24 self.color_func = color_func
25
26 def present(self, container, line):
27 prefix = container.name_without_project.ljust(self.prefix_width)
28 return '{prefix} {line}'.format(
29 prefix=self.color_func(prefix + ' |'),
30 line=line)
31
32
33 def build_log_presenters(service_names, monochrome):
34 """Return an iterable of functions.
35
36 Each function can be used to format the logs output of a container.
37 """
38 prefix_width = max_name_width(service_names)
39
40 def no_color(text):
41 return text
42
43 for color_func in cycle([no_color] if monochrome else colors.rainbow()):
44 yield LogPresenter(prefix_width, color_func)
45
46
47 def max_name_width(service_names, max_index_width=3):
48 """Calculate the maximum width of container names so we can make the log
49 prefixes line up like so:
50
51 db_1 | Listening
52 web_1 | Listening
53 """
54 return max(len(name) for name in service_names) + max_index_width
55
56
57 class LogPrinter(object):
58 """Print logs from many containers to a single output stream."""
59
60 def __init__(self,
61 containers,
62 presenters,
63 event_stream,
64 output=sys.stdout,
65 cascade_stop=False,
66 log_args=None):
67 self.containers = containers
68 self.presenters = presenters
69 self.event_stream = event_stream
70 self.output = utils.get_output_stream(output)
71 self.cascade_stop = cascade_stop
72 self.log_args = log_args or {}
73
74 def run(self):
75 if not self.containers:
76 return
77
78 queue = Queue()
79 thread_args = queue, self.log_args
80 thread_map = build_thread_map(self.containers, self.presenters, thread_args)
81 start_producer_thread((
82 thread_map,
83 self.event_stream,
84 self.presenters,
85 thread_args))
86
87 for line in consume_queue(queue, self.cascade_stop):
88 remove_stopped_threads(thread_map)
89
90 if self.cascade_stop:
91 matching_container = [cont.name for cont in self.containers if cont.name == line]
92 if line in matching_container:
93 # Returning the name of the container that started the
94 # the cascade_stop so we can return the correct exit code
95 return line
96
97 if not line:
98 if not thread_map:
99 # There are no running containers left to tail, so exit
100 return
101 # We got an empty line because of a timeout, but there are still
102 # active containers to tail, so continue
103 continue
104
105 self.write(line)
106
107 def write(self, line):
108 try:
109 self.output.write(line)
110 except UnicodeEncodeError:
111 # This may happen if the user's locale settings don't support UTF-8
112 # and UTF-8 characters are present in the log line. The following
113 # will output a "degraded" log with unsupported characters
114 # replaced by `?`
115 self.output.write(line.encode('ascii', 'replace').decode())
116 self.output.flush()
117
118
119 def remove_stopped_threads(thread_map):
120 for container_id, tailer_thread in list(thread_map.items()):
121 if not tailer_thread.is_alive():
122 thread_map.pop(container_id, None)
123
124
125 def build_thread(container, presenter, queue, log_args):
126 tailer = Thread(
127 target=tail_container_logs,
128 args=(container, presenter, queue, log_args))
129 tailer.daemon = True
130 tailer.start()
131 return tailer
132
133
134 def build_thread_map(initial_containers, presenters, thread_args):
135 return {
136 container.id: build_thread(container, next(presenters), *thread_args)
137 for container in initial_containers
138 }
139
140
141 class QueueItem(namedtuple('_QueueItem', 'item is_stop exc')):
142
143 @classmethod
144 def new(cls, item):
145 return cls(item, None, None)
146
147 @classmethod
148 def exception(cls, exc):
149 return cls(None, None, exc)
150
151 @classmethod
152 def stop(cls, item=None):
153 return cls(item, True, None)
154
155
156 def tail_container_logs(container, presenter, queue, log_args):
157 generator = get_log_generator(container)
158
159 try:
160 for item in generator(container, log_args):
161 queue.put(QueueItem.new(presenter.present(container, item)))
162 except Exception as e:
163 queue.put(QueueItem.exception(e))
164 return
165 if log_args.get('follow'):
166 queue.put(QueueItem.new(presenter.color_func(wait_on_exit(container))))
167 queue.put(QueueItem.stop(container.name))
168
169
170 def get_log_generator(container):
171 if container.has_api_logs:
172 return build_log_generator
173 return build_no_log_generator
174
175
176 def build_no_log_generator(container, log_args):
177 """Return a generator that prints a warning about logs and waits for
178 container to exit.
179 """
180 yield "WARNING: no logs are available with the '{}' log driver\n".format(
181 container.log_driver)
182
183
184 def build_log_generator(container, log_args):
185 # if the container doesn't have a log_stream we need to attach to container
186 # before log printer starts running
187 if container.log_stream is None:
188 stream = container.logs(stdout=True, stderr=True, stream=True, **log_args)
189 else:
190 stream = container.log_stream
191
192 return split_buffer(stream)
193
194
195 def wait_on_exit(container):
196 try:
197 exit_code = container.wait()
198 return "%s exited with code %s\n" % (container.name, exit_code)
199 except APIError as e:
200 return "Unexpected API error for %s (HTTP code %s)\nResponse body:\n%s\n" % (
201 container.name, e.response.status_code,
202 e.response.text or '[empty]'
203 )
204
205
206 def start_producer_thread(thread_args):
207 producer = Thread(target=watch_events, args=thread_args)
208 producer.daemon = True
209 producer.start()
210
211
212 def watch_events(thread_map, event_stream, presenters, thread_args):
213 crashed_containers = set()
214 for event in event_stream:
215 if event['action'] == 'stop':
216 thread_map.pop(event['id'], None)
217
218 if event['action'] == 'die':
219 thread_map.pop(event['id'], None)
220 crashed_containers.add(event['id'])
221
222 if event['action'] != 'start':
223 continue
224
225 if event['id'] in thread_map:
226 if thread_map[event['id']].is_alive():
227 continue
228 # Container was stopped and started, we need a new thread
229 thread_map.pop(event['id'], None)
230
231 # Container crashed so we should reattach to it
232 if event['id'] in crashed_containers:
233 event['container'].attach_log_stream()
234 crashed_containers.remove(event['id'])
235
236 thread_map[event['id']] = build_thread(
237 event['container'],
238 next(presenters),
239 *thread_args)
240
241
242 def consume_queue(queue, cascade_stop):
243 """Consume the queue by reading lines off of it and yielding them."""
244 while True:
245 try:
246 item = queue.get(timeout=0.1)
247 except Empty:
248 yield None
249 continue
250 # See https://github.com/docker/compose/issues/189
251 except thread.error:
252 raise ShutdownException()
253
254 if item.exc:
255 raise item.exc
256
257 if item.is_stop and not cascade_stop:
258 continue
259
260 yield item.item
261
[end of compose/cli/log_printer.py]
[start of compose/const.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import sys
5
6 from .version import ComposeVersion
7
8 DEFAULT_TIMEOUT = 10
9 HTTP_TIMEOUT = 60
10 IMAGE_EVENTS = ['delete', 'import', 'load', 'pull', 'push', 'save', 'tag', 'untag']
11 IS_WINDOWS_PLATFORM = (sys.platform == "win32")
12 LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
13 LABEL_ONE_OFF = 'com.docker.compose.oneoff'
14 LABEL_PROJECT = 'com.docker.compose.project'
15 LABEL_SERVICE = 'com.docker.compose.service'
16 LABEL_NETWORK = 'com.docker.compose.network'
17 LABEL_VERSION = 'com.docker.compose.version'
18 LABEL_SLUG = 'com.docker.compose.slug'
19 LABEL_VOLUME = 'com.docker.compose.volume'
20 LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'
21 NANOCPUS_SCALE = 1000000000
22 PARALLEL_LIMIT = 64
23
24 SECRETS_PATH = '/run/secrets'
25 WINDOWS_LONGPATH_PREFIX = '\\\\?\\'
26
27 COMPOSEFILE_V1 = ComposeVersion('1')
28 COMPOSEFILE_V2_0 = ComposeVersion('2.0')
29 COMPOSEFILE_V2_1 = ComposeVersion('2.1')
30 COMPOSEFILE_V2_2 = ComposeVersion('2.2')
31 COMPOSEFILE_V2_3 = ComposeVersion('2.3')
32 COMPOSEFILE_V2_4 = ComposeVersion('2.4')
33
34 COMPOSEFILE_V3_0 = ComposeVersion('3.0')
35 COMPOSEFILE_V3_1 = ComposeVersion('3.1')
36 COMPOSEFILE_V3_2 = ComposeVersion('3.2')
37 COMPOSEFILE_V3_3 = ComposeVersion('3.3')
38 COMPOSEFILE_V3_4 = ComposeVersion('3.4')
39 COMPOSEFILE_V3_5 = ComposeVersion('3.5')
40 COMPOSEFILE_V3_6 = ComposeVersion('3.6')
41 COMPOSEFILE_V3_7 = ComposeVersion('3.7')
42
43 API_VERSIONS = {
44 COMPOSEFILE_V1: '1.21',
45 COMPOSEFILE_V2_0: '1.22',
46 COMPOSEFILE_V2_1: '1.24',
47 COMPOSEFILE_V2_2: '1.25',
48 COMPOSEFILE_V2_3: '1.30',
49 COMPOSEFILE_V2_4: '1.35',
50 COMPOSEFILE_V3_0: '1.25',
51 COMPOSEFILE_V3_1: '1.25',
52 COMPOSEFILE_V3_2: '1.25',
53 COMPOSEFILE_V3_3: '1.30',
54 COMPOSEFILE_V3_4: '1.30',
55 COMPOSEFILE_V3_5: '1.30',
56 COMPOSEFILE_V3_6: '1.36',
57 COMPOSEFILE_V3_7: '1.38',
58 }
59
60 API_VERSION_TO_ENGINE_VERSION = {
61 API_VERSIONS[COMPOSEFILE_V1]: '1.9.0',
62 API_VERSIONS[COMPOSEFILE_V2_0]: '1.10.0',
63 API_VERSIONS[COMPOSEFILE_V2_1]: '1.12.0',
64 API_VERSIONS[COMPOSEFILE_V2_2]: '1.13.0',
65 API_VERSIONS[COMPOSEFILE_V2_3]: '17.06.0',
66 API_VERSIONS[COMPOSEFILE_V2_4]: '17.12.0',
67 API_VERSIONS[COMPOSEFILE_V3_0]: '1.13.0',
68 API_VERSIONS[COMPOSEFILE_V3_1]: '1.13.0',
69 API_VERSIONS[COMPOSEFILE_V3_2]: '1.13.0',
70 API_VERSIONS[COMPOSEFILE_V3_3]: '17.06.0',
71 API_VERSIONS[COMPOSEFILE_V3_4]: '17.06.0',
72 API_VERSIONS[COMPOSEFILE_V3_5]: '17.06.0',
73 API_VERSIONS[COMPOSEFILE_V3_6]: '18.02.0',
74 API_VERSIONS[COMPOSEFILE_V3_7]: '18.06.0',
75 }
76
[end of compose/const.py]
[start of compose/project.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import datetime
5 import logging
6 import operator
7 import re
8 from functools import reduce
9
10 import enum
11 import six
12 from docker.errors import APIError
13
14 from . import parallel
15 from .config import ConfigurationError
16 from .config.config import V1
17 from .config.sort_services import get_container_name_from_network_mode
18 from .config.sort_services import get_service_name_from_network_mode
19 from .const import IMAGE_EVENTS
20 from .const import LABEL_ONE_OFF
21 from .const import LABEL_PROJECT
22 from .const import LABEL_SERVICE
23 from .container import Container
24 from .network import build_networks
25 from .network import get_networks
26 from .network import ProjectNetworks
27 from .service import BuildAction
28 from .service import ContainerNetworkMode
29 from .service import ContainerPidMode
30 from .service import ConvergenceStrategy
31 from .service import NetworkMode
32 from .service import parse_repository_tag
33 from .service import PidMode
34 from .service import Service
35 from .service import ServiceNetworkMode
36 from .service import ServicePidMode
37 from .utils import microseconds_from_time_nano
38 from .utils import truncate_string
39 from .volume import ProjectVolumes
40
41
42 log = logging.getLogger(__name__)
43
44
45 @enum.unique
46 class OneOffFilter(enum.Enum):
47 include = 0
48 exclude = 1
49 only = 2
50
51 @classmethod
52 def update_labels(cls, value, labels):
53 if value == cls.only:
54 labels.append('{0}={1}'.format(LABEL_ONE_OFF, "True"))
55 elif value == cls.exclude:
56 labels.append('{0}={1}'.format(LABEL_ONE_OFF, "False"))
57 elif value == cls.include:
58 pass
59 else:
60 raise ValueError("Invalid value for one_off: {}".format(repr(value)))
61
62
63 class Project(object):
64 """
65 A collection of services.
66 """
67 def __init__(self, name, services, client, networks=None, volumes=None, config_version=None):
68 self.name = name
69 self.services = services
70 self.client = client
71 self.volumes = volumes or ProjectVolumes({})
72 self.networks = networks or ProjectNetworks({}, False)
73 self.config_version = config_version
74
75 def labels(self, one_off=OneOffFilter.exclude, legacy=False):
76 name = self.name
77 if legacy:
78 name = re.sub(r'[_-]', '', name)
79 labels = ['{0}={1}'.format(LABEL_PROJECT, name)]
80
81 OneOffFilter.update_labels(one_off, labels)
82 return labels
83
84 @classmethod
85 def from_config(cls, name, config_data, client, default_platform=None):
86 """
87 Construct a Project from a config.Config object.
88 """
89 use_networking = (config_data.version and config_data.version != V1)
90 networks = build_networks(name, config_data, client)
91 project_networks = ProjectNetworks.from_services(
92 config_data.services,
93 networks,
94 use_networking)
95 volumes = ProjectVolumes.from_config(name, config_data, client)
96 project = cls(name, [], client, project_networks, volumes, config_data.version)
97
98 for service_dict in config_data.services:
99 service_dict = dict(service_dict)
100 if use_networking:
101 service_networks = get_networks(service_dict, networks)
102 else:
103 service_networks = {}
104
105 service_dict.pop('networks', None)
106 links = project.get_links(service_dict)
107 network_mode = project.get_network_mode(
108 service_dict, list(service_networks.keys())
109 )
110 pid_mode = project.get_pid_mode(service_dict)
111 volumes_from = get_volumes_from(project, service_dict)
112
113 if config_data.version != V1:
114 service_dict['volumes'] = [
115 volumes.namespace_spec(volume_spec)
116 for volume_spec in service_dict.get('volumes', [])
117 ]
118
119 secrets = get_secrets(
120 service_dict['name'],
121 service_dict.pop('secrets', None) or [],
122 config_data.secrets)
123
124 project.services.append(
125 Service(
126 service_dict.pop('name'),
127 client=client,
128 project=name,
129 use_networking=use_networking,
130 networks=service_networks,
131 links=links,
132 network_mode=network_mode,
133 volumes_from=volumes_from,
134 secrets=secrets,
135 pid_mode=pid_mode,
136 platform=service_dict.pop('platform', None),
137 default_platform=default_platform,
138 **service_dict)
139 )
140
141 return project
142
143 @property
144 def service_names(self):
145 return [service.name for service in self.services]
146
147 def get_service(self, name):
148 """
149 Retrieve a service by name. Raises NoSuchService
150 if the named service does not exist.
151 """
152 for service in self.services:
153 if service.name == name:
154 return service
155
156 raise NoSuchService(name)
157
158 def validate_service_names(self, service_names):
159 """
160 Validate that the given list of service names only contains valid
161 services. Raises NoSuchService if one of the names is invalid.
162 """
163 valid_names = self.service_names
164 for name in service_names:
165 if name not in valid_names:
166 raise NoSuchService(name)
167
168 def get_services(self, service_names=None, include_deps=False):
169 """
170 Returns a list of this project's services filtered
171 by the provided list of names, or all services if service_names is None
172 or [].
173
174 If include_deps is specified, returns a list including the dependencies for
175 service_names, in order of dependency.
176
177 Preserves the original order of self.services where possible,
178 reordering as needed to resolve dependencies.
179
180 Raises NoSuchService if any of the named services do not exist.
181 """
182 if service_names is None or len(service_names) == 0:
183 service_names = self.service_names
184
185 unsorted = [self.get_service(name) for name in service_names]
186 services = [s for s in self.services if s in unsorted]
187
188 if include_deps:
189 services = reduce(self._inject_deps, services, [])
190
191 uniques = []
192 [uniques.append(s) for s in services if s not in uniques]
193
194 return uniques
195
196 def get_services_without_duplicate(self, service_names=None, include_deps=False):
197 services = self.get_services(service_names, include_deps)
198 for service in services:
199 service.remove_duplicate_containers()
200 return services
201
202 def get_links(self, service_dict):
203 links = []
204 if 'links' in service_dict:
205 for link in service_dict.get('links', []):
206 if ':' in link:
207 service_name, link_name = link.split(':', 1)
208 else:
209 service_name, link_name = link, None
210 try:
211 links.append((self.get_service(service_name), link_name))
212 except NoSuchService:
213 raise ConfigurationError(
214 'Service "%s" has a link to service "%s" which does not '
215 'exist.' % (service_dict['name'], service_name))
216 del service_dict['links']
217 return links
218
219 def get_network_mode(self, service_dict, networks):
220 network_mode = service_dict.pop('network_mode', None)
221 if not network_mode:
222 if self.networks.use_networking:
223 return NetworkMode(networks[0]) if networks else NetworkMode('none')
224 return NetworkMode(None)
225
226 service_name = get_service_name_from_network_mode(network_mode)
227 if service_name:
228 return ServiceNetworkMode(self.get_service(service_name))
229
230 container_name = get_container_name_from_network_mode(network_mode)
231 if container_name:
232 try:
233 return ContainerNetworkMode(Container.from_id(self.client, container_name))
234 except APIError:
235 raise ConfigurationError(
236 "Service '{name}' uses the network stack of container '{dep}' which "
237 "does not exist.".format(name=service_dict['name'], dep=container_name))
238
239 return NetworkMode(network_mode)
240
241 def get_pid_mode(self, service_dict):
242 pid_mode = service_dict.pop('pid', None)
243 if not pid_mode:
244 return PidMode(None)
245
246 service_name = get_service_name_from_network_mode(pid_mode)
247 if service_name:
248 return ServicePidMode(self.get_service(service_name))
249
250 container_name = get_container_name_from_network_mode(pid_mode)
251 if container_name:
252 try:
253 return ContainerPidMode(Container.from_id(self.client, container_name))
254 except APIError:
255 raise ConfigurationError(
256 "Service '{name}' uses the PID namespace of container '{dep}' which "
257 "does not exist.".format(name=service_dict['name'], dep=container_name)
258 )
259
260 return PidMode(pid_mode)
261
262 def start(self, service_names=None, **options):
263 containers = []
264
265 def start_service(service):
266 service_containers = service.start(quiet=True, **options)
267 containers.extend(service_containers)
268
269 services = self.get_services(service_names)
270
271 def get_deps(service):
272 return {
273 (self.get_service(dep), config)
274 for dep, config in service.get_dependency_configs().items()
275 }
276
277 parallel.parallel_execute(
278 services,
279 start_service,
280 operator.attrgetter('name'),
281 'Starting',
282 get_deps,
283 )
284
285 return containers
286
287 def stop(self, service_names=None, one_off=OneOffFilter.exclude, **options):
288 containers = self.containers(service_names, one_off=one_off)
289
290 def get_deps(container):
291 # actually returning inversed dependencies
292 return {(other, None) for other in containers
293 if container.service in
294 self.get_service(other.service).get_dependency_names()}
295
296 parallel.parallel_execute(
297 containers,
298 self.build_container_operation_with_timeout_func('stop', options),
299 operator.attrgetter('name'),
300 'Stopping',
301 get_deps,
302 )
303
304 def pause(self, service_names=None, **options):
305 containers = self.containers(service_names)
306 parallel.parallel_pause(reversed(containers), options)
307 return containers
308
309 def unpause(self, service_names=None, **options):
310 containers = self.containers(service_names)
311 parallel.parallel_unpause(containers, options)
312 return containers
313
314 def kill(self, service_names=None, **options):
315 parallel.parallel_kill(self.containers(service_names), options)
316
317 def remove_stopped(self, service_names=None, one_off=OneOffFilter.exclude, **options):
318 parallel.parallel_remove(self.containers(
319 service_names, stopped=True, one_off=one_off
320 ), options)
321
322 def down(
323 self,
324 remove_image_type,
325 include_volumes,
326 remove_orphans=False,
327 timeout=None,
328 ignore_orphans=False):
329 self.stop(one_off=OneOffFilter.include, timeout=timeout)
330 if not ignore_orphans:
331 self.find_orphan_containers(remove_orphans)
332 self.remove_stopped(v=include_volumes, one_off=OneOffFilter.include)
333
334 self.networks.remove()
335
336 if include_volumes:
337 self.volumes.remove()
338
339 self.remove_images(remove_image_type)
340
341 def remove_images(self, remove_image_type):
342 for service in self.get_services():
343 service.remove_image(remove_image_type)
344
345 def restart(self, service_names=None, **options):
346 containers = self.containers(service_names, stopped=True)
347
348 parallel.parallel_execute(
349 containers,
350 self.build_container_operation_with_timeout_func('restart', options),
351 operator.attrgetter('name'),
352 'Restarting',
353 )
354 return containers
355
356 def build(self, service_names=None, no_cache=False, pull=False, force_rm=False, memory=None,
357 build_args=None, gzip=False, parallel_build=False):
358
359 services = []
360 for service in self.get_services(service_names):
361 if service.can_be_built():
362 services.append(service)
363 else:
364 log.info('%s uses an image, skipping' % service.name)
365
366 def build_service(service):
367 service.build(no_cache, pull, force_rm, memory, build_args, gzip)
368
369 if parallel_build:
370 _, errors = parallel.parallel_execute(
371 services,
372 build_service,
373 operator.attrgetter('name'),
374 'Building',
375 limit=5,
376 )
377 if len(errors):
378 combined_errors = '\n'.join([
379 e.decode('utf-8') if isinstance(e, six.binary_type) else e for e in errors.values()
380 ])
381 raise ProjectError(combined_errors)
382
383 else:
384 for service in services:
385 build_service(service)
386
387 def create(
388 self,
389 service_names=None,
390 strategy=ConvergenceStrategy.changed,
391 do_build=BuildAction.none,
392 ):
393 services = self.get_services_without_duplicate(service_names, include_deps=True)
394
395 for svc in services:
396 svc.ensure_image_exists(do_build=do_build)
397 plans = self._get_convergence_plans(services, strategy)
398
399 for service in services:
400 service.execute_convergence_plan(
401 plans[service.name],
402 detached=True,
403 start=False)
404
405 def events(self, service_names=None):
406 def build_container_event(event, container):
407 time = datetime.datetime.fromtimestamp(event['time'])
408 time = time.replace(
409 microsecond=microseconds_from_time_nano(event['timeNano']))
410 return {
411 'time': time,
412 'type': 'container',
413 'action': event['status'],
414 'id': container.id,
415 'service': container.service,
416 'attributes': {
417 'name': container.name,
418 'image': event['from'],
419 },
420 'container': container,
421 }
422
423 service_names = set(service_names or self.service_names)
424 for event in self.client.events(
425 filters={'label': self.labels()},
426 decode=True
427 ):
428 # The first part of this condition is a guard against some events
429 # broadcasted by swarm that don't have a status field.
430 # See https://github.com/docker/compose/issues/3316
431 if 'status' not in event or event['status'] in IMAGE_EVENTS:
432 # We don't receive any image events because labels aren't applied
433 # to images
434 continue
435
436 # TODO: get labels from the API v1.22 , see github issue 2618
437 try:
438 # this can fail if the container has been removed
439 container = Container.from_id(self.client, event['id'])
440 except APIError:
441 continue
442 if container.service not in service_names:
443 continue
444 yield build_container_event(event, container)
445
446 def up(self,
447 service_names=None,
448 start_deps=True,
449 strategy=ConvergenceStrategy.changed,
450 do_build=BuildAction.none,
451 timeout=None,
452 detached=False,
453 remove_orphans=False,
454 ignore_orphans=False,
455 scale_override=None,
456 rescale=True,
457 start=True,
458 always_recreate_deps=False,
459 reset_container_image=False,
460 renew_anonymous_volumes=False,
461 silent=False,
462 ):
463
464 self.initialize()
465 if not ignore_orphans:
466 self.find_orphan_containers(remove_orphans)
467
468 if scale_override is None:
469 scale_override = {}
470
471 services = self.get_services_without_duplicate(
472 service_names,
473 include_deps=start_deps)
474
475 for svc in services:
476 svc.ensure_image_exists(do_build=do_build, silent=silent)
477 plans = self._get_convergence_plans(
478 services, strategy, always_recreate_deps=always_recreate_deps)
479
480 def do(service):
481
482 return service.execute_convergence_plan(
483 plans[service.name],
484 timeout=timeout,
485 detached=detached,
486 scale_override=scale_override.get(service.name),
487 rescale=rescale,
488 start=start,
489 reset_container_image=reset_container_image,
490 renew_anonymous_volumes=renew_anonymous_volumes,
491 )
492
493 def get_deps(service):
494 return {
495 (self.get_service(dep), config)
496 for dep, config in service.get_dependency_configs().items()
497 }
498
499 results, errors = parallel.parallel_execute(
500 services,
501 do,
502 operator.attrgetter('name'),
503 None,
504 get_deps,
505 )
506 if errors:
507 raise ProjectError(
508 'Encountered errors while bringing up the project.'
509 )
510
511 return [
512 container
513 for svc_containers in results
514 if svc_containers is not None
515 for container in svc_containers
516 ]
517
518 def initialize(self):
519 self.networks.initialize()
520 self.volumes.initialize()
521
522 def _get_convergence_plans(self, services, strategy, always_recreate_deps=False):
523 plans = {}
524
525 for service in services:
526 updated_dependencies = [
527 name
528 for name in service.get_dependency_names()
529 if name in plans and
530 plans[name].action in ('recreate', 'create')
531 ]
532
533 if updated_dependencies and strategy.allows_recreate:
534 log.debug('%s has upstream changes (%s)',
535 service.name,
536 ", ".join(updated_dependencies))
537 containers_stopped = any(
538 service.containers(stopped=True, filters={'status': ['created', 'exited']}))
539 has_links = any(c.get('HostConfig.Links') for c in service.containers())
540 if always_recreate_deps or containers_stopped or not has_links:
541 plan = service.convergence_plan(ConvergenceStrategy.always)
542 else:
543 plan = service.convergence_plan(strategy)
544 else:
545 plan = service.convergence_plan(strategy)
546
547 plans[service.name] = plan
548
549 return plans
550
551 def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False,
552 include_deps=False):
553 services = self.get_services(service_names, include_deps)
554 msg = not silent and 'Pulling' or None
555
556 if parallel_pull:
557 def pull_service(service):
558 strm = service.pull(ignore_pull_failures, True, stream=True)
559 if strm is None: # Attempting to pull service with no `image` key is a no-op
560 return
561
562 writer = parallel.get_stream_writer()
563
564 for event in strm:
565 if 'status' not in event:
566 continue
567 status = event['status'].lower()
568 if 'progressDetail' in event:
569 detail = event['progressDetail']
570 if 'current' in detail and 'total' in detail:
571 percentage = float(detail['current']) / float(detail['total'])
572 status = '{} ({:.1%})'.format(status, percentage)
573
574 writer.write(
575 msg, service.name, truncate_string(status), lambda s: s
576 )
577
578 _, errors = parallel.parallel_execute(
579 services,
580 pull_service,
581 operator.attrgetter('name'),
582 msg,
583 limit=5,
584 )
585 if len(errors):
586 combined_errors = '\n'.join([
587 e.decode('utf-8') if isinstance(e, six.binary_type) else e for e in errors.values()
588 ])
589 raise ProjectError(combined_errors)
590
591 else:
592 for service in services:
593 service.pull(ignore_pull_failures, silent=silent)
594
595 def push(self, service_names=None, ignore_push_failures=False):
596 unique_images = set()
597 for service in self.get_services(service_names, include_deps=False):
598 # Considering <image> and <image:latest> as the same
599 repo, tag, sep = parse_repository_tag(service.image_name)
600 service_image_name = sep.join((repo, tag)) if tag else sep.join((repo, 'latest'))
601
602 if service_image_name not in unique_images:
603 service.push(ignore_push_failures)
604 unique_images.add(service_image_name)
605
606 def _labeled_containers(self, stopped=False, one_off=OneOffFilter.exclude):
607 ctnrs = list(filter(None, [
608 Container.from_ps(self.client, container)
609 for container in self.client.containers(
610 all=stopped,
611 filters={'label': self.labels(one_off=one_off)})])
612 )
613 if ctnrs:
614 return ctnrs
615
616 return list(filter(lambda c: c.has_legacy_proj_name(self.name), filter(None, [
617 Container.from_ps(self.client, container)
618 for container in self.client.containers(
619 all=stopped,
620 filters={'label': self.labels(one_off=one_off, legacy=True)})])
621 ))
622
623 def containers(self, service_names=None, stopped=False, one_off=OneOffFilter.exclude):
624 if service_names:
625 self.validate_service_names(service_names)
626 else:
627 service_names = self.service_names
628
629 containers = self._labeled_containers(stopped, one_off)
630
631 def matches_service_names(container):
632 return container.labels.get(LABEL_SERVICE) in service_names
633
634 return [c for c in containers if matches_service_names(c)]
635
636 def find_orphan_containers(self, remove_orphans):
637 def _find():
638 containers = self._labeled_containers()
639 for ctnr in containers:
640 service_name = ctnr.labels.get(LABEL_SERVICE)
641 if service_name not in self.service_names:
642 yield ctnr
643 orphans = list(_find())
644 if not orphans:
645 return
646 if remove_orphans:
647 for ctnr in orphans:
648 log.info('Removing orphan container "{0}"'.format(ctnr.name))
649 ctnr.kill()
650 ctnr.remove(force=True)
651 else:
652 log.warning(
653 'Found orphan containers ({0}) for this project. If '
654 'you removed or renamed this service in your compose '
655 'file, you can run this command with the '
656 '--remove-orphans flag to clean it up.'.format(
657 ', '.join(["{}".format(ctnr.name) for ctnr in orphans])
658 )
659 )
660
661 def _inject_deps(self, acc, service):
662 dep_names = service.get_dependency_names()
663
664 if len(dep_names) > 0:
665 dep_services = self.get_services(
666 service_names=list(set(dep_names)),
667 include_deps=True
668 )
669 else:
670 dep_services = []
671
672 dep_services.append(service)
673 return acc + dep_services
674
675 def build_container_operation_with_timeout_func(self, operation, options):
676 def container_operation_with_timeout(container):
677 if options.get('timeout') is None:
678 service = self.get_service(container.service)
679 options['timeout'] = service.stop_timeout(None)
680 return getattr(container, operation)(**options)
681 return container_operation_with_timeout
682
683
684 def get_volumes_from(project, service_dict):
685 volumes_from = service_dict.pop('volumes_from', None)
686 if not volumes_from:
687 return []
688
689 def build_volume_from(spec):
690 if spec.type == 'service':
691 try:
692 return spec._replace(source=project.get_service(spec.source))
693 except NoSuchService:
694 pass
695
696 if spec.type == 'container':
697 try:
698 container = Container.from_id(project.client, spec.source)
699 return spec._replace(source=container)
700 except APIError:
701 pass
702
703 raise ConfigurationError(
704 "Service \"{}\" mounts volumes from \"{}\", which is not the name "
705 "of a service or container.".format(
706 service_dict['name'],
707 spec.source))
708
709 return [build_volume_from(vf) for vf in volumes_from]
710
711
712 def get_secrets(service, service_secrets, secret_defs):
713 secrets = []
714
715 for secret in service_secrets:
716 secret_def = secret_defs.get(secret.source)
717 if not secret_def:
718 raise ConfigurationError(
719 "Service \"{service}\" uses an undefined secret \"{secret}\" "
720 .format(service=service, secret=secret.source))
721
722 if secret_def.get('external'):
723 log.warn("Service \"{service}\" uses secret \"{secret}\" which is external. "
724 "External secrets are not available to containers created by "
725 "docker-compose.".format(service=service, secret=secret.source))
726 continue
727
728 if secret.uid or secret.gid or secret.mode:
729 log.warn(
730 "Service \"{service}\" uses secret \"{secret}\" with uid, "
731 "gid, or mode. These fields are not supported by this "
732 "implementation of the Compose file".format(
733 service=service, secret=secret.source
734 )
735 )
736
737 secrets.append({'secret': secret, 'file': secret_def.get('file')})
738
739 return secrets
740
741
742 class NoSuchService(Exception):
743 def __init__(self, name):
744 if isinstance(name, six.binary_type):
745 name = name.decode('utf-8')
746 self.name = name
747 self.msg = "No such service: %s" % self.name
748
749 def __str__(self):
750 return self.msg
751
752
753 class ProjectError(Exception):
754 def __init__(self, msg):
755 self.msg = msg
756
[end of compose/project.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| docker/compose | 14e7a11b3c96f0ea818c8c28d84a1aff7967e579 | Upgrade `events` to use the new API fields
In API version 1.22 the events structure was updated to include labels and better names for fields.
We should update to the new field names, and use labels directly from the event, instead of having to query for them with inspect.
Ref: https://github.com/docker/docker/pull/18888
"container destroyed" event is not published
When using "docker-compose events" container destroyed events are not shown, even though they are generated by docker itself.
**Output of `docker-compose version`**
```
docker-compose version 1.23.1, build b02f1306
docker-py version: 3.5.0
CPython version: 3.6.6
OpenSSL version: OpenSSL 1.1.0h 27 Mar 2018
```
**Output of `docker version`**
```
Client: Docker Engine - Community
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:47:43 2018
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:55:00 2018
OS/Arch: linux/amd64
Experimental: false
```
**Output of `docker-compose config`**
(Make sure to add the relevant `-f` and other flags)
```
services:
hello:
image: hello-world
version: '3.0'
```
## Steps to reproduce the issue
Compare:
1. docker-compose events &
2. docker-compose up
3. docker-compose down
with
1. docker events --filter=type=container &
2. docker-compose up
3. docker-compose down
### Observed result
```
2018-11-22 15:14:42.067545 container create 9767377cf6a576b73632c8ab7defb58c0dac51ec9212f09d862b5c288c59a8f7 (image=hello-world, name=docker-composer_hello_1_e3895d8ea937)
2018-11-22 15:14:42.084831 container attach 9767377cf6a576b73632c8ab7defb58c0dac51ec9212f09d862b5c288c59a8f7 (image=hello-world, name=docker-composer_hello_1_e3895d8ea937)
2018-11-22 15:14:42.654275 container start 9767377cf6a576b73632c8ab7defb58c0dac51ec9212f09d862b5c288c59a8f7 (image=hello-world, name=docker-composer_hello_1_e3895d8ea937)
2018-11-22 15:14:42.774647 container die 9767377cf6a576b73632c8ab7defb58c0dac51ec9212f09d862b5c288c59a8f7 (image=hello-world, name=docker-composer_hello_1_e3895d8ea937)
```
### Expected result
```
2018-11-22T15:15:18.112918200+01:00 container create 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
2018-11-22T15:15:18.127280300+01:00 container attach 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
2018-11-22T15:15:18.683667800+01:00 container start 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
2018-11-22T15:15:18.821344800+01:00 container die 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, exitCode=0, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
2018-11-22T15:15:22.065739100+01:00 container destroy 8da39f0308b3f584606548df256e2bc161aacb683419917d203678688b310794 (com.docker.compose.config-hash=22e81a07349b37675f247bfe03a045dc80de04b082b3a396eb5e5f6ba313956b, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=docker-composer, com.docker.compose.service=hello, com.docker.compose.slug=bc6436f59b6de14f3fb5f1bffa0633633fb9c1979ec4628fc4354d83f719726, com.docker.compose.version=1.23.1, image=hello-world, name=docker-composer_hello_1_bc6436f59b6d)
```
| This upgrade may also allow us to handle image events. Since we won't need to inspect every event, we'll be able to filter on the client for just the events about images in the Compsoe file.
Unless there is some other reason to upgrade the minimum API version for the Compose v1 format, I don't think we'll do this for 1.7.
Thanks for the report!
I believe this is due to [this particular slice of code](https://github.com/docker/compose/blob/master/compose/project.py#L436-L440), and would likely be resolved through #2618.
We'll take a closer look soon! | 2018-12-11T01:55:02Z | <patch>
diff --git a/compose/cli/log_printer.py b/compose/cli/log_printer.py
--- a/compose/cli/log_printer.py
+++ b/compose/cli/log_printer.py
@@ -236,7 +236,8 @@ def watch_events(thread_map, event_stream, presenters, thread_args):
thread_map[event['id']] = build_thread(
event['container'],
next(presenters),
- *thread_args)
+ *thread_args
+ )
def consume_queue(queue, cascade_stop):
diff --git a/compose/const.py b/compose/const.py
--- a/compose/const.py
+++ b/compose/const.py
@@ -7,7 +7,6 @@
DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = 60
-IMAGE_EVENTS = ['delete', 'import', 'load', 'pull', 'push', 'save', 'tag', 'untag']
IS_WINDOWS_PLATFORM = (sys.platform == "win32")
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
diff --git a/compose/project.py b/compose/project.py
--- a/compose/project.py
+++ b/compose/project.py
@@ -10,13 +10,13 @@
import enum
import six
from docker.errors import APIError
+from docker.utils import version_lt
from . import parallel
from .config import ConfigurationError
from .config.config import V1
from .config.sort_services import get_container_name_from_network_mode
from .config.sort_services import get_service_name_from_network_mode
-from .const import IMAGE_EVENTS
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
@@ -402,11 +402,13 @@ def create(
detached=True,
start=False)
- def events(self, service_names=None):
+ def _legacy_event_processor(self, service_names):
+ # Only for v1 files or when Compose is forced to use an older API version
def build_container_event(event, container):
time = datetime.datetime.fromtimestamp(event['time'])
time = time.replace(
- microsecond=microseconds_from_time_nano(event['timeNano']))
+ microsecond=microseconds_from_time_nano(event['timeNano'])
+ )
return {
'time': time,
'type': 'container',
@@ -425,17 +427,15 @@ def build_container_event(event, container):
filters={'label': self.labels()},
decode=True
):
- # The first part of this condition is a guard against some events
- # broadcasted by swarm that don't have a status field.
+ # This is a guard against some events broadcasted by swarm that
+ # don't have a status field.
# See https://github.com/docker/compose/issues/3316
- if 'status' not in event or event['status'] in IMAGE_EVENTS:
- # We don't receive any image events because labels aren't applied
- # to images
+ if 'status' not in event:
continue
- # TODO: get labels from the API v1.22 , see github issue 2618
try:
- # this can fail if the container has been removed
+ # this can fail if the container has been removed or if the event
+ # refers to an image
container = Container.from_id(self.client, event['id'])
except APIError:
continue
@@ -443,6 +443,56 @@ def build_container_event(event, container):
continue
yield build_container_event(event, container)
+ def events(self, service_names=None):
+ if version_lt(self.client.api_version, '1.22'):
+ # New, better event API was introduced in 1.22.
+ return self._legacy_event_processor(service_names)
+
+ def build_container_event(event):
+ container_attrs = event['Actor']['Attributes']
+ time = datetime.datetime.fromtimestamp(event['time'])
+ time = time.replace(
+ microsecond=microseconds_from_time_nano(event['timeNano'])
+ )
+
+ container = None
+ try:
+ container = Container.from_id(self.client, event['id'])
+ except APIError:
+ # Container may have been removed (e.g. if this is a destroy event)
+ pass
+
+ return {
+ 'time': time,
+ 'type': 'container',
+ 'action': event['status'],
+ 'id': event['Actor']['ID'],
+ 'service': container_attrs.get(LABEL_SERVICE),
+ 'attributes': dict([
+ (k, v) for k, v in container_attrs.items()
+ if not k.startswith('com.docker.compose.')
+ ]),
+ 'container': container,
+ }
+
+ def yield_loop(service_names):
+ for event in self.client.events(
+ filters={'label': self.labels()},
+ decode=True
+ ):
+ # TODO: support other event types
+ if event.get('Type') != 'container':
+ continue
+
+ try:
+ if event['Actor']['Attributes'][LABEL_SERVICE] not in service_names:
+ continue
+ except KeyError:
+ continue
+ yield build_container_event(event)
+
+ return yield_loop(set(service_names) if service_names else self.service_names)
+
def up(self,
service_names=None,
start_deps=True,
</patch> | [] | [] | |||
ytdl-org__youtube-dl-1591 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | ytdl-org/youtube-dl | b4cdc245cf0af0672207a5090cb6eb6c29606cdb | "Opus audio conversion failure\nWhen trying to extract and convert audio from a youtube video into a(...TRUNCATED) | "I had the same issue and after an hour-long debug session with a friend of mine we found out that t(...TRUNCATED) | 2013-10-12T11:32:27Z | "<patch>\ndiff --git a/youtube_dl/PostProcessor.py b/youtube_dl/PostProcessor.py\n--- a/youtube_dl/P(...TRUNCATED) | [] | [] | |||
numpy__numpy-13703 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | numpy/numpy | 40ada70d9efc903097b2ff3f968c23a7e2f14296 | "Dtype.base attribute not documented\ndtype instances have a `base` attribute that, I think, is mean(...TRUNCATED) | "I've found myself using `.subdtype` instead, which contains the same info\nI think base also means (...TRUNCATED) | 2019-06-03T20:15:13Z | "<patch>\ndiff --git a/numpy/core/_add_newdocs.py b/numpy/core/_add_newdocs.py\n--- a/numpy/core/_ad(...TRUNCATED) | [] | [] | |||
wagtail__wagtail-7855 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | wagtail/wagtail | 4550bba562286992b27e667b071451e6886fbf44 | "Use html_url in alias_of API serialisation\nAs per https://github.com/wagtail/wagtail/pull/7669#iss(...TRUNCATED) | 2022-01-13T15:50:23Z | "<patch>\ndiff --git a/wagtail/api/v2/serializers.py b/wagtail/api/v2/serializers.py\n--- a/wagtail/(...TRUNCATED) | [] | [] | ||||
pandas-dev__pandas-27237 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | pandas-dev/pandas | f683473a156f032a64a1d7edcebde21c42a8702d | "Add key to sorting functions\nMany python functions (sorting, max/min) accept a key argument, perh(...TRUNCATED) | "Here's a specific use case that came up on [StackOverflow](http://stackoverflow.com/questions/29580(...TRUNCATED) | 2019-07-04T23:04:26Z | "<patch>\ndiff --git a/doc/source/user_guide/basics.rst b/doc/source/user_guide/basics.rst\n--- a/do(...TRUNCATED) | [] | [] | |||
conan-io__conan-3187 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | conan-io/conan | c3baafb780b6e5498f8bd460426901d9d5ab10e1 | "Using \"?\" in a tools.get() URL will fail, while using it in a tools.download() will succeed.\nTo (...TRUNCATED) | "Verified: ``tools.get()`` uses the last part of the URL as the name of the file to be saved.\r\n\r\(...TRUNCATED) | 2018-07-10T10:30:23Z | "<patch>\ndiff --git a/conans/client/tools/net.py b/conans/client/tools/net.py\n--- a/conans/client/(...TRUNCATED) | [] | [] | |||
googleapis__google-cloud-python-3348 | "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED) | googleapis/google-cloud-python | 520637c245d461db8ee45ba466d763036b82ea42 | Error reporting system tests needed
Follow up to #3263.
| 2017-05-01T22:40:28Z | "<patch>\ndiff --git a/error_reporting/nox.py b/error_reporting/nox.py\n--- a/error_reporting/nox.py(...TRUNCATED) | [] | [] |
Dataset Card for "SWE-bench_oracle"
Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
The dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
This dataset SWE-bench_oracle
includes a formatting of each instance using the "Oracle" retrieval setting as described in the paper. The text
column can be used directly with LMs to generate patch files.
Models are instructed to generate patch
formatted file using the following template:
<patch>
diff
--- a/path/to/file.py
--- b/path/to/file.py
@@ -1,3 +1,3 @@
This is a test file.
-It contains several lines.
+It has been modified.
This is the third line.
</patch>
This format can be used directly with the SWE-bench inference scripts. Please refer to these scripts for more details on inference.
Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com
Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
Dataset Structure
Data Instances
An example of a SWE-bench datum is as follows:
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
text: (str) - The input text including instructions, the "Oracle" retrieved file, and an example of the patch format for output.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
- Downloads last month
- 699