Commit aa7ef71
zhaoxiang committed
Parent(s): 5db794d

fix query match bug

Files changed:
- __pycache__/ez_cite.cpython-310.pyc +0 -0
- app.py +2 -8
- ez_cite.py +5 -2
__pycache__/ez_cite.cpython-310.pyc
CHANGED
Binary files a/__pycache__/ez_cite.cpython-310.pyc and b/__pycache__/ez_cite.cpython-310.pyc differ
app.py
CHANGED
@@ -8,12 +8,7 @@ from ez_cite import ez_cite
 
 example1 = r"""Instead of measuring physical systems and then processing the classical measurement outcomes to infer properties of the physical systems, quantum sensors will eventually be able to transduce quantum information in physical systems directly to a quantum memory, where it can be processed by a quantum computer."""
 
-
-example2 = r"""In order for a learning model to generalise well from training data, it is often crucial to encode some knowledge about the structure of the data into the model itself. Convolutional neural networks are a classic illustration of this principle, whose success at image related tasks is often credited to the existence of model structures that relate to label invariance of the data under translation symmetries. Together with the choice of loss function and hyperparameters, these structures form part of the basic assumptions that a learning model makes about the data, which is commonly referred to as the \textit{inductive bias} of the model.
-
-One of the central challenges facing quantum machine learning is to identify data structures that can be encoded usefully into quantum learning models; in other words, what are the forms of inductive bias that naturally lend themselves to quantum computation? In answering this question, we should be wary of hoping for a one-size-fits-all approach in which quantum models outperform neural network models at generic learning tasks. Rather, effort should be placed in understanding how the Hilbert space structure and probabilistic nature of the theory suggest particular biases for which quantum machine learning may excel. Indeed, an analogous perspective is commonplace in quantum computation, where computational advantages are expected only for specific problems that happen to benefit from the peculiarities of quantum logic."""
-
-example3 = r"""Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures.
+example2 = r"""Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures.
 
 Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks and conditional computation, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
 
@@ -45,8 +40,7 @@ iface = gr.Interface(
     """,
     examples=[
         [example1],
-        [example2]
-        [example3]
+        [example2]
     ]
 )
 
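For context on the examples hunk above, here is a minimal, hedged sketch of how gr.Interface consumes an examples list; the stub function, Textbox settings, and shortened example strings are assumptions for illustration, not the Space's actual configuration. Each example row is a list with one entry per input component, which is why a single-input example is wrapped as [example1].

import gradio as gr

# Hypothetical stand-in for the ez_cite function imported in app.py.
def ez_cite_stub(paragraph: str) -> str:
    # The real function returns the paragraph with citations inserted.
    return paragraph

example1 = "Quantum sensors will eventually transduce quantum information directly to a quantum memory."
example2 = "Recurrent models factor computation along the symbol positions of the input and output sequences."

iface = gr.Interface(
    fn=ez_cite_stub,
    inputs=gr.Textbox(lines=8, label="Paragraph"),
    outputs=gr.Textbox(label="Paragraph with citations"),
    examples=[
        [example1],  # one inner list per example row
        [example2],
    ],
)

if __name__ == "__main__":
    iface.launch()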
ez_cite.py
CHANGED
@@ -267,8 +267,11 @@ def main(sentences, count, client, llm_model, max_tokens, service_context):
 
     # Define a regular expression pattern to find the value of 'query'
    pattern = r"'query': '(.*?)'"
-
-
+    matches = re.findall(pattern, response)
+    if matches:
+        search_query = matches[0]
+    else:
+        search_query = sentence[:2]  # use the first two words as the search query
 
     relevant_papers = get_relevant_papers(search_query, sort=True, count=count)
 
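The fix wraps the regex lookup in a guard: re.findall collects every non-greedy match of the quoted 'query' value, the first hit becomes the search query, and an empty result falls back to a slice of the sentence. One caveat: sentence[:2] slices the first two characters, not the first two words that the inline comment describes, and the non-greedy (.*?) stops at the first single quote, so a query containing an apostrophe would be truncated. Below is a minimal, hedged sketch of the same idea with a word-based fallback; the helper name extract_search_query and the sample response string are illustrative assumptions, not code from the repository.

import re

# Sketch of the committed pattern-matching logic; extract_search_query is a
# hypothetical helper, not a function defined in ez_cite.py.
QUERY_PATTERN = r"'query': '(.*?)'"  # non-greedy: capture up to the first closing quote

def extract_search_query(response: str, sentence: str) -> str:
    matches = re.findall(QUERY_PATTERN, response)
    if matches:
        return matches[0]
    # Fallback when the response has no 'query' field. The commit uses
    # sentence[:2] (first two characters); splitting on whitespace gives the
    # first two words instead, matching the comment's stated intent.
    return " ".join(sentence.split()[:2])

# Illustrative response in the stringified-dict shape the pattern expects.
response = "{'query': 'quantum sensor transduction', 'reason': 'topic match'}"
print(extract_search_query(response, "Quantum sensors will transduce information."))
# -> quantum sensor transduction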