import gradio as gr
import time
import re

from ez_cite import ez_cite


example1 = r"""Instead of measuring physical systems and then processing the classical measurement outcomes to infer properties of the physical systems, quantum sensors will eventually be able to transduce quantum information in physical systems directly to a quantum memory, where it can be processed by a quantum computer."""

example2 = r"""In order for a learning model to generalise well from training data, it is often crucial to encode some knowledge about the structure of the data into the model itself. Convolutional neural networks are a classic illustration of this principle, whose success at image related tasks is often credited to the existence of model structures that relate to label invariance of the data under translation symmetries. Together with the choice of loss function and hyperparameters, these structures form part of the basic assumptions that a learning model makes about the data, which is commonly referred to as the \textit{inductive bias} of the model. One of the central challenges facing quantum machine learning is to identify data structures that can be encoded usefully into quantum learning models; in other words, what are the forms of inductive bias that naturally lend themselves to quantum computation? In answering this question, we should be wary of hoping for a one-size-fits-all approach in which quantum models outperform neural network models at generic learning tasks. Rather, effort should be placed in understanding how the Hilbert space structure and probabilistic nature of the theory suggest particular biases for which quantum machine learning may excel. Indeed, an analogous perspective is commonplace in quantum computation, where computational advantages are expected only for specific problems that happen to benefit from the peculiarities of quantum logic."""

example3 = r"""Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures. Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks and conditional computation, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences. In all but a few cases, however, such attention mechanisms are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs."""


# Wrap ez_cite so Gradio can call it: take the raw introduction text and
# return the citation-annotated .tex text plus the matching .bib entries.
def generate_cite_and_bib_data(introduction):
    cite_text, bib_data = ez_cite(introduction, debug=False)
    return cite_text, bib_data


iface = gr.Interface(
    fn=generate_cite_and_bib_data,
    inputs=[
        gr.Textbox(value='Enter your introduction here', label='introduction',
                   show_label=True, lines=10)
    ],
    outputs=[
        gr.Textbox(info='This may take several minutes.', label='.tex', lines=5,
                   show_label=True, show_copy_button=True, interactive=True),
        gr.Textbox(info='This may take several minutes.', label='.bib', lines=5,
                   show_label=True, show_copy_button=True, interactive=True)
    ],
    live=False,
    title="Easy Cite App (beta-v0.0.5 made by Chip)",
    description="Enter your introduction and click Submit to generate the cite text (.tex) and bib data (.bib).",
    css="""
    .output { white-space: pre-line; }
    .container { width: 100%; margin: auto; padding: 5px; }
    .textbox { width: 100%; }
    """,
    examples=[
        [example1],
        [example2],
        [example3],
    ]
)

iface.launch(share=True)
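
# --- Optional: programmatic use without the web UI (a minimal sketch) ---
# The helper below is illustrative, not part of the app: it assumes, as the
# wrapper above does, that ez_cite returns two plain strings, and the file
# names are placeholders. Because iface.launch() blocks while the server is
# running, call this from a separate script (or before launching) rather than
# expecting it to run here.
def save_cite_and_bib(introduction, tex_path="introduction.tex", bib_path="references.bib"):
    """Run ez_cite on the given introduction and write the .tex/.bib outputs to disk."""
    cite_text, bib_data = generate_cite_and_bib_data(introduction)
    with open(tex_path, "w", encoding="utf-8") as f:
        f.write(cite_text)
    with open(bib_path, "w", encoding="utf-8") as f:
        f.write(bib_data)


# Example (hypothetical, not executed while the Gradio server is up):
# save_cite_and_bib(example1)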