tugot17 committed
Commit
0b57fdd
1 Parent(s): 641ab1e

Update app.py

Files changed (1)
  1. app.py +17 -14
app.py CHANGED
@@ -9,13 +9,13 @@ classifier = pipeline("text-classification",
                       tokenizer=scibert_tokenizer)
 
 
-title_1 = "Type 2 Diabetes Mellitus, Oral Diabetic Medications, Insulin Therapy, and Overall Breast Cancer Risk"
-abstract_1 = "Breast cancer is among the most common cancers worldwide. Diabetes is an important chronic health problem associated with insulin resistance, increased insulin level, changes in growth hormones and factors, and activation of mitogen-activating protein kinase (MAPK) pathways, leading to an increased breast cancer risk. This paper looked at the epidemiologic studies of the association between type 2 diabetes and risk of breast cancer and its effect on overall cancer-specific survival. The combined evidence overall supported a modest association between type 2 diabetes and the risk of breast cancer, which was found to be more prevalent among postmenopausal women. Effect of oral diabetics and insulin therapy on breast cancer risk was also evaluated. It was found that metformin and thiazolidinones tended to have a protective role. Metformin therapy trials for its use as an adjuvant for breast cancer treatment are still ongoing. Sulfonylurea and insulin therapy were found to be mildly associated with increased overall cancers. No evidence or studies evaluated the association of DPPIV inhibitors and GLP 1 agonists with breast cancer risk because of their recent introduction into the management of diabetes."
-text_1 = """During the 2 nd International Go Game Science Conference, held within the LIX European Go Congress (Liberec, CZ) on the 29 th -30 th July 2015, we presented PhotoKifu, 1 a software for Windows operating systems indeed able to reconstruct the record of a Go game from a series of photographs taken during the game itself, each one after each move played. We described the program in detail, explaining the importance of taking each photograph immediately after a stone had been released on the goban; we described the algorithms employed to first identify the grid on the goban, also inferring the position and orientation of the camera, then to track the grid through the pictures (in order to compensate for small, accidental movements like some bumping on the table and so on), and eventually to detect every new stone placed on the goban. We also described how to avoid false negatives (stones not detected), false positives (stones wrongly detected), how to circumvent problems caused by "disturbance" (for example, hands of the players still visible in the pictures) as well as missing or duplicate pictures and how, in the worst cases, manual correction of the moves is allowed. The performance of the program was """
+title_1 = "Depth Control Method of Profiling Float Based on an Improved Double PD Controller"
+abstract_1 = "A kinematic equation of profiling float is nonlinear and has time-varying parameters. Traditional PD controllers not only demonstrate an inconsistent response to different depth controls but also face problems of overshooting and high power consumption. To realize the goal of depth control of profiling buoy under low power consumption, an improved double PD control method was proposed in this paper. The real-time prediction of position and low-power running of the sensor were realized through sparse sampling and depth prediction. The combination control over position, speed, and flow was realized by introducing the speed and flow expectation function. Then, a MATLAB/Simulink simulation model was constructed, and the proposed controller was compared with a single PD controller and an improved single PD controller. Among ten depth control tests, the proposed method was superior given its short response time, small overshooting, small steady-state error, and low power consumption. Moreover, it achieved a consistent control effect on different target depths. The simulation results demonstrated that a nonlinear and time-varying floating system controlled by the proposed method has favorable robustness and stability. This system will consume minimal power simultaneously."
+text_1 = """ With the increase in the cognition of marine and accelerating ocean exploitation, research and development of submersible vehicles with large diving depth and long voyage has become a research hotspot [1] . Autonomous underwater vehicle (AUV) [2] , autonomous profile observation float (hereinafter referred to as ''float'') [3] , manned deep submersible vehicle, and underwater gliders have attracted considerable attention and have been studied extensively."""
 
-title_2 = "Diagnosing and quantifying a common deficit in multiple sclerosis: Internuclear ophthalmoplegia"
-abstract_2 = "Objective We present an objective and quantitative approach for diagnosing internuclear ophthalmoplegia (INO) in multiple sclerosis (MS). Methods A validated standardized infrared oculography protocol (DEMoNS [Demonstrate Eye Movement Networks with Saccades]) was used for quantifying prosaccades in patients with MS and healthy controls (HCs). The versional dysconjugacy index (VDI) was calculated, which describes the ratio between the abducting and adducting eye. The VDI was determined for peak velocity, peak acceleration, peak velocity divided by amplitude, and area under the curve (AUC) of the saccadic trajectory. We calculated the diagnostic accuracy for the several VDI parameters by a receiver operating characteristic analysis comparing HCs and patients with MS. The National Eye Institute Visual Function Questionnaire–25 was used to investigate vision-related quality of life of MS patients with INO. Results Two hundred ten patients with MS and 58 HCs were included. The highest diagnostic accuracy was achieved by the VDI AUC of 15° horizontal prosaccades. Based on a combined VDI AUC and peak velocity divided by amplitude detection, the prevalence of an INO in MS calculated to 34%. In the INO group, 35.2% of the patients with MS reported any complaints of double vision, compared to 18.4% in the non-INO group (p = 0.010). MS patients with an INO had a lower overall vision-related quality of life (median 89.9, interquartile range 12.8) compared to patients without an INO (median 91.8, interquartile range 9.3, p = 0.011). Conclusions This study provides an accurate quantitative and clinically relevant definition of an INO in MS. This infrared oculography-based INO standard will require prospective validation. The high prevalence of INO in MS provides an anatomically well described and accurately quantifiable model for treatment trials in MS."
-text_2 = """ During the 2 nd International Go Game Science Conference, held within the LIX European Go Congress (Liberec, CZ) on the 29 th -30 th July 2015, we presented PhotoKifu, 1 a software for Windows operating systems indeed able to reconstruct the record of a Go game from a series of photographs taken during the game itself, each one after each move played. We described the program in detail, explaining the importance of taking each photograph immediately after a stone had been released on the goban; we described the algorithms employed to first identify the grid on the goban, also inferring the position and orientation of the camera, then to track the grid through the pictures (in order to compensate for small, accidental movements like some bumping on the table and so on), and eventually to detect every new stone placed on the goban. We also described how to avoid false negatives (stones not detected), false positives (stones wrongly detected), how to circumvent problems caused by "disturbance" (for example, hands of the players still visible in the pictures) as well as missing or duplicate pictures and how, in the worst cases, manual correction of the moves is allowed. The performance of the program was very good, as shown in the paper we wrote for the occasion [CC15] , and further improved in the following releases of PhotoKifu, when we were eventually able to make use of the OpenCV 2 library, in place of the effective, but slow, ImageMagick 3 suite that was needed for some preprocessing of the pictures before the actual algorithms could do their job."""
+title_2 = "Cosmography and Data Visualization"
+abstract_2 = " Cosmography, the study and making of maps of the universe or cosmos, is a field where visual representation benefits from modern three-dimensional visualization techniques and media. At the extragalactic distance scales, visualization is contributing in understanding the complex structure of the local universe, in terms of spatial distribution and flows of galaxies and dark matter. In this paper, we report advances in the field of extragalactic cosmography obtained using the SDvision visualization software in the context of the Cosmicflows Project. Here, multiple visualization techniques are applied to a variety of data products: catalogs of galaxy positions and galaxy peculiar velocities, reconstructed velocity field, density field, gravitational potential field, velocity shear tensor viewed in terms of its eigenvalues and eigenvectors, envelope surfaces enclosing basins of attraction. These visualizations, implemented as high-resolution images, videos, and interactive viewers, have contributed to a number of studies: the cosmography of the local part of the universe, the nature of the Great Attractor, the discovery of the boundaries of our home supercluster of galaxies Laniakea, the mapping of the cosmic web, the study of attractors and repellers."
+text_2 = """Throughout the ages, astronomers have strived to materialize their discoveries and understanding of the cosmos by the means of visualizations. The oldest known depiction of celestial objects, the Nebra sky disc, dates back from the Bronze age, 3600 years ago (Benson 2014) . While most of the astronomical representations are projections to two dimensional sketches and images, the introduction of the third dimension in depictive apparatuses has been sought by astronomers as an essential mean to promote understanding. Such objects as the armillary spheres, dating back from the Hellenistic world, or the modern era orreries used to mechanically model the solar system, played an important role in the history of astronomy. Today, computer-based interactive three-dimensional visualization techniques have become a fruitful research tool. Here, we present the impact of visualization on cosmography in the context of the Cosmicflows Project."""
 
 class_map = {"LABEL_0": 'Biology 🦠🧬🦖',
              "LABEL_1": 'Chemistry 👨‍🔬⚗️🔬',
@@ -27,15 +27,18 @@ class_map = {"LABEL_0": 'Biology 🦠🧬🦖',
 ['Biology', 'Chemistry', 'Computer Science', 'Medicine', 'Physics']
 
 def predict(title:str, abstract: str, aricle_text):
-    output = {}
-
-    text = f"{title}\n {abstract}\n {aricle_text}"
-
-    for result in classifier(text, top_k=None):
-        label = class_map[result["label"]]
-        output[label] = result["score"]
-
-    return output
+    output = {}
+
+    input_text = f"{title}\n {abstract}\n {aricle_text}"
+
+    results = classifier(input_text, top_k=None)
+    print(results)
+
+    for result in results:
+        label = class_map[result["label"]]
+        output[label] = result["score"]
+
+    return output
 
 iface = gr.Interface(
     fn=predict,
@@ -44,10 +47,10 @@ iface = gr.Interface(
             gr.Textbox(lines=5, label="Article Text")],
     outputs=gr.Label(num_top_classes=5),
     examples=[[title_1, abstract_1, text_1], [title_2, abstract_2, text_2]],
-    title="Article topic classifier",
+    title="Article topic classifier",
    description= "Upload the paper title, its abstract, and the beginning of the text. Our model will figure out whether this is a Biology, Chemistry, Computer Science, Medicine or Physics related article."
     )
 
-iface.launch()
+iface.launch(debug=True)
 
 
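The reworked `predict()` in this commit concatenates title, abstract, and article text, runs the text-classification pipeline with `top_k=None` (all labels), and maps the pipeline's `LABEL_*` ids to human-readable topics. A minimal sketch of that flow, with a stand-in `fake_classifier` in place of the real SciBERT pipeline (its scores are invented) and the emoji suffixes from `app.py`'s `class_map` dropped:

```python
# Sketch of the updated predict() flow from this commit. fake_classifier is a
# hypothetical stand-in for transformers' pipeline("text-classification", ...),
# which with top_k=None returns a list of {"label", "score"} dicts.

class_map = {"LABEL_0": "Biology",
             "LABEL_1": "Chemistry",
             "LABEL_2": "Computer Science",
             "LABEL_3": "Medicine",
             "LABEL_4": "Physics"}

def fake_classifier(text, top_k=None):
    # Stand-in for the SciBERT pipeline; the scores below are made up.
    return [{"label": "LABEL_4", "score": 0.92},
            {"label": "LABEL_2", "score": 0.05}]

def predict(title, abstract, article_text):
    output = {}
    # Same concatenation scheme as app.py: title, abstract, then body text.
    input_text = f"{title}\n {abstract}\n {article_text}"
    results = fake_classifier(input_text, top_k=None)
    for result in results:
        label = class_map[result["label"]]
        output[label] = result["score"]
    return output

scores = predict("Cosmography and Data Visualization", "abstract", "body")
print(scores)  # {'Physics': 0.92, 'Computer Science': 0.05}
```

Returning a plain `{label: score}` dict is exactly the shape `gr.Label` expects, which is why the Gradio interface can render the top classes directly from `predict`'s output.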