_id | text | label
---|---|---|
265cbe01-23ea-4e4d-9e35-4edb7a97dc2f | Metadata is as essential as the primary data for benefiting from recent analytics tools and predictive modeling through machine learning. Metadata helps to contextualize the primary datasets and adds explanatory variables to the predictive model, making it more robust. The use of cloud technology, telecommunications, and tablets with embedded optimized forms could facilitate gathering such third-party information. The cloud would help to store the data and to perform further analysis; the internet connection could help to gather GPS coordinates and record where the data were gathered (not only at the household level but at the farm itself, to allow the analysis of biogeophysical parameters from RSPs). For this to occur, at least three enabling technologies are required: (i) improved internet connectivity in the rural areas where most farms are located; (ii) the inclusion of metadata gathering in agricultural surveys; (iii) the renewal of data-gathering tools to migrate from paper and laptops to tablets, which are better suited to such a task. This approach of using emerging and well-established technologies to support better-quality data gathering about the agricultural sector will progressively require fewer resources, since some data will no longer need to be updated from the ground thanks to remote sensing.
| d |
c99f5ea6-9696-48ed-84f0-8684c60b57fe | Information asymmetry between researchers and policymakers is longstanding in Africa, especially in the agricultural sector. Moreover, the fast pace of staff turnover in public offices makes the consolidation of technical knowledge within an institution difficult. For instance, an individual at the national bureau of statistics could be trained to work with remote sensing products and machine learning techniques in one year; the following year, they could be in another ministry, in another entity of the same ministry, or have transitioned to another institution. From a general point of view, the training is not lost. However, the corresponding technical capacity moves from one entity to another, with the risk of not being used where it is needed most.
| d |
f5076e31-bb12-482e-9036-15d9ab74e8fd | The complexity of African cropping systems makes it difficult to collect accurate and timely data sustainably. On the one hand, this data scarcity does not allow the type of detailed analysis that decision-making requires in times of uncertainty. On the other hand, when the data quality and disaggregation requirements are met, the way the knowledge is produced is often not digestible for policymakers, especially when it relies on emerging technologies that remain out of their reach. One way of closing this gap is to draw on data visualization expertise to transform the data and knowledge from their raw stage into actionable information. Such expertise is not yet well developed across African countries and needs to be built.
| d |
ea17d7a4-241e-4df7-8440-7ff431877cb2 | The results of this chapter not only support the use of emerging technologies such as RSPs and machine learning techniques to improve agricultural statistics; they also show how these technologies could be leveraged to increase African countries’ preparedness for shocks after COVID-19. The pandemic shows how timely and accurate data are most needed for early action and intervention in the agricultural sector and beyond. Recent technologies must be considered in every part of the data environment, from collection to analysis.
| d |
a511c811-07ba-4fd0-8bed-a0f31d5f42ef | African Development Bank (AfDB). (2020, November). An effective response to COVID-19 impacts on Africa’s aviation sector. https://www.afdb.org/sites/default/files/2020/11 26/afdb_aviation_covid_19_recovery_conference_draft_background_paper_nov2020.pdf.
| d |
beacdfd4-eb1c-4d71-af3b-4c24c842060c | Arvor, D., Jonathan, M., Meirelles, M. S. P., Dubreuil, V., & Durieux, L. (2011). Classification of MODIS EVI time series for crop mapping in the state of Mato Grosso, Brazil. International Journal of Remote Sensing, 32(22), 7847–7871. https://doi.org/10.1080/01431161.2010.531783
| d |
f9547126-7998-48eb-ba0d-ada925069692 | Ayanlade, A., & Radeny, M. (2020). COVID-19 and food security in Sub-Saharan Africa: implications of lockdown during agricultural planting seasons. npj Science of Food, 4, 13. https://doi.org/10.1038/s41538-020-00073-0
| d |
d7fc4991-08ac-47d4-ab67-ae45e05b22dd | Bhatt, R., & Hossain, A. (2019). Concept and Consequence of Evapotranspiration for Sustainable Crop Production in the Era of Climate Change. Advanced Evapotranspiration Methods and Applications, 1–13. https://doi.org/10.5772/intechopen.83707
| d |
ec954467-a856-4618-b43a-94eae559901d | Buetti-Dinh, A., Galli, V., Bellenberg, S., Ilie, O., Herold, M., Christel, S., Boretska, M., Pivkin, I. V., Wilmes, P., Sand, W., Vera, M., & Dopson, M. (2019). Deep neural networks outperform human expert’s capacity in characterizing bioleaching bacterial biofilm composition. Biotechnology Reports, 22, e00321. https://doi.org/10.1016/j.btre.2019.e00321
| d |
4eaf8805-1c58-4c2f-ac4b-28171481dd7f | Christiaensen, L., & Demery, L. (Eds.). (2018). Agriculture in Africa: Telling Myths from Facts. Directions in Development. Washington, DC: World Bank. https://doi.org/10.1596/978-1-4648-1134-0. License: Creative Commons Attribution CC BY 3.0 IGO
| d |
27e24a09-980f-4ebc-989c-4325183655ff | Gao, B.-C. (1996). NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sensing of Environment, 58(3), 257–266. https://doi.org/10.1016/s0034-4257(96)00067-3
| d |
38c218f7-10b2-423d-b803-957b4f25c39a | Huang, J., & Han, D. (2014). Meta-analysis of influential factors on crop yield estimation by remote sensing. International Journal of Remote Sensing, 35(6), 2267–2295. https://doi.org/10.1080/01431161.2014.890761
| d |
5a59783c-d65c-45f2-8765-de72f06be6c7 | IFC. (2020, September 4). COVID-19 Economic Impact: Sub-Saharan Africa. International Finance Corporation. https://www.ifc.org/wps/wcm/connect/publications_ext_content/ifc_external_publication_site/publications_listing_page/covid-19-response-brief-ssa
| d |
ca0e1aa6-e0fe-43cb-b4f3-d0490bd89522 | International Labor Organization. (2020, April). COVID-19 and the impact on agriculture and food security. ILO. https://www.ilo.org/wcmsp5/groups/public/---ed_dialogue/---sector/documents/briefingnote/wcms_742023.pdf
| d |
163ade8c-90a8-4d7c-89d8-cfd28a058742 | International Food Policy Research Institute (IFPRI); International Institute for Applied Systems Analysis (IIASA), 2016, “Global Spatially-Disaggregated Crop Production Statistics Data for 2005 Version 3.2”, https://doi.org/10.7910/DVN/DHXBJX, Harvard Dataverse, V9
| d |
404234a1-4590-45c4-a948-f5fed49bfcc3 | International Food Policy Research Institute, 2020, “Spatially-Disaggregated Crop Production Statistics Data in Africa South of the Sahara for 2017”, https://doi.org/10.7910/DVN/FSSKBW, Harvard Dataverse, V3
| d |
9fe95cb9-b375-439d-8ee9-5442ff47dfa5 | Kpienbaareh, D., Sun, X., Wang, J., Luginaah, I., Bezner Kerr, R., Lupafya, E., & Dakishoni, L. (2021). Crop Type and Land Cover Mapping in Northern Malawi Using the Integration of Sentinel-1, Sentinel-2, and PlanetScope Satellite Data. Remote Sensing, 13(4), 700. https://doi.org/10.3390/rs13040700
| d |
d2baf1f3-9a58-4ff4-baba-adcc736922ce | Leroux, L., Baron, C., Zoungrana, B., Traore, S. B., Lo Seen, D., & Begue, A. (2016). Crop Monitoring Using Vegetation and Thermal Indices for Yield Estimates: Case Study of a Rainfed Cereal in Semi-Arid West Africa. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 9(1), 347–362. https://doi.org/10.1109/jstars.2015.2501343
| d |
68717b63-b719-4594-88ed-b172e7636a69 | Liu, J., Shang, J., Qian, B., Huffman, T., Zhang, Y., Dong, T., Jing, Q., & Martin, T. (2019). Crop Yield Estimation Using Time-Series MODIS Data and the Effects of Cropland Masks in Ontario, Canada. Remote Sensing, 11(20), 2419. https://doi.org/10.3390/rs11202419
| d |
ba36db87-118f-4d7e-9fd1-2d8f370f4859 | Li, Q., Qiu, C., Ma, L., Schmitt, M., & Zhu, X. (2020). Mapping the Land Cover of Africa at 10 m Resolution from Multi-Source Remote Sensing Data with Google Earth Engine. Remote Sensing, 12(4), 602. https://doi.org/10.3390/rs12040602
| d |
afc35549-8e65-46b9-ace9-9ac211a0096c | Pekel, J.-F., Cottam, A., Gorelick, N., & Belward, A. S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540(7633), 418–422. https://doi.org/10.1038/nature20584
| d |
f8108203-d3ba-423e-b892-6008ab98899f | Rasmussen, M. S. (1992). Assessment of millet yields and production in northern Burkina Faso using integrated NDVI from the AVHRR. International Journal of Remote Sensing, 13(18), 3431–3442. https://doi.org/10.1080/01431169208904132
| d |
aaed7f03-0ea3-455c-8255-a8db1337e06c | Rasmussen, M. S. (1997). Operational yield forecast using AVHRR NDVI data: Reduction of environmental and inter-annual variability. International Journal of Remote Sensing, 18(5), 1059–1077. https://doi.org/10.1080/014311697218575
| d |
0df03d2b-6e13-44d6-9b34-b74ac16dacd1 | Rembold, F., Atzberger, C., Savin, I., & Rojas, O. (2013). Using Low Resolution Satellite Imagery for Yield Prediction and Yield Anomaly Detection. Remote Sensing, 5(4), 1704–1733. https://doi.org/10.3390/rs5041704
| d |
002725a6-5f16-4d09-b4d4-bd3878a19f27 | Rezaei, E.E., Ghazaryan, G., González, J., Cornish, N., Dubovyk, O., & Siebert, S. (2020). The use of remote sensing to derive maize sowing dates for large-scale crop yield simulations. Int J Biometeorol. https://doi.org/10.1007/s00484-020-02050-4
| d |
9cbea90c-7bdb-4d97-b74f-a840fb6a6083 | Running, S., Mu, Q., Zhao, M. (2021). MODIS/Terra Net Evapotranspiration 8-Day L4 Global 500m SIN Grid V061 [Data set]. NASA EOSDIS Land Processes DAAC. Accessed 2021-03-16 from https://doi.org/10.5067/MODIS/MOD16A2.061
| d |
1a0963f8-621d-4882-b262-a29cbcbf239c | Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489. https://doi.org/10.1038/nature16961
| d |
f92d4c24-fa91-460a-80aa-eb7c86b5292e | Smith, R. C. G., Barrs, H. D., Steiner, J. L., & Stapper, M. (1985). Relationship between wheat yield and foliage temperature: theory and its application to infrared measurements. Agricultural and Forest Meteorology, 36(2), 129–143. https://doi.org/10.1016/0168-1923(85)90005-x
| d |
ad673139-c7ac-4f99-ad58-a491dc61a893 | Stockholm International Water Institute (SIWI). (2018). Unlocking the potential of enhanced rainfed agriculture (No. 39). https://www.siwi.org/wp-content/uploads/2018/12/Unlocking-the-potential-of-rainfed-agriculture-2018-FINAL.pdf
| d |
0eaa63b8-c672-4e68-a709-b3a5ed1c4ac3 | Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602
| d |
83239cf7-f36f-4ebe-a297-94fdb6502dca | Wan, Z., Hook, S., Hulley, G. (2015). MOD11A2 MODIS/Terra Land Surface Temperature/Emissivity 8-Day L3 Global 1km SIN Grid V006 [Data set]. NASA EOSDIS Land Processes DAAC. Accessed 2021-03-16 from https://doi.org/10.5067/MODIS/MOD11A2.006
| d |
af075eb9-f1da-4dec-b9f2-b3904cb8e2bf | Zeufack, Albert G.; Calderon, Cesar; Kambou, Gerard; Djiofack, Calvin Z.; Kubota, Megumi; Korman, Vijdan; Cantu Canales, Catalina. 2020. "Africa’s Pulse, No. 21" (April), World Bank, Washington, DC. doi:10.1596/978-1-4648-1568-3. License: Creative Commons Attribution CC BY 3.0 IGO.
| d |
e50c1d96-2c28-4435-84b5-a40e0aef1a2b | Transfer learning has received increased attention in recent years because it addresses a problem common to many realistic Natural Language Processing (NLP) tasks: the shortage of high-quality, annotated training data. While different implementations exist, the basic tenet is to utilize available data in a source domain to help train a classifier for a low-resource target domain.
| i |
c1cd05e6-1081-4014-b23f-038c9a22adc1 | An interesting new direction of research leverages the world knowledge captured by pre-trained language models (PLMs) with cloze-style natural language prompts for few-shot classification (e.g., [1], [2]) and regression [3]. These approaches are attractive because they require little to no training, making them especially suitable for low-resource settings.
| i |
aada28be-7287-4e3e-9646-4d5a5678c679 | In this paper, we contribute to this research area by introducing a novel cloze-style approach to Named Entity Recognition (NER), an important task which has not previously been addressed via cloze-style prompts. In its classical setting, i.e., recognizing a small number of entity types in newspaper texts, NER achieves state-of-the-art F1 scores of \(\sim {}95\%\) [1]. This is not necessarily the case, however, for more specialized domains where data is scarcer and annotations cannot easily be provided because they may require expert knowledge, as for example in biomedical texts. With the approach presented here, the expertise of highly trained specialists can be utilized in a different way: by providing representative words for the named entity types rather than having to annotate corpus data.
| i |
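The representative-word idea from the row above can be sketched as a scoring loop: for a candidate entity, build a cloze prompt, ask a masked LM for the probability of each vocabulary word at the [MASK] position, and sum the probabilities of each type's representative words. A minimal sketch follows; the PLM is mocked by a fixed scoring function, and the word lists and entity labels are illustrative, not the paper's exact lists.

```python
# Sketch of cloze-style entity typing: score each entity type by the
# probability mass its representative words receive at the [MASK] slot.
# `mock_mask_probs` stands in for a real masked LM such as BERT.

REPRESENTATIVE_WORDS = {
    "PER": {"man", "woman", "person"},
    "LOC": {"city", "country", "place"},
    "ORG": {"company", "organization", "agency"},
}

def mock_mask_probs(prompt):
    """Stand-in for a masked LM: probability of each word at [MASK]."""
    if "Paris" in prompt:
        return {"city": 0.6, "person": 0.1, "company": 0.05}
    return {"person": 0.5, "city": 0.1}

def classify_entity(candidate, sentence, mask_probs=mock_mask_probs):
    # Cloze prompt of the form "<sentence> <candidate> is a [MASK]."
    prompt = f"{sentence} {candidate} is a [MASK]."
    probs = mask_probs(prompt)
    scores = {
        label: sum(probs.get(w, 0.0) for w in words)
        for label, words in REPRESENTATIVE_WORDS.items()
    }
    return max(scores, key=scores.get)

print(classify_entity("Paris", "I visited Paris."))   # "LOC" under the mock
print(classify_entity("Alice", "Alice spoke first."))  # "PER" under the mock
```

In practice the mock would be replaced by a real fill-mask model, and candidates would come from a POS tagger as described later in the paper.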
a85ff100-82a1-495c-897f-0751c28c346a | The main appeal of our method lies in its simplicity, as applying it to a new domain requires very little effort and technical expertise. Our contribution is three-fold: (1) we introduce a new method for Named Entity Recognition (NER) with a focus on simplicity; (2) our technique scales down to zero-shot, in which case no training is required on top of the PLM; (3) we show how a hybrid combination of our method with a standard classifier, based on a simple threshold, outperforms both of the individual classifiers (Section ).
| i |
b5bcc421-aae3-419e-bb05-64e2605b67bd | The effectiveness of our method is demonstrated by a thorough evaluation comparing different variants of the approach across a number of different datasets (Section ). For reproducibility, we release our code on GitHub: https://github.com/uds-lsv/TOKEN-is-a-MASK
| i |
6d93a210-9105-4456-ab49-4034521a7ba0 | Named entity recognition is a well-studied task in NLP and is usually approached as a sequence-labeling problem, where pre-trained language models such as ELMo [1], BERT [2], RoBERTa [3], and LUKE [4] have brought significant improvements in recent years. All these methods are based on supervised learning, and they do not generalize to new domains in zero- and few-shot settings.
| w |
bdb45be2-30af-4abb-8961-43d93a275499 | Meta-learning, or learning to learn [1], [2], [3], is a popular approach to few-shot learning. In the context of few-shot NER, most applications of meta-learning make use of Prototypical Networks [4], [5], [6] or Model-Agnostic Meta-Learning (MAML) [7]. These approaches require training on diverse domains or datasets to generalize to new domains.
| w |
7da547eb-da03-44fd-8c38-16969ea0a06f | Pre-trained language models have shown impressive potential in learning many NLP tasks without training data [1], [2]. [3] proposed using cloze-style questions to enable masked LMs in few-shot settings to perform text classification and natural language inference tasks with better performance than GPT-3 [4]. As creating cloze-style questions is time-consuming, there have been attempts to automate this process. [5] makes use of the T5 model [6] to generate an appropriate template by filling in a [MASK] phrase, similar to how T5 was trained. Shin et al. (2020) [7] use a template that combines the original sentence to classify with some trigger tokens and a [MASK] token related to the label name; the trigger tokens are learned using the gradient-based search strategy proposed in [8]. In this paper, we extend this PLM prompting technique to named entity recognition.
| w |
d29557de-f6ce-4993-83b1-a9e5ce1fef4c | Our approach consists of two parts. We first describe the base method, which can be used as a stand-alone zero- or few-shot classifier (Section REF ). In Section REF , we then lay out how a simple ensemble method can combine the base method with another classifier to potentially improve over the individual performance of both. We call this setup the hybrid method.
| m |
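The hybrid method described above can be sketched as a confidence-gated fallback: trust the second classifier when it is confident, otherwise fall back to the prompt-based base method. The function names, toy classifiers, and threshold value below are illustrative assumptions, not the paper's exact implementation.

```python
def hybrid_predict(x, base_predict, other_predict_proba, threshold=0.7):
    """Use the other classifier when its confidence clears the threshold,
    otherwise fall back to the prompt-based base method."""
    label, confidence = other_predict_proba(x)
    if confidence >= threshold:
        return label
    return base_predict(x)

# Toy classifiers for illustration only:
base = lambda x: "PER"                                        # base method
other = lambda x: ("LOC", 0.9) if x == "Paris" else ("ORG", 0.3)

print(hybrid_predict("Paris", base, other))  # confident -> "LOC"
print(hybrid_predict("Alice", base, other))  # low confidence -> "PER"
```

The threshold would normally be tuned on whatever small validation data is available in the few-shot setting.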
57fbd903-85a9-4577-95a6-ed8db3f812a6 | We proposed a novel, lightweight approach to NER for zero- and few-shot settings, using pre-trained language models to fill in cloze-style prompts. It extracts information available in PLMs and uses it to label named entity instances identified by a domain-independent POS tagger. Results show that masked language models perform better in this setting than auto-regressive language models. We explored a wide range of possible prompts across different datasets and observed that the proposed method is robust to the contextual details of the prompts. This is of practical significance in low-resource settings, where there is not enough data to tune the model. Our method is simple and general, and can be used to boost the performance of available domain adaptation baselines. We also proposed a hybrid approach that can easily combine the template method with any other supervised or unsupervised classifier, and demonstrated the effectiveness of this hybrid approach empirically.
| d |
8a9cce5f-ec11-45ed-bf2a-97f969cf3998 | Further work could investigate the possibility of fine-tuning templates while having access to only a few training samples. It would also be interesting to explore more sophisticated approaches for combining the predictions of the template model and other few-shot NER baselines. Two aspects of our approach currently require manual intervention: the template and the representative word lists. Finding ways to determine these fully automatically is another interesting direction to explore. As mentioned before, one way to extract representative words is to make use of word embeddings such as GloVe. Indeed, we found that almost all subsets of our representative words perform fairly well in practice. We leave the automatic extraction of representative words, and its evaluation, to future work.
| d |
f629c444-79e2-47fa-bd4b-8862705efd48 | Expanding annotated data for training and evaluation has driven progress in automatic layout analysis of page images. Most commonly, these annotated datasets are produced by manual annotation or by aligning the input documents with the typesetting information in PDF and similar formats [1].
| i |
84fa9b9c-05de-41c6-8bd4-a99e72d22406 | This paper describes methods for exploiting a further source of information for training and testing layout analysis systems: digital editions with semantic markup. Many researchers in archival and literary studies, book history, and digital humanities have focused on digitally encoding books from the early modern period (from 1450) and the nineteenth century [1]. These editions have often employed semantic markup—now usually expressed in XML—to record logical components of a document, such as notes and figures, as well as physical features, such as page and line breaks.
| i |
fec490ba-12ad-4864-8f7f-5da8c54683d8 | Common markup schemes—as codified by the Text Encoding Initiative, EpiDoc, or others—have been mostly used for “representing those features of textual resources which need to be identified explicitly in order to facilitate processing by computer programs” [1]. Due to their intended uses in literary and linguistic analysis, many digital editions abstract away precise appearance information. The typefaces used to distinguish footnotes from body text, for example, and the presence of separators such as horizontal rules or whitespace, often go unrecorded in digital editions, even when the semantic separation of these two page regions is encoded.
| i |
17357037-bb04-4986-9cf2-ae81de3a9423 | After discussing related work on modeling layout analysis (§), we describe the steps in our procedure for exploiting digital editions with semantic markup to produce annotated data for layout analysis. For data and models, see https://github.com/NULabTMN/PrintedBookLayout
| i |
8cb01910-99bd-4cc6-a50d-3e0d08abb683 | First (§), we analyze the markup in a corpus of digital editions for those elements corresponding to page-layout features. We demonstrate this analysis on the Deutsches Textarchiv (DTA) in German and the Women Writers Online (WWO) and Text Creation Partnership (TCP) in English.
| i |
8f00e9f4-f5b1-4415-835d-3302f102b151 | Then (§), we perform forced alignment to link these digital editions to page images and to link regions to subareas on those page images. For the DTA, which forms our primary test case, open-license images are already linked to the XML at the page level; for the WWO, we demonstrate large-scale alignment techniques for finding digital page images for a subset of books in the Internet Archive. For pages with adequate baseline OCR, we also align OCR output with associated page coordinates with text in regions in the ground-truth XML. Some page regions, such as figures, are not adequately analyzed by baseline OCR, so we describe models to locate them on the page.
| i |
33d3fdea-f6c3-4406-8753-f3f4963d1c97 | In experimental evaluations (§), we compare several model architectures, pretrained, fine-tuned, and trained from scratch on these bootstrapped page annotations. We compare region-level detection metrics, which can be computed on a whole semantically annotated corpus, to pixel- and word-level metrics and find a high correlation among them.
| i |
b810bda1-872a-4512-93af-672c64836066 | Perhaps the largest dataset proposed recently for document layout analysis is PubLayNet [1]. The dataset is obtained by matching XML representations and PDF articles of over 1 million publicly available academic papers on PubMed Central. This dataset is then used to train both Faster-RCNN and Mask-RCNN to detect text, title, list, table, and figure elements. Both models use ResNeXt-101-64x4d from Detectron as their backbone. Their Faster-RCNN and Mask-RCNN achieve macro mean average precision (MAP) at intersection over union (IOU) [0.50:0.95] of 0.900 and 0.907, respectively, on the test set.
| w |
fdd6dca0-7cdd-48f1-8d16-c8009df5bf31 | Newspaper Navigator [1] comprises a dataset and model for detecting non-textual elements in the historic newspapers in the Chronicling America corpus. The model is a fine-tuned R50-FPN Faster-RCNN from Detectron2 and is trained to detect photographs, illustrations, maps, comics/cartoons, editorial cartoons, headlines, and advertisements. The authors report a MAP of 63.4%.
| w |
5cb7196b-0bf5-4b31-bfec-7acf52f37669 | U-net was first proposed for medical image segmentation [1]. Its architecture, based on convolutional layers, consists of a down-sampling analysis path (encoder) and an up-sampling synthesis path (decoder) which, unlike in regular encoder-decoders, are not decoupled. Skip connections transfer fine-grained information from the low-level layers of the analysis path to the high-level layers of the synthesis path, as this information is required to accurately generate fine-grained reconstructions. In this work, we employ the U-net implementation P2PaLA (https://github.com/lquirosd/P2PaLA) described in [2] for detection and semantic classification of both text regions and lines. This implementation has been trained and tested on different publicly available datasets: cBAD [3] for baseline detection, and Bozen [4] and OHG [5] for both text region classification and baseline detection. Reported mean intersection over union results are above 84% for region and baseline detection on the Bozen dataset. It is worth noting that this U-net implementation provides a weighted loss function mechanism [6], which can mitigate possible class imbalance problems.
| w |
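The weighted loss mechanism mentioned for P2PaLA can be illustrated as a per-class-weighted cross-entropy, where rare classes (e.g. text regions against a dominant background) receive larger weights. This is a generic sketch under that assumption, not P2PaLA's actual code; the weight values are illustrative (e.g. inverse class frequency).

```python
import numpy as np

def weighted_cross_entropy(probs, targets, class_weights):
    """Weighted cross-entropy over N samples: each sample's log-loss is
    scaled by the weight of its true class, so rare classes contribute
    more, mitigating class imbalance."""
    eps = 1e-12
    # probs: (N, C) predicted class probabilities; targets: (N,) int labels
    picked = probs[np.arange(len(targets)), targets]
    weights = class_weights[targets]
    return float(-(weights * np.log(picked + eps)).sum() / weights.sum())

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
targets = np.array([0, 1, 1])
# Up-weight the (hypothetically rarer) class 1 by a factor of 3:
loss = weighted_cross_entropy(probs, targets, np.array([1.0, 3.0]))
print(round(loss, 4))
```

With uniform weights this reduces to the ordinary mean cross-entropy.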
69af9fb3-9159-492a-a5e9-c87645ca767b | Kraken, an OCR system forked from Ocropy, uses neural networks to perform both document layout analysis and text recognition. (See http://kraken.re and https://github.com/ocropus/ocropy.) For pixel classification in layout analysis, Kraken's network architecture was designed to require less memory than U-net. Roughly, it comprises down-sampling convolutional layers with an increasing number of feature maps, followed by BLSTM blocks that process those feature maps in both horizontal and vertical directions [1]. The final convolutional layer, with a sigmoid activation function, outputs probability maps of regions and text lines. Kraken's model for baseline detection has been trained and tested on the public BADAM dataset [2] and also on the same datasets as P2PaLA. For region detection, Kraken obtained mean intersection over union figures of 0.81 and 0.49 on the Bozen and OHG datasets, respectively.
| w |
4b0121fe-0afe-4355-93b4-4aa24361e415 | Several evaluation metrics are commonly employed for document layout analysis. The Jaccard index, also known as intersection over union (IoU), is one of the most popular pixel-level evaluation measures used in competitions on document layout analysis organized at ICDAR, such as [1], [2]. Likewise, this measure has also served as a criterion for deciding when there is a match between detected objects and their references, as in [3].
| w |
146d8660-55be-46f0-b110-d29099eee296 | Having produced alignments between ground-truth editions and page images at the pixel level for the DTA and at the page level for a subset of the WWO, we train and benchmark several page-layout analysis models on different tasks. First, we consider common pixel-level evaluation metrics. Then, we evaluate the ability of layout analysis models to retrieve the positions of words in various page regions. In this way, we aim for a better proxy for end-to-end OCR performance than pixel-level metrics, which might simply capture variation in the parts of regions not overlapping any text. Then, we evaluate the ability of layout models to retrieve page elements in the full dataset, where pixel-level annotations are not available but the ground-truth provides a set of regions to be detected on each page. We find a high correlation of these region-level and word-level evaluations with the more common pixel-level metrics. We close by measuring the possibilities for improving accuracy by self-training and the generalization of models trained on the DTA to the WWO corpus.
| m |
5a7c00fa-a648-4b83-82fe-9cc9e4a8a7f7 | We found that several broad-coverage collections of digital editions can be aligned to page images in order to construct large testbeds for document layout analysis. We manually checked a sample of regions annotated at the pixel level by forced alignment. We benchmarked several state-of-the-art methods and showed a high correlation of standard pixel-level evaluations with word- and region-level evaluations applicable to the full corpus of a half million images from the DTA. We publicly released the annotations on these open-source images at https://github.com/NULabTMN/PrintedBookLayout. Future work on these corpora could focus on standardizing table layout annotations; on annotating sub-regions, such as section headers, poetry, quotations, and contrasting typefaces; and on developing improved layout analysis models for early modern print.
| d |
cf5835be-da14-47b8-88cb-5f8b148ef781 | Suicide has been identified as one of the leading causes of death, and approximately \(1.5\%\) of people die by suicide every year. Despite years of clinical research on suicide, researchers have concluded that suicide cannot be predicted using the standard clinical practice of asking patients about their suicidal thoughts. Recent work discusses the opportunities of using social media combined with natural language processing (NLP) techniques to complement traditional clinical records and help in suicide risk analysis and early suicide intervention.
| i |
0eb198be-d84d-4940-865d-d7a8a37e272a | To facilitate further research on automatic suicide risk assessment, a shared task was proposed in which user data were collected from the r/SuicideWatch subreddit and annotated with user-level suicide risk: no-risk, low-risk, medium-risk, and high-risk. Comparing the results of the participating teams in this shared task suggests that one of the major challenges comes from insufficient data for the intermediate suicide risk levels (i.e., low risk and medium risk) rather than the extreme risk levels (i.e., no risk and high risk). It was also found that a dual BERT-LSTM-Attention model that separately extracts information from both SuicideWatch and non-SuicideWatch posts, together with feature engineering that includes emotion features, personality scores, and the user's anxiety and depression scores, is important for model performance.
| i |
f6115bc6-df3a-4232-8922-4a2dc017797b | In this paper, instead of feature engineering or complex model architectures, we explore whether weakly supervised methods and data augmentation techniques grounded in clinical psychology research can help improve model performance. We explore several weakly supervised methods and show that a simple approach based on insights from clinical psychology research obtains the best performance. This model uses pseudo-labeling (PL) on data from the subreddits r/Anxiety and r/depression, which are considered important risk factors for suicide. We also present a potential application of our model for studying suicide risk among people who use drugs, opening the door to using NLP methods to deepen our understanding of the relationship between opioid use disorder (OUD) and mental health. The code for this paper can be found at https://github.com/yangalan123/WM-SRA.
| i |
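The pseudo-labeling step described above can be sketched as a simple confidence filter over unlabeled posts: keep a model's predictions on unlabeled data only when it is confident, and add those examples to the training set. The function names, the toy confidence function, and the threshold are illustrative assumptions, not the paper's exact pipeline.

```python
def pseudo_label(model_predict_proba, unlabeled_posts, threshold=0.9):
    """Keep only confident model predictions on unlabeled posts
    (e.g. from r/Anxiety or r/depression) as extra training data."""
    augmented = []
    for post in unlabeled_posts:
        label, confidence = model_predict_proba(post)
        if confidence >= threshold:
            augmented.append((post, label))
    return augmented

# Toy scoring function for illustration only:
toy = lambda post: ("b", 0.95) if "anxious" in post else ("a", 0.4)
extra = pseudo_label(toy, ["I feel anxious", "hello world"])
print(extra)  # only the confident example survives the filter
```

The augmented pairs would then be mixed into the original labeled data before retraining.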
95eeb501-120f-4dcb-a4cc-d4d1e56a1747 | We focus on Task A from the CLPsych 2019 shared task “Predicting the Degree of Suicide Risk in Reddit Posts”. The goal of the task is to predict a user-level suicide risk category based on the user's posts in the r/SuicideWatch subreddit. Specifically, a user \(u_{i}\) is associated with a collection of \(n(i)\) posts \(C_i=\lbrace x_{i, 1}, x_{i, 2}, \dots , x_{i, n(i)}\rbrace \) , where each post \(x_{i, k}\) \((1\le k \le n(i))\) has \(m(i, k)\) sentences \(x_{i, k} = [s_{ik, 1}, s_{ik, 2}, \dots , s_{ik, m(i, k)}]\) . We need to predict \(y_i \in \lbrace a, b, c, d\rbrace \) from \(C_i\) , where \(a, b, c, d\) represent no-risk, low-risk, medium-risk, and high-risk, respectively. In the original dataset, there are 496 users in the training set and 125 users in the test set. We further split off 100 users from the training set to create a validation set. The sizes of the train/valid/test sets are 746, 173, and 186, respectively.
| m |
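The user-level formulation above maps naturally onto a simple data layout: each user holds a list of posts, and the target is one of the four risk labels \(a\)–\(d\). The sketch below uses toy data and a placeholder classifier purely to show the shapes involved; a real model would encode the posts with a neural network.

```python
# Sketch of the task's data layout: user -> list of posts, target in {a,b,c,d}.
RISK_LABELS = {"a": "no-risk", "b": "low-risk", "c": "medium-risk", "d": "high-risk"}

users = {
    "u1": ["I can't sleep anymore.", "Nothing helps."],  # C_1, n(1) = 2 posts
    "u2": ["Looking for advice for a friend."],          # C_2, n(2) = 1 post
}

def predict_user(posts):
    """Placeholder user-level classifier: a real model would encode all of a
    user's posts before predicting; here we key on post count for illustration."""
    return "c" if len(posts) > 1 else "a"

predictions = {uid: predict_user(posts) for uid, posts in users.items()}
print({uid: RISK_LABELS[y] for uid, y in predictions.items()})
```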
445a2bcc-b271-4371-9d26-3bd8e212a7d3 | In recent years there has been great progress in the area of generative models. In particular, Generative Adversarial Networks (GANs) [1] introduced a radically new approach to training a generative model. This formulation captures high-dimensional and complex distributions efficiently using a minimax game between two neural networks: one that generates samples and one that evaluates them as real or fake. In this game, and for image data, the generator learns a mapping from a low-dimensional space to the space of images, allowing one to sample from the high-dimensional image distribution by sampling from a lower-dimensional one. These models have been applied to many computer vision tasks, such as human brain decoding [2] and realistic image synthesis [3]. Furthermore, in tasks where data are limited (e.g., biomedical datasets), they can be used for data augmentation.
| i |
5485817d-2b49-4389-bc2e-b40857585d36 | One major limitation of current deep neural network models is the lack of a mechanism for knowledge transfer between tasks. Transfer learning aims to address this limitation by suggesting novel ways of transferring experience between different tasks [1]} or even models [2]}. However, these works focus on transferring a network's knowledge with respect to its ability to map images to a number of categories. Recently, the authors of [3]}, following a Teacher-Student approach, attempted to transfer the attention of a large network to one with fewer parameters by incorporating loss functions between the Teacher's and the Student's intermediate layers.
| i |
515b8167-feda-4751-8fb5-11de7718b56b | It has also been observed that intermediate layers of CNNs act as weak object detectors [1]}, [2]}, [3]}. This occurs naturally, as the filters, especially in higher layers, tend to respond to object-level patterns. The authors of [4]}, inspired by this observation, showed that global average pooling layers can be used to create Class Activation Maps (CAMs).
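As an illustration of how a CAM is formed from the feature maps that feed a global-average-pooling classifier (following the general recipe of [4]}; all shapes and values below are illustrative, not from the paper):

```python
import numpy as np

def class_activation_map(features, weights, class_idx):
    """Compute a CAM as the class-weighted sum of the final conv
    feature maps: CAM_c = sum_k w_{c,k} * F_k.

    features : (K, H, W) feature maps before global average pooling
    weights  : (C, K) weights of the linear classifier after pooling
    """
    cam = np.tensordot(weights[class_idx], features, axes=([0], [0]))
    # Normalise to [0, 1] for visualisation.
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

rng = np.random.default_rng(0)
feats = rng.random((8, 7, 7))   # K=8 maps of spatial size 7x7
w = rng.random((4, 8))          # C=4 classes
cam = class_activation_map(feats, w, class_idx=2)
```

The resulting map is then typically upsampled to the input resolution and overlaid on the image to highlight the discriminative regions.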
| i |
c4b121e9-f518-429c-89d2-b03fd616d782 | In this work, we focus on the GAN discriminator's inability to locate regions of interest while learning to discriminate real from fake images. Specifically, we investigate whether a discriminator uses patterns within areas of interest of an image. Our findings indicate that the discriminator in a regular GAN fails to locate the object efficiently, thus evaluating regions that are often of little interest. In this context, we propose a novel formulation for training GANs that uses a Teacher network to help the discriminator learn where to pay attention. Results indicate that our method both improves the quality of the generated images and provides weak localization of the objects in the generated images.
| i |
64a746c2-e0ee-4c56-b8d5-9241bd425905 | Here we present experimental results demonstrating the quality of the generated images as well as the ability of the proposed scheme to provide both realistic new samples and weak annotations. To construct the Teacher network, we used ResNet-18 [1]}, which we trained on the task of HEp-2 cell image classification using the dataset presented in [2]}. This network achieved remarkable performance on the HEp-2 cell image dataset [3]} (76.28% accuracy), surpassing human-level performance (73.3%) and approaching the top performance (78.1%) achieved in [4]}. This network was also able to locate the object of interest in images efficiently, as shown in Figure REF .
| r |
3ff17575-e049-4152-969f-8726ff99dcf1 | We then used this Teacher network to train our discriminator by introducing one extra cost term in the discriminator, as shown in Equation REF , during the minimax game between the generator and the discriminator. In Figure REF we provide some generated images on the left side and some real images on the right side. Our approach clearly generates very realistic images and, most importantly, we verified that incorporating an attention loss does not negatively affect the quality of the generated images.
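Since Equation REF is not reproduced in this excerpt, the following is only a plausible sketch of such a combined objective: adversarial cross-entropy plus a mean-squared error between the discriminator's and the Teacher's attention maps. The weighting `lam` and the MSE choice are assumptions, not the paper's exact loss.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over a batch of probabilities."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def attention_loss(student_maps, teacher_maps):
    """MSE between the discriminator's and the Teacher's attention maps."""
    return np.mean((student_maps - teacher_maps) ** 2)

def discriminator_loss(d_real, d_fake, att_student, att_teacher, lam=0.5):
    """Adversarial loss plus a weighted attention-matching term."""
    adv = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
    return adv + lam * attention_loss(att_student, att_teacher)
```

In this sketch, minimizing the extra term pulls the discriminator's attention toward the Teacher's, while the adversarial term is unchanged.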
| r |
71e3f54a-83bf-4b34-970b-cfea6a4a2a5b | Regarding the ability of the proposed scheme to perform weak localization, in Figure REF we provide some Soft-CAMs from both generated and real images. Compared to Figure REF , our modified minimax game significantly improved the attention of the discriminator.
<FIGURE> | r |
45523fd2-766a-4bc1-b2bc-e4e36cf8508d | Moreover, as we are interested in the ability of the generator to produce realistic images and of the discriminator to perform weak object localization, we provide some input-output pairs from both real and generated images. The results in Figure REF verify that the proposed scheme can be used both to generate images and to provide weak annotations.
<FIGURE> | r |
f188524c-e05a-41ae-8948-0267eea08a54 | In this work we proposed a method that allows the discriminator network of a GAN to perform weak localization of the objects of interest. To achieve this, we proposed a Teacher-Student learning scheme as well as a novel type of Soft-Class Activation Map. This scheme allows the discriminator to generate weak annotations of the generated images, which can be used for their automatic annotation. In the future we plan to apply this scheme to datasets designed for object detection in order to generate both images and weak annotations. We plan to make the code publicly available after the publication of this work.
| d |
501a4f3e-a444-4f7a-b25f-8f4ef890e924 | There has been a proliferation of laser scanners and three-dimensional (3D) cameras in domains as varied as mobile robotics [1]}, autonomous vehicles [2]} and advanced manufacturing [3]}, alongside traditional domains such as land surveying. These 3D scanners measure the 3D locations of points on objects and store them as point cloud data. Analyzing point cloud data is a fundamental task in mapping and navigation applications. For example, urban and indoor environments usually comprise a large number of planar surfaces. This planar information can be collected as 3D point clouds via light detection and ranging (LiDAR) techniques. Identifying planes in 3D point clouds is a non-trivial task in the presence of multiple inlier structures and contamination of the observed data with noise [4]}.
| i |
3000ccc8-2996-4689-8fd3-c9e2a64d43db | Generally speaking, there are two disparate domains which utilize 3D point clouds and perform plane identification. While Photogrammetry utilizes plane fitting methods such as region growing [1]}, the 3D Hough transform [2]} and RANSAC [3]} to cluster points belonging to an actual planar surface, Computer Vision typically relies on per-point normal estimation computed from neighboring points in local planar neighborhoods [4]}. In this research, we take the view that computing the per-point normal is a lower-level function whose output can then be clustered to derive the planar surfaces, as detailed in [5]}.
| i |
6f193361-5574-4303-8739-cf978efaa91d | Given that 3D point cloud data is inherently unstructured, our proposed approach is built on graphs, which provide a principled way of connecting adjacent points globally and of encoding relationships between them based on given edge characteristics. Although graphical formulations of 3D point clouds have been attempted in some previous studies [1]}, [2]}, primarily for the segmentation of ground and objects, these studies do not provide a rigorous formulation of normal estimation using graphical approaches. On the other hand, normal estimation via multi-model fitting has been attempted in an optimization setup [3]}, which can be shown to be a specific instance of our proposed approach.
| i |
9ab0ca64-166b-43ae-9fea-15d3e0312a12 | In order to develop a fast and robust extraction algorithm, we analyze point clouds through a variant of proximity graphs, the k-nearest neighbor graph (k-NNG) [1]}. The plane extraction problem can then be formulated as the problem of finding the latent structure in the graph. This stems from the underlying assumption that the data samples are aggregated on a low-dimensional manifold and that this manifold can be represented by its discrete conjugate, the graph [2]}. Thus, instead of finding a handful of planar surfaces by segmentation, as most other approaches do, we find the normal of each point in the point cloud conditioned on the graph. Inspired by analogous research on mining low-dimensional structure from high-dimensional data using robust principal component analysis (rPCA) [3]}, we impose a graph smoothness constraint on the samples.
| i |
12e93c55-68e6-47cf-8e1c-6894a7bf9b15 | We further extend the proposed formulation to a weighted version to improve the estimation accuracy of normals at points near the boundary between two distinct planar surfaces. In the weighted version, we estimate the normals by formulating an optimization problem with a weighted loss function and update the weights iteratively. Different weighting strategies are chosen intuitively: the dot product of the estimated normals of neighboring points; the inverse of the distance between neighboring points; and the product of the two. The weights provide a discriminative feature in which preference is given to closer points with similar normal vectors.
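A rough sketch of such a weighted scheme, assuming a simple iteratively reweighted PCA over k-NN neighbourhoods rather than the paper's exact graph-smoothness optimization (the brute-force neighbour search and the initialisation are illustrative choices):

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbours of every point (brute force)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]  # column 0 is the point itself

def estimate_normals(points, k=8, iters=3):
    """Per-point normals via weighted PCA over k-NN neighbourhoods.
    Weights = |normal dot product| x inverse distance, updated iteratively."""
    nbrs = knn_indices(points, k)
    n = len(points)
    normals = np.tile([0.0, 0.0, 1.0], (n, 1))  # arbitrary initialisation
    for _ in range(iters):
        new = np.empty_like(normals)
        for i in range(n):
            q = points[nbrs[i]]
            dist = np.linalg.norm(q - points[i], axis=1) + 1e-9
            sim = np.abs(normals[nbrs[i]] @ normals[i]) + 1e-9
            w = sim / dist                      # favour close, similar points
            diffs = (q - points[i]) * np.sqrt(w)[:, None]
            cov = diffs.T @ diffs
            # The normal is the eigenvector of the smallest eigenvalue.
            _, vecs = np.linalg.eigh(cov)
            new[i] = vecs[:, 0]
        normals = new
    return normals
```

On a clean planar patch this reduces to ordinary PCA; near a crease, the similarity term downweights neighbours from the other plane, which is the intended effect of the weighting.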
| i |
d8178ffb-233f-43c2-8e8e-793125962004 | The rest of the paper is organized as follows. In Section , we introduce related work. In Section , the proposed method for normal estimation is elaborated. Section demonstrates the effectiveness of the method using a small simulated dataset composed of points sampled from three orthogonal planar surfaces, as well as a large-scale synthetic plane estimation benchmark dataset, SynPeb [1]}. A comparison between the proposed method and other benchmark methods is also provided. The paper ends with a conclusion in Section .
| i |
12c40392-496f-4634-8758-54c397df6cb7 | The dichotomy of token-based and account-based models for payment systems is often discussed in the literature around blockchain and digital currencies [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, including central bank digital currency (CBDC) and crypto-assets. In the context of digital currencies, there is a range of different and contrasting interpretations of the terms "tokens" or "token-based systems." In particular, the term "tokens" is often used to refer to designs or applications that are not necessarily linked to the concept of token-based systems.
| i |
cf132c26-c08c-4656-93a1-14b0aaba5379 | Most drastically, the terminology regarding tokens in the cryptocurrency community, including the concept of tokenisation, usually refers to implementations that are unambiguously account-based systems.
For example, the Ethereum community proposed a standard for fungible units of value termed "tokens," which was introduced in the Ethereum Whitepaper [1]}. The adopted standard, widely known by its proposal identifier ERC-20, is arguably the primary reference point for the concept of tokens on Ethereum and other public blockchains today. Although called tokens, ERC-20 tokens are recorded in the form of account balances (under account addresses) in the smart contract copies hosted on the replicated databases of the public blockchain, not stored as digital "objects" in the user's software wallet. The wallet only stores the private key used to sign instructions sent to the smart contract. In essence, this is an account-based system.
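The account-based nature of such a ledger can be made concrete with a toy sketch (plain Python, not Solidity, and not the full ERC-20 interface; the class and method names here are illustrative): balances live in a contract-held mapping from address to amount, and a "transfer" is an instruction to update that mapping.

```python
class MiniToken:
    """Toy sketch of an ERC-20-style fungible token ledger. Balances are
    entries in a contract-held mapping, not objects in user wallets."""

    def __init__(self, supply, owner):
        self.balances = {owner: supply}  # address -> balance

    def balance_of(self, addr):
        return self.balances.get(addr, 0)

    def transfer(self, sender, to, amount):
        # In practice 'sender' is authenticated via a signature made
        # with the private key held in the user's wallet.
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balance_of(to) + amount

token = MiniToken(supply=100, owner="alice")
token.transfer("alice", "bob", 30)
```

Nothing moves from wallet to wallet here; the ledger entries are simply rewritten, which is exactly the account-based pattern described above.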
| i |
800137dc-5b55-42cb-bf52-736b13bbdb1e | In the case of Bitcoin [1]}, the system exhibits characteristics partially of token-based and partially of account-based systems. Bitcoin handles record-keeping using a format known as "unspent transaction outputs" (more commonly referred to as UTXOs), a data structure sharing many similarities with the objects in a token-based system. On the other hand, a Bitcoin address is in essence an account, and the private key associated with the address is the proof of identity needed to transact from that account. Funds are not stored as objects in user software wallets but recorded on a ledger (the public blockchain of Bitcoin), and the process of moving funds requires intermediation through the collective work of Bitcoin miners [2]}. Some researchers therefore hold the view that Bitcoin can be both token-based and account-based, and suggest that "future classifications could modify the definitions of the terms account-based and token-based to more clearly distinguish them" [3]}.
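A toy sketch of UTXO bookkeeping may make the contrast concrete (heavily simplified: no scripts, signatures, or fees, and all identifiers are illustrative): spending consumes referenced outputs whole and creates new ones, and an address's balance is just the sum of its unspent outputs on the ledger.

```python
# Toy UTXO ledger: utxo_set maps (txid, output_index) -> (owner, amount).
utxo_set = {("tx0", 0): ("alice", 50)}

def balance(addr):
    """An address's balance is the sum of its unspent outputs."""
    return sum(amt for owner, amt in utxo_set.values() if owner == addr)

def spend(txid, inputs, outputs):
    """Consume the referenced outputs whole and create new ones; any
    difference returned to the payer is the 'change' output."""
    total_in = sum(utxo_set.pop(ref)[1] for ref in inputs)
    if total_in < sum(amt for _, amt in outputs):
        raise ValueError("outputs exceed inputs")
    for idx, (owner, amt) in enumerate(outputs):
        utxo_set[(txid, idx)] = (owner, amt)

# Alice pays Bob 20 out of a 50-unit output: 20 to Bob, 30 back as change.
spend("tx1", [("tx0", 0)], [("bob", 20), ("alice", 30)])
```

Even in this sketch, the "objects" never leave the shared ledger: the wallet holds only keys, and the transfer is a rewrite of ledger entries, which is the basis of the account-based reading argued in this article.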
| i |
a6bad57b-bec1-44ff-9140-faeec361959e | This article discusses why UTXOs should be seen as an account-based arrangement according to the classical economic notions of tokens and accounts [1]}, [2]}, [3]}. The distinction between token-based and account-based systems is well entrenched in payment economics: for a transaction to be deemed satisfactory in an account-based system, the payer has to be identified as the holder of the account from which the payment will be made, whereas, in a token-based system, what needs to be identified is the genuineness of the object being transferred. Through explaining the details of transaction processing in Bitcoin's UTXO, this article discusses why a UTXO does not fulfil this definition of token-based systems but is closer to the definition of account-based systems. It also explains why achieving purely peer-to-peer, decentralised exchanges without any intermediation by third parties in the digital domain is difficult. A comparison between UTXO-based systems and account-based systems is presented. Finally, a suggestion is made on the defining features of digital tokens (i.e. the data structure used to represent the system state) that may be used to create a new taxonomy suitable for distinguishing digital token-based systems from other arrangements.
| i |
56c60c6f-cf1d-40ae-bb36-d712a52b9611 | The key contributions of this article are two-fold. First, a detailed exposition of the design of a UTXO and a discussion of whether it should be classified as account-based are given with well-grounded justifications. Second, an extension of the definition of token-based systems based on the global system state representation of the respective record system is proposed, which neatly distinguishes token-based from account-based systems. The resulting taxonomy covers both physical and digital token-based systems.
| i |
25945c22-ee04-4214-b9f7-adb669fc28e7 | This article is organised as follows. Section discusses the classical distinction between tokens and accounts in payment economics. Section explains why the economic notion of tokens is difficult to achieve in the digital domain. Section presents the design of a UTXO and the transaction process of Bitcoin, highlighting the differences between UTXO-based and account-based implementations of blockchain. A discussion of why UTXO should be viewed as account-based is given in Section , with implications and suggestions for a new way to distinguish token-based and account-based systems given in Section .
| i |
969a553a-0028-4e6b-9d8f-d3fb5f086180 | The term “tokens” is widely used in the discussion of digital currencies and crypto-assets but subject to different interpretations. On the other hand, the distinction between token-based and account-based systems is well established in payment economics. Through a detailed exposition of the design of UTXO-based systems such as Bitcoin, this article discusses why UTXO-based systems should be seen as account-based systems. Understanding this reality would have practical implications on anonymity and system interoperability. In addition, a comparison of UTXO-based systems and account-based systems is given, with a discussion on a new taxonomy which classifies UTXO-based systems as token-based systems. The proposed definition of token-based systems based on their global/system state representation, regardless of whether a record system is required, is an extension of the classical economic notion of tokens which covers both physical and digital tokens while neatly distinguishing token-based and account-based systems.
| d |
7c36dfbd-b54e-4475-b025-e4e4587b63c7 | Machine Learning, one of the subfields of Artificial Intelligence, has applications in various fields including Economics, Medicine [1]}, Cosmology [2]}, Particle Physics [3]}, Robotics [4]}, etc. The machine learns and builds a model, without explicit programming, from the datasets we have collected and preprocessed, and we compare the modeled data with the real data. Thus, we can see how accurately the machine has modeled the data.
Artificial Neural Networks, a subset of Machine Learning designed to predict the responses of complex systems, are inspired by the natural neural networks of living things. One of the most famous classes of neural networks is Recurrent Neural Networks (RNNs), which function in a way close to the human brain.
| i |
f9b36b1c-097d-4d9a-8312-ce7acdbccd97 | We know that the largest market in the field of energy belongs to oil companies. In the oil sector, there are large companies around the world with very high impact. In the world economy, oil can be considered the most vital factor because, for example, if the export or import of oil is sanctioned, a country's economy can be practically paralyzed, especially for countries with oil-dependent economies, e.g. the Persian Gulf countries. The stock indices of oil companies are among the most important indicators in the global stock market, and there are correlations between the shares of oil companies and gold, the US dollar, and crude oil.
| i |
da17b69d-0841-43d7-b006-8c082d10111f | Although neural networks are not interpretable and cannot explain these correlations well, they can be used for learning and modeling. The LSTM is one of the most powerful architectures among Recurrent Neural Networks; it has solved the problem of vanishing gradients in Recurrent Neural Networks and can help us predict the stock market more accurately. The oil stock market behaves like an unstable dynamic system with many non-linear correlations, and researchers have proposed various methods to predict the behavior of this system.
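For reference, a single LSTM time step can be sketched in NumPy as below (an illustrative cell, not the trained model used in this paper; the feature choice in the comment is hypothetical). The additive cell-state update is what mitigates the vanishing-gradient problem of plain RNNs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell state
    c = f * c_prev + i * g       # additive cell update (eases gradient flow)
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
D, H = 3, 4   # e.g. 3 input features (price, gold, dollar), 4 hidden units
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):               # unroll over a short time window
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

In a forecasting setup, the final hidden state `h` would feed a small output layer that predicts the next price.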
| i |
75580332-0892-46ee-9891-1e0a07c7c2c9 | To review the literature along the oil research path, we can mention the work of Alvarez-Ramirez et al. [1]}, which analyzed the auto-correlations of international crude oil prices.
In 2005, after carrying out several tests, Moshiri and Foroutan [2]} concluded that oil stock markets have a recursive structure because they are time series. They used three methods, ANN, GARCH, and ARMA, and the best results came from the ANN method.
| i |
6ac68bf6-677e-4f28-b286-fb9b5561b918 | The author of [1]} published an article using multilayer neural networks that examined the relationship between crude oil futures prices and spot prices.
Moreover, they showed that the futures prices of crude oil contain new information about spot price discovery.
Ye et al. [2]} studied the changing relationships between the price of crude oil and several other factors from January 1992 to December 2007 using a short-run crude oil price forecast model.
| i |
80db1487-1460-42e7-a096-d21e73171ccd | Chen and colleagues developed a model based on deep learning and used it to model the unknown non-linear behavior of WTI stocks [1]}.
Qi, Khushi, and Poon used different recurrent neural network architectures, including LSTM, GRU, BiLSTM, and vanilla RNN, to model the Forex market and obtained significant results from these models in predicting several currency pairs. They used a database related to the Elliott method, one of the stock market forecasting methods [2]}.
| i |
e4113196-1b09-47a4-a974-fc4c152f71e2 | In 2018, Gupta and Pandey predicted crude oil prices using an LSTM network [1]}, and following that, Cen and Wang applied deep learning algorithms to anticipate the volatility of crude oil prices [2]}. To address the chaotic and nonlinear features of crude oil time series, Altan and Karasu proposed a new crude oil price prediction model which includes long short-term memory (LSTM) and technical indicators [3]}.
| i |
fb5ab463-41be-4b12-9b02-ec016d061180 | In 2017, Arfaoui and Rejeb published an article examining the effects and relationships among stock markets, oil, the dollar, and gold based on global market evidence.
They concluded that oil prices are significantly affected by stock markets, gold, and the dollar, and that there are always indirect effects, which also confirms the presence of correlations in the global market [1]}. In this paper, we examine this correlation feature and the relationships of stocks with the dollar, crude oil, gold, and the stock indices of major oil companies; we create datasets and compare the forecast results with real data.
| i |
68d3c858-767f-4d78-9eb3-748dcc936b7c | The paper is organized as follows: Section II is dedicated to an analysis of the correlation between different shares and economic stocks. In Section III, we apply the LSTM architecture to predict oil prices. Finally, we conclude and summarize the main results in Section IV.
| i |
0e3f08b5-df3f-445a-9a9b-132e1022054d | First, we studied the correlation coefficients between oil companies' stocks, the dollar, crude oil, and gold.
In Sec. (REF ), we found that there is a correlation between the shares of the oil companies in question and the gold, dollar, and crude oil indices. Moreover, we found that the stock indices of these oil companies have a weaker correlation with the dollar, gold, and crude oil, but a strong correlation with each other.
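The correlation analysis can be sketched with the standard Pearson coefficient (the series below are toy values, not the actual stock data):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy series: one stock that tracks another closely vs. unrelated noise.
oil_a = [10, 11, 13, 12, 15]
oil_b = [20, 22, 27, 25, 31]   # moves together with oil_a
noise = [5, 1, 4, 2, 3]
```

A coefficient near 1 (as between `oil_a` and `oil_b`) mirrors the strong inter-company correlations reported above, while a value near 0 mirrors the weaker links to the other indices.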
| d |
d5164cd4-21cc-44bb-b374-c5c39cc824a3 | Second, to predict the stocks of different companies, we used Recurrent Neural Networks with the LSTM architecture, because these stocks evolve as time series.
We carried out empirical experiments on the stock index datasets to evaluate prediction performance in terms of several common error metrics: MSE, MAE, RMSE, and MAPE. Let us summarize the results:
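For reference, the four error metrics can be computed as follows (the values shown are toy numbers, not the paper's results; MAPE as defined here assumes non-zero targets):

```python
from math import sqrt

def mse(y, yhat):
    """Mean squared error."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root mean squared error."""
    return sqrt(mse(y, yhat))

def mape(y, yhat):
    """Mean absolute percentage error (targets must be non-zero)."""
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

actual = [100.0, 110.0, 120.0]
pred = [98.0, 112.0, 117.0]
```

Lower values on all four metrics indicate a better fit of the model's predictions to the real series.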
| d |
ed96d23d-7016-45fe-bf87-baf253ea688c |
In table (REF ), we saw that adding the WTI (crude oil), gold, and dollar indices does not improve the model's predictions and does not reduce the error metrics; the four main features alone yield the lowest values of the cost functions. The empirical results for BP are shown in Fig. (REF ):
<TABLE><FIGURE>
In table (REF ), we saw that adding the crude oil and dollar indices did not reduce the cost function; however, when the machine learned the gold index during training, it was able to measure better, reduce costs significantly, and help us improve the modeling, as shown in the Carien Energy diagrams in Fig. (REF ).
<TABLE><FIGURE>
Table (REF ) shows that the WTI and gold indices do not improve the model during training and increase the cost function, but the dollar index reduced the cost function and thus improved the modeling, as shown in Fig. (REF ).
<TABLE><FIGURE>
In table (REF ), we see that the WTI index increased the value of the cost function, while adding the gold and dollar indices made almost no change to the learning process. The diagrams for Schlumberger are shown in Fig. (REF ).
<TABLE><FIGURE>
In Table (REF ) for Total shares, we see that none of the WTI, dollar, or gold indicators improved the learning process, as can be seen in Fig. (REF ).
<TABLE><FIGURE>
| d |
1e42af51-163f-4da9-81ce-cb298baa4a0a | Overall, the above results show that adding the gold, dollar, and WTI indices to the training data for each oil company's stock does not, in general, improve modeling and forecasting: RNNs lack interpretability, and correlated data does not necessarily improve the predictions of RNN models.
| d |
4b6cf9ec-00da-403e-959c-238d6c87542d | One of the main applications of examining different correlations and identifying indirect factors to improve the learning process is modeling and forecasting in the fundamental analysis of the stock market, where such factors have an indirect impact on the future of the market.
| d |
39f54c83-3863-4baa-9b1c-3625d15cdbd8 | Human beings and animals are naturally able to memorize information presented in a sequence [1]}; on the contrary, Artificial Neural Networks (ANNs) learning from a non-i.i.d. stream of data incur Catastrophic Forgetting [2]}, [3]}. Continual Learning (CL) [4]}, [5]} aims at designing methods that compensate for this issue and facilitate the retention of previous knowledge, either by means of regularization [6]}, [7]}, architectural designs [8]}, [9]}, or (pseudo-)replay of past data [3]}, [11]}, [12]}.
| i |
5b3f77ef-7906-47c9-bde2-ecb42589dc3e | The onset of catastrophic forgetting is ascribed to the tendency of models to rewrite their hidden representations as they adjust their parameters to best fit an input distribution that changes over time [1]}. However, McRae & Hetherington highlight a meaningful difference between the way humans and ML models learn from a sequence of data: whenever human subjects are evaluated on their ability to memorize a sequence of concepts, they start out already possessing a large body of knowledge [2]}.
In other words, humans are generalists that can anchor novel data in the context of previous knowledge, while ANNs must specialize on a limited pool of data at each time without any additional reference.
| i |
d7528632-f4ee-408e-9d2e-2b5201debcc1 | An obvious choice for bridging this gap is pre-training the models on a large amount of available off-the-shelf i.i.d. data, leading to a better initialization for the learning procedure [1]}, [2]}. However, we observe that pre-training is not always rewarding in a CL setting, especially with small replay memories: the ever-changing stream of data entails large changes in the model parameters, leading to the forgetting of the pre-training.
| i |